Dec  2 05:03:29 np0005542249 kernel: Linux version 5.14.0-645.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025
Dec  2 05:03:29 np0005542249 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec  2 05:03:29 np0005542249 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  2 05:03:29 np0005542249 kernel: BIOS-provided physical RAM map:
Dec  2 05:03:29 np0005542249 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec  2 05:03:29 np0005542249 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec  2 05:03:29 np0005542249 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec  2 05:03:29 np0005542249 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec  2 05:03:29 np0005542249 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec  2 05:03:29 np0005542249 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec  2 05:03:29 np0005542249 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec  2 05:03:29 np0005542249 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec  2 05:03:29 np0005542249 kernel: NX (Execute Disable) protection: active
Dec  2 05:03:29 np0005542249 kernel: APIC: Static calls initialized
Dec  2 05:03:29 np0005542249 kernel: SMBIOS 2.8 present.
Dec  2 05:03:29 np0005542249 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec  2 05:03:29 np0005542249 kernel: Hypervisor detected: KVM
Dec  2 05:03:29 np0005542249 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec  2 05:03:29 np0005542249 kernel: kvm-clock: using sched offset of 3844620693 cycles
Dec  2 05:03:29 np0005542249 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec  2 05:03:29 np0005542249 kernel: tsc: Detected 2799.998 MHz processor
Dec  2 05:03:29 np0005542249 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec  2 05:03:29 np0005542249 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec  2 05:03:29 np0005542249 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec  2 05:03:29 np0005542249 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec  2 05:03:29 np0005542249 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec  2 05:03:29 np0005542249 kernel: Using GB pages for direct mapping
Dec  2 05:03:29 np0005542249 kernel: RAMDISK: [mem 0x2e95d000-0x334a6fff]
Dec  2 05:03:29 np0005542249 kernel: ACPI: Early table checksum verification disabled
Dec  2 05:03:29 np0005542249 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec  2 05:03:29 np0005542249 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  2 05:03:29 np0005542249 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  2 05:03:29 np0005542249 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  2 05:03:29 np0005542249 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec  2 05:03:29 np0005542249 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  2 05:03:29 np0005542249 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  2 05:03:29 np0005542249 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec  2 05:03:29 np0005542249 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec  2 05:03:29 np0005542249 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec  2 05:03:29 np0005542249 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec  2 05:03:29 np0005542249 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec  2 05:03:29 np0005542249 kernel: No NUMA configuration found
Dec  2 05:03:29 np0005542249 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec  2 05:03:29 np0005542249 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec  2 05:03:29 np0005542249 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec  2 05:03:29 np0005542249 kernel: Zone ranges:
Dec  2 05:03:29 np0005542249 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec  2 05:03:29 np0005542249 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec  2 05:03:29 np0005542249 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec  2 05:03:29 np0005542249 kernel:  Device   empty
Dec  2 05:03:29 np0005542249 kernel: Movable zone start for each node
Dec  2 05:03:29 np0005542249 kernel: Early memory node ranges
Dec  2 05:03:29 np0005542249 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec  2 05:03:29 np0005542249 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec  2 05:03:29 np0005542249 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec  2 05:03:29 np0005542249 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec  2 05:03:29 np0005542249 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec  2 05:03:29 np0005542249 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec  2 05:03:29 np0005542249 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec  2 05:03:29 np0005542249 kernel: ACPI: PM-Timer IO Port: 0x608
Dec  2 05:03:29 np0005542249 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec  2 05:03:29 np0005542249 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec  2 05:03:29 np0005542249 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec  2 05:03:29 np0005542249 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec  2 05:03:29 np0005542249 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec  2 05:03:29 np0005542249 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec  2 05:03:29 np0005542249 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec  2 05:03:29 np0005542249 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec  2 05:03:29 np0005542249 kernel: TSC deadline timer available
Dec  2 05:03:29 np0005542249 kernel: CPU topo: Max. logical packages:   8
Dec  2 05:03:29 np0005542249 kernel: CPU topo: Max. logical dies:       8
Dec  2 05:03:29 np0005542249 kernel: CPU topo: Max. dies per package:   1
Dec  2 05:03:29 np0005542249 kernel: CPU topo: Max. threads per core:   1
Dec  2 05:03:29 np0005542249 kernel: CPU topo: Num. cores per package:     1
Dec  2 05:03:29 np0005542249 kernel: CPU topo: Num. threads per package:   1
Dec  2 05:03:29 np0005542249 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec  2 05:03:29 np0005542249 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec  2 05:03:29 np0005542249 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec  2 05:03:29 np0005542249 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec  2 05:03:29 np0005542249 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec  2 05:03:29 np0005542249 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec  2 05:03:29 np0005542249 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec  2 05:03:29 np0005542249 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec  2 05:03:29 np0005542249 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec  2 05:03:29 np0005542249 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec  2 05:03:29 np0005542249 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec  2 05:03:29 np0005542249 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec  2 05:03:29 np0005542249 kernel: Booting paravirtualized kernel on KVM
Dec  2 05:03:29 np0005542249 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec  2 05:03:29 np0005542249 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec  2 05:03:29 np0005542249 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec  2 05:03:29 np0005542249 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec  2 05:03:29 np0005542249 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  2 05:03:29 np0005542249 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64", will be passed to user space.
Dec  2 05:03:29 np0005542249 kernel: random: crng init done
Dec  2 05:03:29 np0005542249 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec  2 05:03:29 np0005542249 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec  2 05:03:29 np0005542249 kernel: Fallback order for Node 0: 0 
Dec  2 05:03:29 np0005542249 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec  2 05:03:29 np0005542249 kernel: Policy zone: Normal
Dec  2 05:03:29 np0005542249 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec  2 05:03:29 np0005542249 kernel: software IO TLB: area num 8.
Dec  2 05:03:29 np0005542249 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec  2 05:03:29 np0005542249 kernel: ftrace: allocating 49335 entries in 193 pages
Dec  2 05:03:29 np0005542249 kernel: ftrace: allocated 193 pages with 3 groups
Dec  2 05:03:29 np0005542249 kernel: Dynamic Preempt: voluntary
Dec  2 05:03:29 np0005542249 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec  2 05:03:29 np0005542249 kernel: rcu: 	RCU event tracing is enabled.
Dec  2 05:03:29 np0005542249 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec  2 05:03:29 np0005542249 kernel: 	Trampoline variant of Tasks RCU enabled.
Dec  2 05:03:29 np0005542249 kernel: 	Rude variant of Tasks RCU enabled.
Dec  2 05:03:29 np0005542249 kernel: 	Tracing variant of Tasks RCU enabled.
Dec  2 05:03:29 np0005542249 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec  2 05:03:29 np0005542249 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec  2 05:03:29 np0005542249 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  2 05:03:29 np0005542249 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  2 05:03:29 np0005542249 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  2 05:03:29 np0005542249 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec  2 05:03:29 np0005542249 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec  2 05:03:29 np0005542249 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec  2 05:03:29 np0005542249 kernel: Console: colour VGA+ 80x25
Dec  2 05:03:29 np0005542249 kernel: printk: console [ttyS0] enabled
Dec  2 05:03:29 np0005542249 kernel: ACPI: Core revision 20230331
Dec  2 05:03:29 np0005542249 kernel: APIC: Switch to symmetric I/O mode setup
Dec  2 05:03:29 np0005542249 kernel: x2apic enabled
Dec  2 05:03:29 np0005542249 kernel: APIC: Switched APIC routing to: physical x2apic
Dec  2 05:03:29 np0005542249 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec  2 05:03:29 np0005542249 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec  2 05:03:29 np0005542249 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec  2 05:03:29 np0005542249 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec  2 05:03:29 np0005542249 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec  2 05:03:29 np0005542249 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec  2 05:03:29 np0005542249 kernel: Spectre V2 : Mitigation: Retpolines
Dec  2 05:03:29 np0005542249 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec  2 05:03:29 np0005542249 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec  2 05:03:29 np0005542249 kernel: RETBleed: Mitigation: untrained return thunk
Dec  2 05:03:29 np0005542249 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec  2 05:03:29 np0005542249 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec  2 05:03:29 np0005542249 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec  2 05:03:29 np0005542249 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec  2 05:03:29 np0005542249 kernel: x86/bugs: return thunk changed
Dec  2 05:03:29 np0005542249 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec  2 05:03:29 np0005542249 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec  2 05:03:29 np0005542249 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec  2 05:03:29 np0005542249 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec  2 05:03:29 np0005542249 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec  2 05:03:29 np0005542249 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec  2 05:03:29 np0005542249 kernel: Freeing SMP alternatives memory: 40K
Dec  2 05:03:29 np0005542249 kernel: pid_max: default: 32768 minimum: 301
Dec  2 05:03:29 np0005542249 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec  2 05:03:29 np0005542249 kernel: landlock: Up and running.
Dec  2 05:03:29 np0005542249 kernel: Yama: becoming mindful.
Dec  2 05:03:29 np0005542249 kernel: SELinux:  Initializing.
Dec  2 05:03:29 np0005542249 kernel: LSM support for eBPF active
Dec  2 05:03:29 np0005542249 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  2 05:03:29 np0005542249 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  2 05:03:29 np0005542249 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec  2 05:03:29 np0005542249 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec  2 05:03:29 np0005542249 kernel: ... version:                0
Dec  2 05:03:29 np0005542249 kernel: ... bit width:              48
Dec  2 05:03:29 np0005542249 kernel: ... generic registers:      6
Dec  2 05:03:29 np0005542249 kernel: ... value mask:             0000ffffffffffff
Dec  2 05:03:29 np0005542249 kernel: ... max period:             00007fffffffffff
Dec  2 05:03:29 np0005542249 kernel: ... fixed-purpose events:   0
Dec  2 05:03:29 np0005542249 kernel: ... event mask:             000000000000003f
Dec  2 05:03:29 np0005542249 kernel: signal: max sigframe size: 1776
Dec  2 05:03:29 np0005542249 kernel: rcu: Hierarchical SRCU implementation.
Dec  2 05:03:29 np0005542249 kernel: rcu: 	Max phase no-delay instances is 400.
Dec  2 05:03:29 np0005542249 kernel: smp: Bringing up secondary CPUs ...
Dec  2 05:03:29 np0005542249 kernel: smpboot: x86: Booting SMP configuration:
Dec  2 05:03:29 np0005542249 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec  2 05:03:29 np0005542249 kernel: smp: Brought up 1 node, 8 CPUs
Dec  2 05:03:29 np0005542249 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec  2 05:03:29 np0005542249 kernel: node 0 deferred pages initialised in 19ms
Dec  2 05:03:29 np0005542249 kernel: Memory: 7774656K/8388068K available (16384K kernel code, 5795K rwdata, 13908K rodata, 4196K init, 7156K bss, 607496K reserved, 0K cma-reserved)
Dec  2 05:03:29 np0005542249 kernel: devtmpfs: initialized
Dec  2 05:03:29 np0005542249 kernel: x86/mm: Memory block size: 128MB
Dec  2 05:03:29 np0005542249 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec  2 05:03:29 np0005542249 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec  2 05:03:29 np0005542249 kernel: pinctrl core: initialized pinctrl subsystem
Dec  2 05:03:29 np0005542249 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec  2 05:03:29 np0005542249 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec  2 05:03:29 np0005542249 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec  2 05:03:29 np0005542249 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec  2 05:03:29 np0005542249 kernel: audit: initializing netlink subsys (disabled)
Dec  2 05:03:29 np0005542249 kernel: audit: type=2000 audit(1764669807.293:1): state=initialized audit_enabled=0 res=1
Dec  2 05:03:29 np0005542249 kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec  2 05:03:29 np0005542249 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec  2 05:03:29 np0005542249 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec  2 05:03:29 np0005542249 kernel: cpuidle: using governor menu
Dec  2 05:03:29 np0005542249 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec  2 05:03:29 np0005542249 kernel: PCI: Using configuration type 1 for base access
Dec  2 05:03:29 np0005542249 kernel: PCI: Using configuration type 1 for extended access
Dec  2 05:03:29 np0005542249 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec  2 05:03:29 np0005542249 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec  2 05:03:29 np0005542249 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec  2 05:03:29 np0005542249 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec  2 05:03:29 np0005542249 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec  2 05:03:29 np0005542249 kernel: Demotion targets for Node 0: null
Dec  2 05:03:29 np0005542249 kernel: cryptd: max_cpu_qlen set to 1000
Dec  2 05:03:29 np0005542249 kernel: ACPI: Added _OSI(Module Device)
Dec  2 05:03:29 np0005542249 kernel: ACPI: Added _OSI(Processor Device)
Dec  2 05:03:29 np0005542249 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec  2 05:03:29 np0005542249 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec  2 05:03:29 np0005542249 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec  2 05:03:29 np0005542249 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec  2 05:03:29 np0005542249 kernel: ACPI: Interpreter enabled
Dec  2 05:03:29 np0005542249 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec  2 05:03:29 np0005542249 kernel: ACPI: Using IOAPIC for interrupt routing
Dec  2 05:03:29 np0005542249 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec  2 05:03:29 np0005542249 kernel: PCI: Using E820 reservations for host bridge windows
Dec  2 05:03:29 np0005542249 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec  2 05:03:29 np0005542249 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec  2 05:03:29 np0005542249 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [3] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [4] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [5] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [6] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [7] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [8] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [9] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [10] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [11] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [12] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [13] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [14] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [15] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [16] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [17] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [18] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [19] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [20] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [21] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [22] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [23] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [24] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [25] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [26] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [27] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [28] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [29] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [30] registered
Dec  2 05:03:29 np0005542249 kernel: acpiphp: Slot [31] registered
Dec  2 05:03:29 np0005542249 kernel: PCI host bridge to bus 0000:00
Dec  2 05:03:29 np0005542249 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec  2 05:03:29 np0005542249 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec  2 05:03:29 np0005542249 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec  2 05:03:29 np0005542249 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec  2 05:03:29 np0005542249 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec  2 05:03:29 np0005542249 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec  2 05:03:29 np0005542249 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec  2 05:03:29 np0005542249 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec  2 05:03:29 np0005542249 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec  2 05:03:29 np0005542249 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec  2 05:03:29 np0005542249 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec  2 05:03:29 np0005542249 kernel: iommu: Default domain type: Translated
Dec  2 05:03:29 np0005542249 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec  2 05:03:29 np0005542249 kernel: SCSI subsystem initialized
Dec  2 05:03:29 np0005542249 kernel: ACPI: bus type USB registered
Dec  2 05:03:29 np0005542249 kernel: usbcore: registered new interface driver usbfs
Dec  2 05:03:29 np0005542249 kernel: usbcore: registered new interface driver hub
Dec  2 05:03:29 np0005542249 kernel: usbcore: registered new device driver usb
Dec  2 05:03:29 np0005542249 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec  2 05:03:29 np0005542249 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec  2 05:03:29 np0005542249 kernel: PTP clock support registered
Dec  2 05:03:29 np0005542249 kernel: EDAC MC: Ver: 3.0.0
Dec  2 05:03:29 np0005542249 kernel: NetLabel: Initializing
Dec  2 05:03:29 np0005542249 kernel: NetLabel:  domain hash size = 128
Dec  2 05:03:29 np0005542249 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec  2 05:03:29 np0005542249 kernel: NetLabel:  unlabeled traffic allowed by default
Dec  2 05:03:29 np0005542249 kernel: PCI: Using ACPI for IRQ routing
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec  2 05:03:29 np0005542249 kernel: vgaarb: loaded
Dec  2 05:03:29 np0005542249 kernel: clocksource: Switched to clocksource kvm-clock
Dec  2 05:03:29 np0005542249 kernel: VFS: Disk quotas dquot_6.6.0
Dec  2 05:03:29 np0005542249 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec  2 05:03:29 np0005542249 kernel: pnp: PnP ACPI init
Dec  2 05:03:29 np0005542249 kernel: pnp: PnP ACPI: found 5 devices
Dec  2 05:03:29 np0005542249 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec  2 05:03:29 np0005542249 kernel: NET: Registered PF_INET protocol family
Dec  2 05:03:29 np0005542249 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec  2 05:03:29 np0005542249 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec  2 05:03:29 np0005542249 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec  2 05:03:29 np0005542249 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec  2 05:03:29 np0005542249 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec  2 05:03:29 np0005542249 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec  2 05:03:29 np0005542249 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec  2 05:03:29 np0005542249 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  2 05:03:29 np0005542249 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  2 05:03:29 np0005542249 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec  2 05:03:29 np0005542249 kernel: NET: Registered PF_XDP protocol family
Dec  2 05:03:29 np0005542249 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec  2 05:03:29 np0005542249 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec  2 05:03:29 np0005542249 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec  2 05:03:29 np0005542249 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec  2 05:03:29 np0005542249 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec  2 05:03:29 np0005542249 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec  2 05:03:29 np0005542249 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 79124 usecs
Dec  2 05:03:29 np0005542249 kernel: PCI: CLS 0 bytes, default 64
Dec  2 05:03:29 np0005542249 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec  2 05:03:29 np0005542249 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec  2 05:03:29 np0005542249 kernel: ACPI: bus type thunderbolt registered
Dec  2 05:03:29 np0005542249 kernel: Trying to unpack rootfs image as initramfs...
Dec  2 05:03:29 np0005542249 kernel: Initialise system trusted keyrings
Dec  2 05:03:29 np0005542249 kernel: Key type blacklist registered
Dec  2 05:03:29 np0005542249 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec  2 05:03:29 np0005542249 kernel: zbud: loaded
Dec  2 05:03:29 np0005542249 kernel: integrity: Platform Keyring initialized
Dec  2 05:03:29 np0005542249 kernel: integrity: Machine keyring initialized
Dec  2 05:03:29 np0005542249 kernel: Freeing initrd memory: 77096K
Dec  2 05:03:29 np0005542249 kernel: NET: Registered PF_ALG protocol family
Dec  2 05:03:29 np0005542249 kernel: xor: automatically using best checksumming function   avx       
Dec  2 05:03:29 np0005542249 kernel: Key type asymmetric registered
Dec  2 05:03:29 np0005542249 kernel: Asymmetric key parser 'x509' registered
Dec  2 05:03:29 np0005542249 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec  2 05:03:29 np0005542249 kernel: io scheduler mq-deadline registered
Dec  2 05:03:29 np0005542249 kernel: io scheduler kyber registered
Dec  2 05:03:29 np0005542249 kernel: io scheduler bfq registered
Dec  2 05:03:29 np0005542249 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec  2 05:03:29 np0005542249 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec  2 05:03:29 np0005542249 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec  2 05:03:29 np0005542249 kernel: ACPI: button: Power Button [PWRF]
Dec  2 05:03:29 np0005542249 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec  2 05:03:29 np0005542249 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec  2 05:03:29 np0005542249 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec  2 05:03:29 np0005542249 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec  2 05:03:29 np0005542249 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec  2 05:03:29 np0005542249 kernel: Non-volatile memory driver v1.3
Dec  2 05:03:29 np0005542249 kernel: rdac: device handler registered
Dec  2 05:03:29 np0005542249 kernel: hp_sw: device handler registered
Dec  2 05:03:29 np0005542249 kernel: emc: device handler registered
Dec  2 05:03:29 np0005542249 kernel: alua: device handler registered
Dec  2 05:03:29 np0005542249 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec  2 05:03:29 np0005542249 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec  2 05:03:29 np0005542249 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec  2 05:03:29 np0005542249 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec  2 05:03:29 np0005542249 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec  2 05:03:29 np0005542249 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec  2 05:03:29 np0005542249 kernel: usb usb1: Product: UHCI Host Controller
Dec  2 05:03:29 np0005542249 kernel: usb usb1: Manufacturer: Linux 5.14.0-645.el9.x86_64 uhci_hcd
Dec  2 05:03:29 np0005542249 kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec  2 05:03:29 np0005542249 kernel: hub 1-0:1.0: USB hub found
Dec  2 05:03:29 np0005542249 kernel: hub 1-0:1.0: 2 ports detected
Dec  2 05:03:29 np0005542249 kernel: usbcore: registered new interface driver usbserial_generic
Dec  2 05:03:29 np0005542249 kernel: usbserial: USB Serial support registered for generic
Dec  2 05:03:29 np0005542249 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec  2 05:03:29 np0005542249 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec  2 05:03:29 np0005542249 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec  2 05:03:29 np0005542249 kernel: mousedev: PS/2 mouse device common for all mice
Dec  2 05:03:29 np0005542249 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec  2 05:03:29 np0005542249 kernel: rtc_cmos 00:04: registered as rtc0
Dec  2 05:03:29 np0005542249 kernel: rtc_cmos 00:04: setting system clock to 2025-12-02T10:03:28 UTC (1764669808)
Dec  2 05:03:29 np0005542249 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec  2 05:03:29 np0005542249 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec  2 05:03:29 np0005542249 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec  2 05:03:29 np0005542249 kernel: usbcore: registered new interface driver usbhid
Dec  2 05:03:29 np0005542249 kernel: usbhid: USB HID core driver
Dec  2 05:03:29 np0005542249 kernel: drop_monitor: Initializing network drop monitor service
Dec  2 05:03:29 np0005542249 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec  2 05:03:29 np0005542249 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec  2 05:03:29 np0005542249 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec  2 05:03:29 np0005542249 kernel: Initializing XFRM netlink socket
Dec  2 05:03:29 np0005542249 kernel: NET: Registered PF_INET6 protocol family
Dec  2 05:03:29 np0005542249 kernel: Segment Routing with IPv6
Dec  2 05:03:29 np0005542249 kernel: NET: Registered PF_PACKET protocol family
Dec  2 05:03:29 np0005542249 kernel: mpls_gso: MPLS GSO support
Dec  2 05:03:29 np0005542249 kernel: IPI shorthand broadcast: enabled
Dec  2 05:03:29 np0005542249 kernel: AVX2 version of gcm_enc/dec engaged.
Dec  2 05:03:29 np0005542249 kernel: AES CTR mode by8 optimization enabled
Dec  2 05:03:29 np0005542249 kernel: sched_clock: Marking stable (1519002733, 160886605)->(1833934380, -154045042)
Dec  2 05:03:29 np0005542249 kernel: registered taskstats version 1
Dec  2 05:03:29 np0005542249 kernel: Loading compiled-in X.509 certificates
Dec  2 05:03:29 np0005542249 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec  2 05:03:29 np0005542249 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec  2 05:03:29 np0005542249 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec  2 05:03:29 np0005542249 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec  2 05:03:29 np0005542249 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec  2 05:03:29 np0005542249 kernel: Demotion targets for Node 0: null
Dec  2 05:03:29 np0005542249 kernel: page_owner is disabled
Dec  2 05:03:29 np0005542249 kernel: Key type .fscrypt registered
Dec  2 05:03:29 np0005542249 kernel: Key type fscrypt-provisioning registered
Dec  2 05:03:29 np0005542249 kernel: Key type big_key registered
Dec  2 05:03:29 np0005542249 kernel: Key type encrypted registered
Dec  2 05:03:29 np0005542249 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec  2 05:03:29 np0005542249 kernel: Loading compiled-in module X.509 certificates
Dec  2 05:03:29 np0005542249 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec  2 05:03:29 np0005542249 kernel: ima: Allocated hash algorithm: sha256
Dec  2 05:03:29 np0005542249 kernel: ima: No architecture policies found
Dec  2 05:03:29 np0005542249 kernel: evm: Initialising EVM extended attributes:
Dec  2 05:03:29 np0005542249 kernel: evm: security.selinux
Dec  2 05:03:29 np0005542249 kernel: evm: security.SMACK64 (disabled)
Dec  2 05:03:29 np0005542249 kernel: evm: security.SMACK64EXEC (disabled)
Dec  2 05:03:29 np0005542249 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec  2 05:03:29 np0005542249 kernel: evm: security.SMACK64MMAP (disabled)
Dec  2 05:03:29 np0005542249 kernel: evm: security.apparmor (disabled)
Dec  2 05:03:29 np0005542249 kernel: evm: security.ima
Dec  2 05:03:29 np0005542249 kernel: evm: security.capability
Dec  2 05:03:29 np0005542249 kernel: evm: HMAC attrs: 0x1
Dec  2 05:03:29 np0005542249 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec  2 05:03:29 np0005542249 kernel: Running certificate verification RSA selftest
Dec  2 05:03:29 np0005542249 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec  2 05:03:29 np0005542249 kernel: Running certificate verification ECDSA selftest
Dec  2 05:03:29 np0005542249 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec  2 05:03:29 np0005542249 kernel: clk: Disabling unused clocks
Dec  2 05:03:29 np0005542249 kernel: Freeing unused decrypted memory: 2028K
Dec  2 05:03:29 np0005542249 kernel: Freeing unused kernel image (initmem) memory: 4196K
Dec  2 05:03:29 np0005542249 kernel: Write protecting the kernel read-only data: 30720k
Dec  2 05:03:29 np0005542249 kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Dec  2 05:03:29 np0005542249 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec  2 05:03:29 np0005542249 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec  2 05:03:29 np0005542249 kernel: usb 1-1: Product: QEMU USB Tablet
Dec  2 05:03:29 np0005542249 kernel: usb 1-1: Manufacturer: QEMU
Dec  2 05:03:29 np0005542249 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec  2 05:03:29 np0005542249 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec  2 05:03:29 np0005542249 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec  2 05:03:29 np0005542249 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec  2 05:03:29 np0005542249 kernel: Run /init as init process
Dec  2 05:03:29 np0005542249 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  2 05:03:29 np0005542249 systemd: Detected virtualization kvm.
Dec  2 05:03:29 np0005542249 systemd: Detected architecture x86-64.
Dec  2 05:03:29 np0005542249 systemd: Running in initrd.
Dec  2 05:03:29 np0005542249 systemd: No hostname configured, using default hostname.
Dec  2 05:03:29 np0005542249 systemd: Hostname set to <localhost>.
Dec  2 05:03:29 np0005542249 systemd: Initializing machine ID from VM UUID.
Dec  2 05:03:29 np0005542249 systemd: Queued start job for default target Initrd Default Target.
Dec  2 05:03:29 np0005542249 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  2 05:03:29 np0005542249 systemd: Reached target Local Encrypted Volumes.
Dec  2 05:03:29 np0005542249 systemd: Reached target Initrd /usr File System.
Dec  2 05:03:29 np0005542249 systemd: Reached target Local File Systems.
Dec  2 05:03:29 np0005542249 systemd: Reached target Path Units.
Dec  2 05:03:29 np0005542249 systemd: Reached target Slice Units.
Dec  2 05:03:29 np0005542249 systemd: Reached target Swaps.
Dec  2 05:03:29 np0005542249 systemd: Reached target Timer Units.
Dec  2 05:03:29 np0005542249 systemd: Listening on D-Bus System Message Bus Socket.
Dec  2 05:03:29 np0005542249 systemd: Listening on Journal Socket (/dev/log).
Dec  2 05:03:29 np0005542249 systemd: Listening on Journal Socket.
Dec  2 05:03:29 np0005542249 systemd: Listening on udev Control Socket.
Dec  2 05:03:29 np0005542249 systemd: Listening on udev Kernel Socket.
Dec  2 05:03:29 np0005542249 systemd: Reached target Socket Units.
Dec  2 05:03:29 np0005542249 systemd: Starting Create List of Static Device Nodes...
Dec  2 05:03:29 np0005542249 systemd: Starting Journal Service...
Dec  2 05:03:29 np0005542249 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  2 05:03:29 np0005542249 systemd: Starting Apply Kernel Variables...
Dec  2 05:03:29 np0005542249 systemd: Starting Create System Users...
Dec  2 05:03:29 np0005542249 systemd: Starting Setup Virtual Console...
Dec  2 05:03:29 np0005542249 systemd: Finished Create List of Static Device Nodes.
Dec  2 05:03:29 np0005542249 systemd: Finished Apply Kernel Variables.
Dec  2 05:03:29 np0005542249 systemd: Finished Create System Users.
Dec  2 05:03:29 np0005542249 systemd-journald[305]: Journal started
Dec  2 05:03:29 np0005542249 systemd-journald[305]: Runtime Journal (/run/log/journal/b5d8029ebce443989c24ad4d219021cb) is 8.0M, max 153.6M, 145.6M free.
Dec  2 05:03:29 np0005542249 systemd-sysusers[310]: Creating group 'users' with GID 100.
Dec  2 05:03:29 np0005542249 systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Dec  2 05:03:29 np0005542249 systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec  2 05:03:29 np0005542249 systemd: Started Journal Service.
Dec  2 05:03:29 np0005542249 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  2 05:03:29 np0005542249 systemd[1]: Starting Create Volatile Files and Directories...
Dec  2 05:03:29 np0005542249 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  2 05:03:29 np0005542249 systemd[1]: Finished Create Volatile Files and Directories.
Dec  2 05:03:29 np0005542249 systemd[1]: Finished Setup Virtual Console.
Dec  2 05:03:29 np0005542249 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec  2 05:03:29 np0005542249 systemd[1]: Starting dracut cmdline hook...
Dec  2 05:03:29 np0005542249 dracut-cmdline[326]: dracut-9 dracut-057-102.git20250818.el9
Dec  2 05:03:29 np0005542249 dracut-cmdline[326]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  2 05:03:29 np0005542249 systemd[1]: Finished dracut cmdline hook.
Dec  2 05:03:29 np0005542249 systemd[1]: Starting dracut pre-udev hook...
Dec  2 05:03:29 np0005542249 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec  2 05:03:29 np0005542249 kernel: device-mapper: uevent: version 1.0.3
Dec  2 05:03:29 np0005542249 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec  2 05:03:29 np0005542249 kernel: RPC: Registered named UNIX socket transport module.
Dec  2 05:03:29 np0005542249 kernel: RPC: Registered udp transport module.
Dec  2 05:03:29 np0005542249 kernel: RPC: Registered tcp transport module.
Dec  2 05:03:29 np0005542249 kernel: RPC: Registered tcp-with-tls transport module.
Dec  2 05:03:29 np0005542249 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec  2 05:03:29 np0005542249 rpc.statd[442]: Version 2.5.4 starting
Dec  2 05:03:29 np0005542249 rpc.statd[442]: Initializing NSM state
Dec  2 05:03:29 np0005542249 rpc.idmapd[447]: Setting log level to 0
Dec  2 05:03:29 np0005542249 systemd[1]: Finished dracut pre-udev hook.
Dec  2 05:03:29 np0005542249 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  2 05:03:29 np0005542249 systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Dec  2 05:03:29 np0005542249 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  2 05:03:29 np0005542249 systemd[1]: Starting dracut pre-trigger hook...
Dec  2 05:03:29 np0005542249 systemd[1]: Finished dracut pre-trigger hook.
Dec  2 05:03:29 np0005542249 systemd[1]: Starting Coldplug All udev Devices...
Dec  2 05:03:29 np0005542249 systemd[1]: Created slice Slice /system/modprobe.
Dec  2 05:03:29 np0005542249 systemd[1]: Starting Load Kernel Module configfs...
Dec  2 05:03:29 np0005542249 systemd[1]: Finished Coldplug All udev Devices.
Dec  2 05:03:29 np0005542249 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  2 05:03:29 np0005542249 systemd[1]: Finished Load Kernel Module configfs.
Dec  2 05:03:29 np0005542249 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  2 05:03:29 np0005542249 systemd[1]: Reached target Network.
Dec  2 05:03:29 np0005542249 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  2 05:03:29 np0005542249 systemd[1]: Starting dracut initqueue hook...
Dec  2 05:03:30 np0005542249 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec  2 05:03:30 np0005542249 kernel: scsi host0: ata_piix
Dec  2 05:03:30 np0005542249 systemd-udevd[481]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 05:03:30 np0005542249 kernel: scsi host1: ata_piix
Dec  2 05:03:30 np0005542249 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec  2 05:03:30 np0005542249 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec  2 05:03:30 np0005542249 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec  2 05:03:30 np0005542249 kernel: vda: vda1
Dec  2 05:03:30 np0005542249 systemd[1]: Mounting Kernel Configuration File System...
Dec  2 05:03:30 np0005542249 systemd[1]: Mounted Kernel Configuration File System.
Dec  2 05:03:30 np0005542249 systemd[1]: Reached target System Initialization.
Dec  2 05:03:30 np0005542249 systemd[1]: Reached target Basic System.
Dec  2 05:03:30 np0005542249 kernel: ata1: found unknown device (class 0)
Dec  2 05:03:30 np0005542249 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec  2 05:03:30 np0005542249 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec  2 05:03:30 np0005542249 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec  2 05:03:30 np0005542249 systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Dec  2 05:03:30 np0005542249 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec  2 05:03:30 np0005542249 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec  2 05:03:30 np0005542249 systemd[1]: Reached target Initrd Root Device.
Dec  2 05:03:30 np0005542249 systemd[1]: Finished dracut initqueue hook.
Dec  2 05:03:30 np0005542249 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  2 05:03:30 np0005542249 systemd[1]: Reached target Remote Encrypted Volumes.
Dec  2 05:03:30 np0005542249 systemd[1]: Reached target Remote File Systems.
Dec  2 05:03:30 np0005542249 systemd[1]: Starting dracut pre-mount hook...
Dec  2 05:03:30 np0005542249 systemd[1]: Finished dracut pre-mount hook.
Dec  2 05:03:30 np0005542249 systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Dec  2 05:03:30 np0005542249 systemd-fsck[554]: /usr/sbin/fsck.xfs: XFS file system.
Dec  2 05:03:30 np0005542249 systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Dec  2 05:03:30 np0005542249 systemd[1]: Mounting /sysroot...
Dec  2 05:03:31 np0005542249 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec  2 05:03:31 np0005542249 kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Dec  2 05:03:31 np0005542249 kernel: XFS (vda1): Ending clean mount
Dec  2 05:03:31 np0005542249 systemd[1]: Mounted /sysroot.
Dec  2 05:03:31 np0005542249 systemd[1]: Reached target Initrd Root File System.
Dec  2 05:03:31 np0005542249 systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec  2 05:03:31 np0005542249 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec  2 05:03:31 np0005542249 systemd[1]: Reached target Initrd File Systems.
Dec  2 05:03:31 np0005542249 systemd[1]: Reached target Initrd Default Target.
Dec  2 05:03:31 np0005542249 systemd[1]: Starting dracut mount hook...
Dec  2 05:03:31 np0005542249 systemd[1]: Finished dracut mount hook.
Dec  2 05:03:31 np0005542249 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec  2 05:03:31 np0005542249 rpc.idmapd[447]: exiting on signal 15
Dec  2 05:03:31 np0005542249 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec  2 05:03:31 np0005542249 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped target Network.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped target Remote Encrypted Volumes.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped target Timer Units.
Dec  2 05:03:31 np0005542249 systemd[1]: dbus.socket: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: Closed D-Bus System Message Bus Socket.
Dec  2 05:03:31 np0005542249 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped target Initrd Default Target.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped target Basic System.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped target Initrd Root Device.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped target Initrd /usr File System.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped target Path Units.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped target Remote File Systems.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped target Preparation for Remote File Systems.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped target Slice Units.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped target Socket Units.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped target System Initialization.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped target Local File Systems.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped target Swaps.
Dec  2 05:03:31 np0005542249 systemd[1]: dracut-mount.service: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped dracut mount hook.
Dec  2 05:03:31 np0005542249 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped dracut pre-mount hook.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped target Local Encrypted Volumes.
Dec  2 05:03:31 np0005542249 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec  2 05:03:31 np0005542249 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped dracut initqueue hook.
Dec  2 05:03:31 np0005542249 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped Apply Kernel Variables.
Dec  2 05:03:31 np0005542249 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped Create Volatile Files and Directories.
Dec  2 05:03:31 np0005542249 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped Coldplug All udev Devices.
Dec  2 05:03:31 np0005542249 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped dracut pre-trigger hook.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec  2 05:03:31 np0005542249 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped Setup Virtual Console.
Dec  2 05:03:31 np0005542249 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec  2 05:03:31 np0005542249 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec  2 05:03:31 np0005542249 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: Closed udev Control Socket.
Dec  2 05:03:31 np0005542249 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: Closed udev Kernel Socket.
Dec  2 05:03:31 np0005542249 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped dracut pre-udev hook.
Dec  2 05:03:31 np0005542249 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped dracut cmdline hook.
Dec  2 05:03:31 np0005542249 systemd[1]: Starting Cleanup udev Database...
Dec  2 05:03:31 np0005542249 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec  2 05:03:31 np0005542249 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped Create List of Static Device Nodes.
Dec  2 05:03:31 np0005542249 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: Stopped Create System Users.
Dec  2 05:03:31 np0005542249 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd[1]: Finished Cleanup udev Database.
Dec  2 05:03:31 np0005542249 systemd[1]: Reached target Switch Root.
Dec  2 05:03:31 np0005542249 systemd[1]: Starting Switch Root...
Dec  2 05:03:31 np0005542249 systemd[1]: Switching root.
Dec  2 05:03:31 np0005542249 systemd-journald[305]: Journal stopped
Dec  2 05:03:31 np0005542249 systemd-journald: Received SIGTERM from PID 1 (systemd).
Dec  2 05:03:31 np0005542249 kernel: audit: type=1404 audit(1764669811.412:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec  2 05:03:31 np0005542249 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 05:03:31 np0005542249 kernel: SELinux:  policy capability open_perms=1
Dec  2 05:03:31 np0005542249 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 05:03:31 np0005542249 kernel: SELinux:  policy capability always_check_network=0
Dec  2 05:03:31 np0005542249 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 05:03:31 np0005542249 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 05:03:31 np0005542249 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 05:03:31 np0005542249 kernel: audit: type=1403 audit(1764669811.541:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec  2 05:03:31 np0005542249 systemd: Successfully loaded SELinux policy in 132.613ms.
Dec  2 05:03:31 np0005542249 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.579ms.
Dec  2 05:03:31 np0005542249 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  2 05:03:31 np0005542249 systemd: Detected virtualization kvm.
Dec  2 05:03:31 np0005542249 systemd: Detected architecture x86-64.
Dec  2 05:03:31 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:03:31 np0005542249 systemd: initrd-switch-root.service: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd: Stopped Switch Root.
Dec  2 05:03:31 np0005542249 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec  2 05:03:31 np0005542249 systemd: Created slice Slice /system/getty.
Dec  2 05:03:31 np0005542249 systemd: Created slice Slice /system/serial-getty.
Dec  2 05:03:31 np0005542249 systemd: Created slice Slice /system/sshd-keygen.
Dec  2 05:03:31 np0005542249 systemd: Created slice User and Session Slice.
Dec  2 05:03:31 np0005542249 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  2 05:03:31 np0005542249 systemd: Started Forward Password Requests to Wall Directory Watch.
Dec  2 05:03:31 np0005542249 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec  2 05:03:31 np0005542249 systemd: Reached target Local Encrypted Volumes.
Dec  2 05:03:31 np0005542249 systemd: Stopped target Switch Root.
Dec  2 05:03:31 np0005542249 systemd: Stopped target Initrd File Systems.
Dec  2 05:03:31 np0005542249 systemd: Stopped target Initrd Root File System.
Dec  2 05:03:31 np0005542249 systemd: Reached target Local Integrity Protected Volumes.
Dec  2 05:03:31 np0005542249 systemd: Reached target Path Units.
Dec  2 05:03:31 np0005542249 systemd: Reached target rpc_pipefs.target.
Dec  2 05:03:31 np0005542249 systemd: Reached target Slice Units.
Dec  2 05:03:31 np0005542249 systemd: Reached target Swaps.
Dec  2 05:03:31 np0005542249 systemd: Reached target Local Verity Protected Volumes.
Dec  2 05:03:31 np0005542249 systemd: Listening on RPCbind Server Activation Socket.
Dec  2 05:03:31 np0005542249 systemd: Reached target RPC Port Mapper.
Dec  2 05:03:31 np0005542249 systemd: Listening on Process Core Dump Socket.
Dec  2 05:03:31 np0005542249 systemd: Listening on initctl Compatibility Named Pipe.
Dec  2 05:03:31 np0005542249 systemd: Listening on udev Control Socket.
Dec  2 05:03:31 np0005542249 systemd: Listening on udev Kernel Socket.
Dec  2 05:03:31 np0005542249 systemd: Mounting Huge Pages File System...
Dec  2 05:03:31 np0005542249 systemd: Mounting POSIX Message Queue File System...
Dec  2 05:03:31 np0005542249 systemd: Mounting Kernel Debug File System...
Dec  2 05:03:31 np0005542249 systemd: Mounting Kernel Trace File System...
Dec  2 05:03:31 np0005542249 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  2 05:03:31 np0005542249 systemd: Starting Create List of Static Device Nodes...
Dec  2 05:03:31 np0005542249 systemd: Starting Load Kernel Module configfs...
Dec  2 05:03:31 np0005542249 systemd: Starting Load Kernel Module drm...
Dec  2 05:03:31 np0005542249 systemd: Starting Load Kernel Module efi_pstore...
Dec  2 05:03:31 np0005542249 systemd: Starting Load Kernel Module fuse...
Dec  2 05:03:31 np0005542249 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec  2 05:03:31 np0005542249 systemd: systemd-fsck-root.service: Deactivated successfully.
Dec  2 05:03:31 np0005542249 systemd: Stopped File System Check on Root Device.
Dec  2 05:03:31 np0005542249 systemd: Stopped Journal Service.
Dec  2 05:03:31 np0005542249 systemd: Starting Journal Service...
Dec  2 05:03:31 np0005542249 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  2 05:03:31 np0005542249 systemd: Starting Generate network units from Kernel command line...
Dec  2 05:03:31 np0005542249 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  2 05:03:31 np0005542249 systemd: Starting Remount Root and Kernel File Systems...
Dec  2 05:03:31 np0005542249 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec  2 05:03:31 np0005542249 systemd: Starting Apply Kernel Variables...
Dec  2 05:03:31 np0005542249 kernel: fuse: init (API version 7.37)
Dec  2 05:03:31 np0005542249 systemd: Starting Coldplug All udev Devices...
Dec  2 05:03:31 np0005542249 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec  2 05:03:31 np0005542249 systemd: Mounted Huge Pages File System.
Dec  2 05:03:31 np0005542249 systemd: Mounted POSIX Message Queue File System.
Dec  2 05:03:31 np0005542249 systemd: Mounted Kernel Debug File System.
Dec  2 05:03:31 np0005542249 systemd: Mounted Kernel Trace File System.
Dec  2 05:03:31 np0005542249 systemd: Finished Create List of Static Device Nodes.
Dec  2 05:03:31 np0005542249 systemd-journald[679]: Journal started
Dec  2 05:03:31 np0005542249 systemd-journald[679]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Dec  2 05:03:31 np0005542249 systemd[1]: Queued start job for default target Multi-User System.
Dec  2 05:03:31 np0005542249 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec  2 05:03:32 np0005542249 systemd: Started Journal Service.
Dec  2 05:03:32 np0005542249 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  2 05:03:32 np0005542249 systemd[1]: Finished Load Kernel Module configfs.
Dec  2 05:03:32 np0005542249 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec  2 05:03:32 np0005542249 systemd[1]: Finished Load Kernel Module efi_pstore.
Dec  2 05:03:32 np0005542249 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec  2 05:03:32 np0005542249 systemd[1]: Finished Load Kernel Module fuse.
Dec  2 05:03:32 np0005542249 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec  2 05:03:32 np0005542249 systemd[1]: Finished Generate network units from Kernel command line.
Dec  2 05:03:32 np0005542249 systemd[1]: Finished Remount Root and Kernel File Systems.
Dec  2 05:03:32 np0005542249 systemd[1]: Finished Apply Kernel Variables.
Dec  2 05:03:32 np0005542249 kernel: ACPI: bus type drm_connector registered
Dec  2 05:03:32 np0005542249 systemd[1]: Mounting FUSE Control File System...
Dec  2 05:03:32 np0005542249 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  2 05:03:32 np0005542249 systemd[1]: Starting Rebuild Hardware Database...
Dec  2 05:03:32 np0005542249 systemd[1]: Starting Flush Journal to Persistent Storage...
Dec  2 05:03:32 np0005542249 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec  2 05:03:32 np0005542249 systemd[1]: Starting Load/Save OS Random Seed...
Dec  2 05:03:32 np0005542249 systemd[1]: Starting Create System Users...
Dec  2 05:03:32 np0005542249 systemd-journald[679]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Dec  2 05:03:32 np0005542249 systemd-journald[679]: Received client request to flush runtime journal.
Dec  2 05:03:32 np0005542249 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec  2 05:03:32 np0005542249 systemd[1]: Finished Load Kernel Module drm.
Dec  2 05:03:32 np0005542249 systemd[1]: Mounted FUSE Control File System.
Dec  2 05:03:32 np0005542249 systemd[1]: Finished Flush Journal to Persistent Storage.
Dec  2 05:03:32 np0005542249 systemd[1]: Finished Load/Save OS Random Seed.
Dec  2 05:03:32 np0005542249 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  2 05:03:32 np0005542249 systemd[1]: Finished Create System Users.
Dec  2 05:03:32 np0005542249 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  2 05:03:32 np0005542249 systemd[1]: Finished Coldplug All udev Devices.
Dec  2 05:03:32 np0005542249 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  2 05:03:32 np0005542249 systemd[1]: Reached target Preparation for Local File Systems.
Dec  2 05:03:32 np0005542249 systemd[1]: Reached target Local File Systems.
Dec  2 05:03:32 np0005542249 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec  2 05:03:32 np0005542249 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec  2 05:03:32 np0005542249 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec  2 05:03:32 np0005542249 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec  2 05:03:32 np0005542249 systemd[1]: Starting Automatic Boot Loader Update...
Dec  2 05:03:32 np0005542249 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec  2 05:03:32 np0005542249 systemd[1]: Starting Create Volatile Files and Directories...
Dec  2 05:03:32 np0005542249 bootctl[696]: Couldn't find EFI system partition, skipping.
Dec  2 05:03:32 np0005542249 systemd[1]: Finished Automatic Boot Loader Update.
Dec  2 05:03:32 np0005542249 systemd[1]: Finished Create Volatile Files and Directories.
Dec  2 05:03:32 np0005542249 systemd[1]: Starting Security Auditing Service...
Dec  2 05:03:32 np0005542249 systemd[1]: Starting RPC Bind...
Dec  2 05:03:32 np0005542249 systemd[1]: Starting Rebuild Journal Catalog...
Dec  2 05:03:32 np0005542249 auditd[702]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec  2 05:03:32 np0005542249 auditd[702]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec  2 05:03:32 np0005542249 systemd[1]: Finished Rebuild Journal Catalog.
Dec  2 05:03:32 np0005542249 systemd[1]: Started RPC Bind.
Dec  2 05:03:32 np0005542249 augenrules[707]: /sbin/augenrules: No change
Dec  2 05:03:32 np0005542249 augenrules[722]: No rules
Dec  2 05:03:32 np0005542249 augenrules[722]: enabled 1
Dec  2 05:03:32 np0005542249 augenrules[722]: failure 1
Dec  2 05:03:32 np0005542249 augenrules[722]: pid 702
Dec  2 05:03:32 np0005542249 augenrules[722]: rate_limit 0
Dec  2 05:03:32 np0005542249 augenrules[722]: backlog_limit 8192
Dec  2 05:03:32 np0005542249 augenrules[722]: lost 0
Dec  2 05:03:32 np0005542249 augenrules[722]: backlog 0
Dec  2 05:03:32 np0005542249 augenrules[722]: backlog_wait_time 60000
Dec  2 05:03:32 np0005542249 augenrules[722]: backlog_wait_time_actual 0
Dec  2 05:03:32 np0005542249 augenrules[722]: enabled 1
Dec  2 05:03:32 np0005542249 augenrules[722]: failure 1
Dec  2 05:03:32 np0005542249 augenrules[722]: pid 702
Dec  2 05:03:32 np0005542249 augenrules[722]: rate_limit 0
Dec  2 05:03:32 np0005542249 augenrules[722]: backlog_limit 8192
Dec  2 05:03:32 np0005542249 augenrules[722]: lost 0
Dec  2 05:03:32 np0005542249 augenrules[722]: backlog 2
Dec  2 05:03:32 np0005542249 augenrules[722]: backlog_wait_time 60000
Dec  2 05:03:32 np0005542249 augenrules[722]: backlog_wait_time_actual 0
Dec  2 05:03:32 np0005542249 augenrules[722]: enabled 1
Dec  2 05:03:32 np0005542249 augenrules[722]: failure 1
Dec  2 05:03:32 np0005542249 augenrules[722]: pid 702
Dec  2 05:03:32 np0005542249 augenrules[722]: rate_limit 0
Dec  2 05:03:32 np0005542249 augenrules[722]: backlog_limit 8192
Dec  2 05:03:32 np0005542249 augenrules[722]: lost 0
Dec  2 05:03:32 np0005542249 augenrules[722]: backlog 0
Dec  2 05:03:32 np0005542249 augenrules[722]: backlog_wait_time 60000
Dec  2 05:03:32 np0005542249 augenrules[722]: backlog_wait_time_actual 0
Dec  2 05:03:32 np0005542249 systemd[1]: Started Security Auditing Service.
Dec  2 05:03:32 np0005542249 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec  2 05:03:32 np0005542249 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec  2 05:03:32 np0005542249 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec  2 05:03:32 np0005542249 systemd[1]: Finished Rebuild Hardware Database.
Dec  2 05:03:32 np0005542249 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  2 05:03:32 np0005542249 systemd[1]: Starting Update is Completed...
Dec  2 05:03:32 np0005542249 systemd[1]: Finished Update is Completed.
Dec  2 05:03:32 np0005542249 systemd-udevd[730]: Using default interface naming scheme 'rhel-9.0'.
Dec  2 05:03:32 np0005542249 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  2 05:03:32 np0005542249 systemd[1]: Reached target System Initialization.
Dec  2 05:03:32 np0005542249 systemd[1]: Started dnf makecache --timer.
Dec  2 05:03:32 np0005542249 systemd[1]: Started Daily rotation of log files.
Dec  2 05:03:32 np0005542249 systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec  2 05:03:32 np0005542249 systemd[1]: Reached target Timer Units.
Dec  2 05:03:32 np0005542249 systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec  2 05:03:32 np0005542249 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec  2 05:03:32 np0005542249 systemd[1]: Reached target Socket Units.
Dec  2 05:03:32 np0005542249 systemd[1]: Starting D-Bus System Message Bus...
Dec  2 05:03:32 np0005542249 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  2 05:03:32 np0005542249 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec  2 05:03:32 np0005542249 systemd[1]: Starting Load Kernel Module configfs...
Dec  2 05:03:32 np0005542249 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  2 05:03:32 np0005542249 systemd[1]: Finished Load Kernel Module configfs.
Dec  2 05:03:32 np0005542249 systemd-udevd[752]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 05:03:32 np0005542249 systemd[1]: Started D-Bus System Message Bus.
Dec  2 05:03:32 np0005542249 systemd[1]: Reached target Basic System.
Dec  2 05:03:32 np0005542249 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec  2 05:03:32 np0005542249 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec  2 05:03:32 np0005542249 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec  2 05:03:32 np0005542249 dbus-broker-lau[756]: Ready
Dec  2 05:03:32 np0005542249 systemd[1]: Starting NTP client/server...
Dec  2 05:03:32 np0005542249 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec  2 05:03:32 np0005542249 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec  2 05:03:32 np0005542249 systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec  2 05:03:32 np0005542249 systemd[1]: Starting IPv4 firewall with iptables...
Dec  2 05:03:32 np0005542249 systemd[1]: Started irqbalance daemon.
Dec  2 05:03:32 np0005542249 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec  2 05:03:32 np0005542249 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  2 05:03:32 np0005542249 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  2 05:03:32 np0005542249 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  2 05:03:32 np0005542249 systemd[1]: Reached target sshd-keygen.target.
Dec  2 05:03:32 np0005542249 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec  2 05:03:32 np0005542249 systemd[1]: Reached target User and Group Name Lookups.
Dec  2 05:03:32 np0005542249 systemd[1]: Starting User Login Management...
Dec  2 05:03:33 np0005542249 chronyd[791]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  2 05:03:33 np0005542249 chronyd[791]: Loaded 0 symmetric keys
Dec  2 05:03:33 np0005542249 chronyd[791]: Using right/UTC timezone to obtain leap second data
Dec  2 05:03:33 np0005542249 chronyd[791]: Loaded seccomp filter (level 2)
Dec  2 05:03:33 np0005542249 systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec  2 05:03:33 np0005542249 systemd[1]: Started NTP client/server.
Dec  2 05:03:33 np0005542249 systemd-logind[787]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  2 05:03:33 np0005542249 systemd-logind[787]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  2 05:03:33 np0005542249 systemd-logind[787]: New seat seat0.
Dec  2 05:03:33 np0005542249 systemd[1]: Started User Login Management.
Dec  2 05:03:33 np0005542249 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec  2 05:03:33 np0005542249 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec  2 05:03:33 np0005542249 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec  2 05:03:33 np0005542249 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec  2 05:03:33 np0005542249 kernel: Console: switching to colour dummy device 80x25
Dec  2 05:03:33 np0005542249 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec  2 05:03:33 np0005542249 kernel: [drm] features: -context_init
Dec  2 05:03:33 np0005542249 kernel: [drm] number of scanouts: 1
Dec  2 05:03:33 np0005542249 kernel: [drm] number of cap sets: 0
Dec  2 05:03:33 np0005542249 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec  2 05:03:33 np0005542249 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec  2 05:03:33 np0005542249 kernel: kvm_amd: TSC scaling supported
Dec  2 05:03:33 np0005542249 kernel: kvm_amd: Nested Virtualization enabled
Dec  2 05:03:33 np0005542249 kernel: kvm_amd: Nested Paging enabled
Dec  2 05:03:33 np0005542249 kernel: kvm_amd: LBR virtualization supported
Dec  2 05:03:33 np0005542249 kernel: Console: switching to colour frame buffer device 128x48
Dec  2 05:03:33 np0005542249 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec  2 05:03:33 np0005542249 iptables.init[780]: iptables: Applying firewall rules: [  OK  ]
Dec  2 05:03:33 np0005542249 systemd[1]: Finished IPv4 firewall with iptables.
Dec  2 05:03:33 np0005542249 cloud-init[839]: Cloud-init v. 24.4-7.el9 running 'init-local' at Tue, 02 Dec 2025 10:03:33 +0000. Up 6.61 seconds.
Dec  2 05:03:33 np0005542249 systemd[1]: run-cloud\x2dinit-tmp-tmpl3vea8v7.mount: Deactivated successfully.
Dec  2 05:03:33 np0005542249 systemd[1]: Starting Hostname Service...
Dec  2 05:03:33 np0005542249 systemd[1]: Started Hostname Service.
Dec  2 05:03:33 np0005542249 systemd-hostnamed[853]: Hostname set to <np0005542249.novalocal> (static)
Dec  2 05:03:34 np0005542249 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec  2 05:03:34 np0005542249 systemd[1]: Reached target Preparation for Network.
Dec  2 05:03:34 np0005542249 systemd[1]: Starting Network Manager...
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1066] NetworkManager (version 1.54.1-1.el9) is starting... (boot:34cc4f94-0800-49e6-880f-a0b8f85957c9)
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1071] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1132] manager[0x562f1c5ce080]: monitoring kernel firmware directory '/lib/firmware'.
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1159] hostname: hostname: using hostnamed
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1159] hostname: static hostname changed from (none) to "np0005542249.novalocal"
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1163] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1263] manager[0x562f1c5ce080]: rfkill: Wi-Fi hardware radio set enabled
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1265] manager[0x562f1c5ce080]: rfkill: WWAN hardware radio set enabled
Dec  2 05:03:34 np0005542249 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1331] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1332] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1332] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1333] manager: Networking is enabled by state file
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1334] settings: Loaded settings plugin: keyfile (internal)
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1347] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1367] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1378] dhcp: init: Using DHCP client 'internal'
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1381] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1395] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1402] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1409] device (lo): Activation: starting connection 'lo' (7b4d0754-52f9-443a-a1c7-3b9ed959a3a9)
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1419] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1422] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1455] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1460] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  2 05:03:34 np0005542249 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1463] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1478] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1481] device (eth0): carrier: link connected
Dec  2 05:03:34 np0005542249 systemd[1]: Started Network Manager.
Dec  2 05:03:34 np0005542249 systemd[1]: Reached target Network.
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1509] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1517] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1526] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1530] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  2 05:03:34 np0005542249 systemd[1]: Starting Network Manager Wait Online...
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1531] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1546] manager: NetworkManager state is now CONNECTING
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1547] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1556] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1559] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  2 05:03:34 np0005542249 systemd[1]: Starting GSSAPI Proxy Daemon...
Dec  2 05:03:34 np0005542249 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1630] dhcp4 (eth0): state changed new lease, address=38.102.83.233
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1641] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1665] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1677] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1681] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1689] device (lo): Activation: successful, device activated.
Dec  2 05:03:34 np0005542249 systemd[1]: Started GSSAPI Proxy Daemon.
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1715] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1716] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 05:03:34 np0005542249 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1719] manager: NetworkManager state is now CONNECTED_SITE
Dec  2 05:03:34 np0005542249 systemd[1]: Reached target NFS client services.
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1734] device (eth0): Activation: successful, device activated.
Dec  2 05:03:34 np0005542249 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  2 05:03:34 np0005542249 systemd[1]: Reached target Remote File Systems.
Dec  2 05:03:34 np0005542249 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1768] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  2 05:03:34 np0005542249 NetworkManager[857]: <info>  [1764669814.1776] manager: startup complete
Dec  2 05:03:34 np0005542249 systemd[1]: Finished Network Manager Wait Online.
Dec  2 05:03:34 np0005542249 systemd[1]: Starting Cloud-init: Network Stage...
Dec  2 05:03:34 np0005542249 cloud-init[922]: Cloud-init v. 24.4-7.el9 running 'init' at Tue, 02 Dec 2025 10:03:34 +0000. Up 7.48 seconds.
Dec  2 05:03:34 np0005542249 cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec  2 05:03:34 np0005542249 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  2 05:03:34 np0005542249 cloud-init[922]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec  2 05:03:34 np0005542249 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  2 05:03:34 np0005542249 cloud-init[922]: ci-info: |  eth0  | True |        38.102.83.233         | 255.255.255.0 | global | fa:16:3e:cc:8c:ab |
Dec  2 05:03:34 np0005542249 cloud-init[922]: ci-info: |  eth0  | True | fe80::f816:3eff:fecc:8cab/64 |       .       |  link  | fa:16:3e:cc:8c:ab |
Dec  2 05:03:34 np0005542249 cloud-init[922]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec  2 05:03:34 np0005542249 cloud-init[922]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec  2 05:03:34 np0005542249 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  2 05:03:34 np0005542249 cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec  2 05:03:34 np0005542249 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  2 05:03:34 np0005542249 cloud-init[922]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec  2 05:03:34 np0005542249 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  2 05:03:34 np0005542249 cloud-init[922]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec  2 05:03:34 np0005542249 cloud-init[922]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec  2 05:03:34 np0005542249 cloud-init[922]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec  2 05:03:34 np0005542249 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  2 05:03:34 np0005542249 cloud-init[922]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec  2 05:03:34 np0005542249 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  2 05:03:34 np0005542249 cloud-init[922]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec  2 05:03:34 np0005542249 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  2 05:03:34 np0005542249 cloud-init[922]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec  2 05:03:34 np0005542249 cloud-init[922]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec  2 05:03:34 np0005542249 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  2 05:03:36 np0005542249 cloud-init[922]: Generating public/private rsa key pair.
Dec  2 05:03:36 np0005542249 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec  2 05:03:36 np0005542249 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec  2 05:03:36 np0005542249 cloud-init[922]: The key fingerprint is:
Dec  2 05:03:36 np0005542249 cloud-init[922]: SHA256:SU0Z+l8ZfpycDRm/9dAgy6ZHjpTyfKNxZaMolbBNvF8 root@np0005542249.novalocal
Dec  2 05:03:36 np0005542249 cloud-init[922]: The key's randomart image is:
Dec  2 05:03:36 np0005542249 cloud-init[922]: +---[RSA 3072]----+
Dec  2 05:03:36 np0005542249 cloud-init[922]: |        ..+o. o  |
Dec  2 05:03:36 np0005542249 cloud-init[922]: |         Oo+ o * |
Dec  2 05:03:36 np0005542249 cloud-init[922]: |        = B.= O +|
Dec  2 05:03:36 np0005542249 cloud-init[922]: |       . O.B =EBB|
Dec  2 05:03:36 np0005542249 cloud-init[922]: |        S O.B.+==|
Dec  2 05:03:36 np0005542249 cloud-init[922]: |         . B.o . |
Dec  2 05:03:36 np0005542249 cloud-init[922]: |          . .    |
Dec  2 05:03:36 np0005542249 cloud-init[922]: |                 |
Dec  2 05:03:36 np0005542249 cloud-init[922]: |                 |
Dec  2 05:03:36 np0005542249 cloud-init[922]: +----[SHA256]-----+
Dec  2 05:03:36 np0005542249 cloud-init[922]: Generating public/private ecdsa key pair.
Dec  2 05:03:36 np0005542249 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec  2 05:03:36 np0005542249 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec  2 05:03:36 np0005542249 cloud-init[922]: The key fingerprint is:
Dec  2 05:03:36 np0005542249 cloud-init[922]: SHA256:wmhU5k2ybtyXd2dwxhCFS9OJIK+/yYqDCxrC0zWSyO4 root@np0005542249.novalocal
Dec  2 05:03:36 np0005542249 cloud-init[922]: The key's randomart image is:
Dec  2 05:03:36 np0005542249 cloud-init[922]: +---[ECDSA 256]---+
Dec  2 05:03:36 np0005542249 cloud-init[922]: |      + .. .. +=o|
Dec  2 05:03:36 np0005542249 cloud-init[922]: |     + =  o  .++.|
Dec  2 05:03:36 np0005542249 cloud-init[922]: |    . o .  . ..o+|
Dec  2 05:03:36 np0005542249 cloud-init[922]: |. ...= .  ..  .+ |
Dec  2 05:03:36 np0005542249 cloud-init[922]: | o ooo* S.o . . o|
Dec  2 05:03:36 np0005542249 cloud-init[922]: |o ..o... ... . o |
Dec  2 05:03:36 np0005542249 cloud-init[922]: |.= o  .    .     |
Dec  2 05:03:36 np0005542249 cloud-init[922]: |o + .. .. . o    |
Dec  2 05:03:36 np0005542249 cloud-init[922]: | E   ......+     |
Dec  2 05:03:36 np0005542249 cloud-init[922]: +----[SHA256]-----+
Dec  2 05:03:36 np0005542249 cloud-init[922]: Generating public/private ed25519 key pair.
Dec  2 05:03:36 np0005542249 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec  2 05:03:36 np0005542249 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec  2 05:03:36 np0005542249 cloud-init[922]: The key fingerprint is:
Dec  2 05:03:36 np0005542249 cloud-init[922]: SHA256:hyYiJ+B5sdUEmgsRMw/xlW94xxmeo18QBtujmK7qfHo root@np0005542249.novalocal
Dec  2 05:03:36 np0005542249 cloud-init[922]: The key's randomart image is:
Dec  2 05:03:36 np0005542249 cloud-init[922]: +--[ED25519 256]--+
Dec  2 05:03:36 np0005542249 cloud-init[922]: | Bo  ooo.        |
Dec  2 05:03:36 np0005542249 cloud-init[922]: |  B +.o o+       |
Dec  2 05:03:36 np0005542249 cloud-init[922]: |.. * .oo+o=      |
Dec  2 05:03:36 np0005542249 cloud-init[922]: |..o =.o+.X.      |
Dec  2 05:03:36 np0005542249 cloud-init[922]: | oo+oo+.S +      |
Dec  2 05:03:36 np0005542249 cloud-init[922]: |  .+.. + . .     |
Dec  2 05:03:36 np0005542249 cloud-init[922]: |     .  . .      |
Dec  2 05:03:36 np0005542249 cloud-init[922]: |.  E.    .       |
Dec  2 05:03:36 np0005542249 cloud-init[922]: |.==.             |
Dec  2 05:03:36 np0005542249 cloud-init[922]: +----[SHA256]-----+
Dec  2 05:03:36 np0005542249 systemd[1]: Finished Cloud-init: Network Stage.
Dec  2 05:03:36 np0005542249 systemd[1]: Reached target Cloud-config availability.
Dec  2 05:03:36 np0005542249 systemd[1]: Reached target Network is Online.
Dec  2 05:03:36 np0005542249 systemd[1]: Starting Cloud-init: Config Stage...
Dec  2 05:03:36 np0005542249 systemd[1]: Starting Crash recovery kernel arming...
Dec  2 05:03:36 np0005542249 systemd[1]: Starting Notify NFS peers of a restart...
Dec  2 05:03:36 np0005542249 systemd[1]: Starting System Logging Service...
Dec  2 05:03:36 np0005542249 sm-notify[1004]: Version 2.5.4 starting
Dec  2 05:03:36 np0005542249 systemd[1]: Starting OpenSSH server daemon...
Dec  2 05:03:36 np0005542249 systemd[1]: Starting Permit User Sessions...
Dec  2 05:03:36 np0005542249 systemd[1]: Started Notify NFS peers of a restart.
Dec  2 05:03:36 np0005542249 systemd[1]: Started OpenSSH server daemon.
Dec  2 05:03:36 np0005542249 systemd[1]: Finished Permit User Sessions.
Dec  2 05:03:36 np0005542249 systemd[1]: Started Command Scheduler.
Dec  2 05:03:36 np0005542249 systemd[1]: Started Getty on tty1.
Dec  2 05:03:36 np0005542249 systemd[1]: Started Serial Getty on ttyS0.
Dec  2 05:03:36 np0005542249 systemd[1]: Reached target Login Prompts.
Dec  2 05:03:36 np0005542249 rsyslogd[1005]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1005" x-info="https://www.rsyslog.com"] start
Dec  2 05:03:36 np0005542249 rsyslogd[1005]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec  2 05:03:36 np0005542249 systemd[1]: Started System Logging Service.
Dec  2 05:03:36 np0005542249 systemd[1]: Reached target Multi-User System.
Dec  2 05:03:36 np0005542249 systemd[1]: Starting Record Runlevel Change in UTMP...
Dec  2 05:03:36 np0005542249 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec  2 05:03:36 np0005542249 systemd[1]: Finished Record Runlevel Change in UTMP.
Dec  2 05:03:36 np0005542249 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 05:03:36 np0005542249 kdumpctl[1017]: kdump: No kdump initial ramdisk found.
Dec  2 05:03:36 np0005542249 kdumpctl[1017]: kdump: Rebuilding /boot/initramfs-5.14.0-645.el9.x86_64kdump.img
Dec  2 05:03:36 np0005542249 cloud-init[1141]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Tue, 02 Dec 2025 10:03:36 +0000. Up 9.47 seconds.
Dec  2 05:03:36 np0005542249 systemd[1]: Finished Cloud-init: Config Stage.
Dec  2 05:03:36 np0005542249 systemd[1]: Starting Cloud-init: Final Stage...
Dec  2 05:03:36 np0005542249 cloud-init[1241]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Tue, 02 Dec 2025 10:03:36 +0000. Up 9.85 seconds.
Dec  2 05:03:37 np0005542249 cloud-init[1265]: #############################################################
Dec  2 05:03:37 np0005542249 cloud-init[1270]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec  2 05:03:37 np0005542249 cloud-init[1276]: 256 SHA256:wmhU5k2ybtyXd2dwxhCFS9OJIK+/yYqDCxrC0zWSyO4 root@np0005542249.novalocal (ECDSA)
Dec  2 05:03:37 np0005542249 cloud-init[1283]: 256 SHA256:hyYiJ+B5sdUEmgsRMw/xlW94xxmeo18QBtujmK7qfHo root@np0005542249.novalocal (ED25519)
Dec  2 05:03:37 np0005542249 cloud-init[1291]: 3072 SHA256:SU0Z+l8ZfpycDRm/9dAgy6ZHjpTyfKNxZaMolbBNvF8 root@np0005542249.novalocal (RSA)
Dec  2 05:03:37 np0005542249 cloud-init[1292]: -----END SSH HOST KEY FINGERPRINTS-----
Dec  2 05:03:37 np0005542249 cloud-init[1293]: #############################################################
Dec  2 05:03:37 np0005542249 cloud-init[1241]: Cloud-init v. 24.4-7.el9 finished at Tue, 02 Dec 2025 10:03:37 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.18 seconds
Dec  2 05:03:37 np0005542249 dracut[1300]: dracut-057-102.git20250818.el9
Dec  2 05:03:37 np0005542249 systemd[1]: Finished Cloud-init: Final Stage.
Dec  2 05:03:37 np0005542249 systemd[1]: Reached target Cloud-init target.
Dec  2 05:03:37 np0005542249 dracut[1302]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-645.el9.x86_64kdump.img 5.14.0-645.el9.x86_64
Dec  2 05:03:37 np0005542249 dracut[1302]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec  2 05:03:37 np0005542249 dracut[1302]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec  2 05:03:37 np0005542249 dracut[1302]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec  2 05:03:37 np0005542249 dracut[1302]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  2 05:03:37 np0005542249 dracut[1302]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  2 05:03:37 np0005542249 dracut[1302]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  2 05:03:37 np0005542249 dracut[1302]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  2 05:03:37 np0005542249 dracut[1302]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  2 05:03:37 np0005542249 dracut[1302]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  2 05:03:37 np0005542249 dracut[1302]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  2 05:03:37 np0005542249 dracut[1302]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  2 05:03:37 np0005542249 dracut[1302]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  2 05:03:37 np0005542249 dracut[1302]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  2 05:03:37 np0005542249 dracut[1302]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  2 05:03:37 np0005542249 dracut[1302]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  2 05:03:37 np0005542249 dracut[1302]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  2 05:03:37 np0005542249 dracut[1302]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  2 05:03:37 np0005542249 dracut[1302]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  2 05:03:37 np0005542249 dracut[1302]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  2 05:03:37 np0005542249 dracut[1302]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  2 05:03:37 np0005542249 dracut[1302]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  2 05:03:37 np0005542249 dracut[1302]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  2 05:03:37 np0005542249 dracut[1302]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  2 05:03:37 np0005542249 dracut[1302]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: memstrack is not available
Dec  2 05:03:38 np0005542249 dracut[1302]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  2 05:03:38 np0005542249 dracut[1302]: memstrack is not available
Dec  2 05:03:38 np0005542249 dracut[1302]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  2 05:03:38 np0005542249 dracut[1302]: *** Including module: systemd ***
Dec  2 05:03:38 np0005542249 dracut[1302]: *** Including module: fips ***
Dec  2 05:03:39 np0005542249 chronyd[791]: Selected source 138.197.135.239 (2.centos.pool.ntp.org)
Dec  2 05:03:39 np0005542249 chronyd[791]: System clock TAI offset set to 37 seconds
Dec  2 05:03:39 np0005542249 dracut[1302]: *** Including module: systemd-initrd ***
Dec  2 05:03:39 np0005542249 dracut[1302]: *** Including module: i18n ***
Dec  2 05:03:39 np0005542249 dracut[1302]: *** Including module: drm ***
Dec  2 05:03:39 np0005542249 dracut[1302]: *** Including module: prefixdevname ***
Dec  2 05:03:39 np0005542249 dracut[1302]: *** Including module: kernel-modules ***
Dec  2 05:03:39 np0005542249 kernel: block vda: the capability attribute has been deprecated.
Dec  2 05:03:40 np0005542249 dracut[1302]: *** Including module: kernel-modules-extra ***
Dec  2 05:03:40 np0005542249 dracut[1302]: *** Including module: qemu ***
Dec  2 05:03:40 np0005542249 dracut[1302]: *** Including module: fstab-sys ***
Dec  2 05:03:40 np0005542249 dracut[1302]: *** Including module: rootfs-block ***
Dec  2 05:03:40 np0005542249 dracut[1302]: *** Including module: terminfo ***
Dec  2 05:03:40 np0005542249 dracut[1302]: *** Including module: udev-rules ***
Dec  2 05:03:40 np0005542249 dracut[1302]: Skipping udev rule: 91-permissions.rules
Dec  2 05:03:40 np0005542249 dracut[1302]: Skipping udev rule: 80-drivers-modprobe.rules
Dec  2 05:03:40 np0005542249 dracut[1302]: *** Including module: virtiofs ***
Dec  2 05:03:40 np0005542249 dracut[1302]: *** Including module: dracut-systemd ***
Dec  2 05:03:40 np0005542249 dracut[1302]: *** Including module: usrmount ***
Dec  2 05:03:40 np0005542249 dracut[1302]: *** Including module: base ***
Dec  2 05:03:41 np0005542249 dracut[1302]: *** Including module: fs-lib ***
Dec  2 05:03:41 np0005542249 chronyd[791]: Selected source 162.159.200.1 (2.centos.pool.ntp.org)
Dec  2 05:03:41 np0005542249 dracut[1302]: *** Including module: kdumpbase ***
Dec  2 05:03:41 np0005542249 dracut[1302]: *** Including module: microcode_ctl-fw_dir_override ***
Dec  2 05:03:41 np0005542249 dracut[1302]:  microcode_ctl module: mangling fw_dir
Dec  2 05:03:41 np0005542249 dracut[1302]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec  2 05:03:41 np0005542249 dracut[1302]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec  2 05:03:41 np0005542249 dracut[1302]:    microcode_ctl: configuration "intel" is ignored
Dec  2 05:03:41 np0005542249 dracut[1302]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec  2 05:03:41 np0005542249 dracut[1302]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec  2 05:03:41 np0005542249 dracut[1302]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec  2 05:03:41 np0005542249 dracut[1302]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec  2 05:03:41 np0005542249 dracut[1302]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec  2 05:03:41 np0005542249 dracut[1302]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec  2 05:03:41 np0005542249 dracut[1302]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec  2 05:03:41 np0005542249 dracut[1302]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Dec  2 05:03:41 np0005542249 dracut[1302]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec  2 05:03:41 np0005542249 dracut[1302]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec  2 05:03:41 np0005542249 dracut[1302]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec  2 05:03:41 np0005542249 dracut[1302]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec  2 05:03:41 np0005542249 dracut[1302]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec  2 05:03:41 np0005542249 dracut[1302]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec  2 05:03:41 np0005542249 dracut[1302]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec  2 05:03:41 np0005542249 dracut[1302]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec  2 05:03:41 np0005542249 dracut[1302]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec  2 05:03:41 np0005542249 dracut[1302]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec  2 05:03:41 np0005542249 dracut[1302]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec  2 05:03:41 np0005542249 dracut[1302]: *** Including module: openssl ***
Dec  2 05:03:41 np0005542249 dracut[1302]: *** Including module: shutdown ***
Dec  2 05:03:41 np0005542249 dracut[1302]: *** Including module: squash ***
Dec  2 05:03:41 np0005542249 dracut[1302]: *** Including modules done ***
Dec  2 05:03:41 np0005542249 dracut[1302]: *** Installing kernel module dependencies ***
Dec  2 05:03:42 np0005542249 dracut[1302]: *** Installing kernel module dependencies done ***
Dec  2 05:03:42 np0005542249 dracut[1302]: *** Resolving executable dependencies ***
Dec  2 05:03:43 np0005542249 dracut[1302]: *** Resolving executable dependencies done ***
Dec  2 05:03:43 np0005542249 dracut[1302]: *** Generating early-microcode cpio image ***
Dec  2 05:03:43 np0005542249 dracut[1302]: *** Store current command line parameters ***
Dec  2 05:03:43 np0005542249 dracut[1302]: Stored kernel commandline:
Dec  2 05:03:43 np0005542249 dracut[1302]: No dracut internal kernel commandline stored in the initramfs
Dec  2 05:03:44 np0005542249 irqbalance[782]: Cannot change IRQ 35 affinity: Operation not permitted
Dec  2 05:03:44 np0005542249 irqbalance[782]: IRQ 35 affinity is now unmanaged
Dec  2 05:03:44 np0005542249 irqbalance[782]: Cannot change IRQ 25 affinity: Operation not permitted
Dec  2 05:03:44 np0005542249 irqbalance[782]: IRQ 25 affinity is now unmanaged
Dec  2 05:03:44 np0005542249 irqbalance[782]: Cannot change IRQ 31 affinity: Operation not permitted
Dec  2 05:03:44 np0005542249 irqbalance[782]: IRQ 31 affinity is now unmanaged
Dec  2 05:03:44 np0005542249 irqbalance[782]: Cannot change IRQ 26 affinity: Operation not permitted
Dec  2 05:03:44 np0005542249 irqbalance[782]: IRQ 26 affinity is now unmanaged
Dec  2 05:03:44 np0005542249 irqbalance[782]: Cannot change IRQ 29 affinity: Operation not permitted
Dec  2 05:03:44 np0005542249 irqbalance[782]: IRQ 29 affinity is now unmanaged
Dec  2 05:03:44 np0005542249 dracut[1302]: *** Install squash loader ***
Dec  2 05:03:44 np0005542249 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  2 05:03:44 np0005542249 dracut[1302]: *** Squashing the files inside the initramfs ***
Dec  2 05:03:45 np0005542249 dracut[1302]: *** Squashing the files inside the initramfs done ***
Dec  2 05:03:45 np0005542249 dracut[1302]: *** Creating image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' ***
Dec  2 05:03:45 np0005542249 dracut[1302]: *** Hardlinking files ***
Dec  2 05:03:45 np0005542249 dracut[1302]: *** Hardlinking files done ***
Dec  2 05:03:46 np0005542249 dracut[1302]: *** Creating initramfs image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' done ***
Dec  2 05:03:46 np0005542249 kdumpctl[1017]: kdump: kexec: loaded kdump kernel
Dec  2 05:03:46 np0005542249 kdumpctl[1017]: kdump: Starting kdump: [OK]
Dec  2 05:03:46 np0005542249 systemd[1]: Finished Crash recovery kernel arming.
Dec  2 05:03:46 np0005542249 systemd[1]: Startup finished in 1.906s (kernel) + 2.479s (initrd) + 15.533s (userspace) = 19.919s.
Dec  2 05:04:04 np0005542249 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  2 05:04:12 np0005542249 systemd[1]: Created slice User Slice of UID 1000.
Dec  2 05:04:12 np0005542249 systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec  2 05:04:12 np0005542249 systemd-logind[787]: New session 1 of user zuul.
Dec  2 05:04:12 np0005542249 systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec  2 05:04:12 np0005542249 systemd[1]: Starting User Manager for UID 1000...
Dec  2 05:04:12 np0005542249 systemd[4310]: Queued start job for default target Main User Target.
Dec  2 05:04:12 np0005542249 systemd[4310]: Created slice User Application Slice.
Dec  2 05:04:12 np0005542249 systemd[4310]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  2 05:04:12 np0005542249 systemd[4310]: Started Daily Cleanup of User's Temporary Directories.
Dec  2 05:04:12 np0005542249 systemd[4310]: Reached target Paths.
Dec  2 05:04:12 np0005542249 systemd[4310]: Reached target Timers.
Dec  2 05:04:12 np0005542249 systemd[4310]: Starting D-Bus User Message Bus Socket...
Dec  2 05:04:12 np0005542249 systemd[4310]: Starting Create User's Volatile Files and Directories...
Dec  2 05:04:12 np0005542249 systemd[4310]: Finished Create User's Volatile Files and Directories.
Dec  2 05:04:12 np0005542249 systemd[4310]: Listening on D-Bus User Message Bus Socket.
Dec  2 05:04:12 np0005542249 systemd[4310]: Reached target Sockets.
Dec  2 05:04:12 np0005542249 systemd[4310]: Reached target Basic System.
Dec  2 05:04:12 np0005542249 systemd[4310]: Reached target Main User Target.
Dec  2 05:04:12 np0005542249 systemd[4310]: Startup finished in 117ms.
Dec  2 05:04:12 np0005542249 systemd[1]: Started User Manager for UID 1000.
Dec  2 05:04:12 np0005542249 systemd[1]: Started Session 1 of User zuul.
Dec  2 05:04:12 np0005542249 python3[4392]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:04:15 np0005542249 python3[4420]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:04:22 np0005542249 python3[4478]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:04:23 np0005542249 python3[4518]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec  2 05:04:24 np0005542249 python3[4544]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFzx11/76oVx/NzMpo8eyTzx8M/URs/YZYmcfTFsVniPyTXK1cL/1jZExq/XmGtHJ24CxYuP026LFft5155hVSXGkgUaPptYX4q/21kx1X53U/czA6nbV8TfhYMPLg04/9QeK/qM0ZFIRJ8ZOG/Pf+W+YlJ3JsI1vdyFS5TPrzJ7J3fYAyQdqcKYQTFlZ+Y7jE7hfzxPdo167pufL9ae9tp9Jsw2cXVz6t+icpN771jFLCR2eR+Vtn4UtwH6MM0q3/rD3MfD1j8I/NlhDVK0u9fcmYIRIUgn6RPBpkzZgE222OG0wWvPoGK/D/BpWQIPjfr2liTTU/BsLraVFEbqsfVpBEQMEG/lY1ruC49LQdALCVUJ1ylFgXhNXU9+9IHPvu6dhnaSsWmgJClt2u+sNHh0953+6OCvoOgygD7+GYGRZ/Kndc320wBN6vTRbwIrBFNTFODcbY0G+YMUoIVIMoQJeJfGzyIk1SzS5vrKEMLZGa24+6RFt5myE2egboH4c= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:25 np0005542249 python3[4568]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:04:25 np0005542249 python3[4667]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:04:26 np0005542249 python3[4738]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764669865.418007-207-270091404604187/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=900eb85e018f43c7b86cb705b38a9f70_id_rsa follow=False checksum=c4a65b75e6862d91ef986a6e17a762c99d8f6490 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:04:26 np0005542249 python3[4861]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:04:26 np0005542249 python3[4932]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764669866.3228033-240-33554764193700/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=900eb85e018f43c7b86cb705b38a9f70_id_rsa.pub follow=False checksum=f3c6c529524514e6a83136b6c37a3869cae09edb backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:04:28 np0005542249 python3[4980]: ansible-ping Invoked with data=pong
Dec  2 05:04:29 np0005542249 python3[5004]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:04:30 np0005542249 python3[5062]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec  2 05:04:31 np0005542249 python3[5094]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:04:32 np0005542249 python3[5118]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:04:32 np0005542249 python3[5142]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:04:32 np0005542249 python3[5166]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:04:32 np0005542249 python3[5190]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:04:33 np0005542249 python3[5214]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:04:35 np0005542249 python3[5240]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:04:35 np0005542249 python3[5318]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:04:36 np0005542249 python3[5391]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764669875.381733-21-230420094984827/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:04:36 np0005542249 python3[5439]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:37 np0005542249 python3[5463]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:37 np0005542249 python3[5487]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:37 np0005542249 python3[5511]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:37 np0005542249 python3[5535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:38 np0005542249 python3[5559]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:38 np0005542249 python3[5583]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:38 np0005542249 python3[5607]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:39 np0005542249 python3[5631]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:39 np0005542249 python3[5655]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:39 np0005542249 python3[5679]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:40 np0005542249 python3[5703]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:40 np0005542249 python3[5727]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:40 np0005542249 python3[5751]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:40 np0005542249 python3[5775]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:41 np0005542249 python3[5799]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:41 np0005542249 python3[5823]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:41 np0005542249 python3[5847]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:42 np0005542249 python3[5871]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:42 np0005542249 python3[5895]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:42 np0005542249 python3[5919]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:42 np0005542249 python3[5943]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:43 np0005542249 python3[5967]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:43 np0005542249 python3[5991]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:43 np0005542249 python3[6015]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:44 np0005542249 irqbalance[782]: Cannot change IRQ 28 affinity: Operation not permitted
Dec  2 05:04:44 np0005542249 irqbalance[782]: IRQ 28 affinity is now unmanaged
Dec  2 05:04:44 np0005542249 python3[6039]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:04:46 np0005542249 python3[6065]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  2 05:04:46 np0005542249 systemd[1]: Starting Time & Date Service...
Dec  2 05:04:47 np0005542249 systemd[1]: Started Time & Date Service.
Dec  2 05:04:47 np0005542249 systemd-timedated[6067]: Changed time zone to 'UTC' (UTC).
Dec  2 05:04:47 np0005542249 python3[6096]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:04:47 np0005542249 python3[6172]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:04:48 np0005542249 python3[6243]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764669887.7428386-153-46840494091675/source _original_basename=tmpy0a_i4pb follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:04:48 np0005542249 python3[6343]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:04:49 np0005542249 python3[6414]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764669888.625796-183-219872789779043/source _original_basename=tmpcq1wnl11 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:04:49 np0005542249 python3[6516]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:04:50 np0005542249 python3[6589]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764669889.713621-231-86884050627724/source _original_basename=tmp12xmek78 follow=False checksum=147a28745cdbec7efac586baa5d602a97fd127e8 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:04:50 np0005542249 python3[6637]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:04:51 np0005542249 python3[6663]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:04:51 np0005542249 python3[6743]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:04:51 np0005542249 python3[6816]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764669891.3831017-273-259671482073307/source _original_basename=tmpbnfpwqt3 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:04:52 np0005542249 python3[6867]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-4b81-abee-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:04:53 np0005542249 python3[6895]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-4b81-abee-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec  2 05:04:54 np0005542249 python3[6924]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:05:12 np0005542249 python3[6952]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:05:17 np0005542249 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  2 05:05:47 np0005542249 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  2 05:05:47 np0005542249 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec  2 05:05:47 np0005542249 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec  2 05:05:47 np0005542249 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec  2 05:05:47 np0005542249 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec  2 05:05:47 np0005542249 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec  2 05:05:47 np0005542249 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec  2 05:05:47 np0005542249 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec  2 05:05:47 np0005542249 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec  2 05:05:47 np0005542249 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec  2 05:05:47 np0005542249 NetworkManager[857]: <info>  [1764669947.9047] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  2 05:05:47 np0005542249 systemd-udevd[6960]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 05:05:47 np0005542249 NetworkManager[857]: <info>  [1764669947.9197] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 05:05:47 np0005542249 NetworkManager[857]: <info>  [1764669947.9227] settings: (eth1): created default wired connection 'Wired connection 1'
Dec  2 05:05:47 np0005542249 NetworkManager[857]: <info>  [1764669947.9232] device (eth1): carrier: link connected
Dec  2 05:05:47 np0005542249 NetworkManager[857]: <info>  [1764669947.9234] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  2 05:05:47 np0005542249 NetworkManager[857]: <info>  [1764669947.9241] policy: auto-activating connection 'Wired connection 1' (b7a7d1c3-8714-37fa-b787-ecc28ef4bc4a)
Dec  2 05:05:47 np0005542249 NetworkManager[857]: <info>  [1764669947.9244] device (eth1): Activation: starting connection 'Wired connection 1' (b7a7d1c3-8714-37fa-b787-ecc28ef4bc4a)
Dec  2 05:05:47 np0005542249 NetworkManager[857]: <info>  [1764669947.9245] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 05:05:47 np0005542249 NetworkManager[857]: <info>  [1764669947.9249] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 05:05:47 np0005542249 NetworkManager[857]: <info>  [1764669947.9254] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 05:05:47 np0005542249 NetworkManager[857]: <info>  [1764669947.9258] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  2 05:05:48 np0005542249 python3[6986]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-127d-a3b6-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:05:58 np0005542249 python3[7068]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:05:59 np0005542249 python3[7141]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764669958.5467687-102-127106510555169/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=77fa4f5893f92e45970fb0039bd711512da7362c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:05:59 np0005542249 python3[7191]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 05:06:00 np0005542249 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  2 05:06:00 np0005542249 systemd[1]: Stopped Network Manager Wait Online.
Dec  2 05:06:00 np0005542249 systemd[1]: Stopping Network Manager Wait Online...
Dec  2 05:06:00 np0005542249 systemd[1]: Stopping Network Manager...
Dec  2 05:06:00 np0005542249 NetworkManager[857]: <info>  [1764669960.0215] caught SIGTERM, shutting down normally.
Dec  2 05:06:00 np0005542249 NetworkManager[857]: <info>  [1764669960.0226] dhcp4 (eth0): canceled DHCP transaction
Dec  2 05:06:00 np0005542249 NetworkManager[857]: <info>  [1764669960.0227] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  2 05:06:00 np0005542249 NetworkManager[857]: <info>  [1764669960.0227] dhcp4 (eth0): state changed no lease
Dec  2 05:06:00 np0005542249 NetworkManager[857]: <info>  [1764669960.0230] manager: NetworkManager state is now CONNECTING
Dec  2 05:06:00 np0005542249 NetworkManager[857]: <info>  [1764669960.0331] dhcp4 (eth1): canceled DHCP transaction
Dec  2 05:06:00 np0005542249 NetworkManager[857]: <info>  [1764669960.0332] dhcp4 (eth1): state changed no lease
Dec  2 05:06:00 np0005542249 NetworkManager[857]: <info>  [1764669960.0392] exiting (success)
Dec  2 05:06:00 np0005542249 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  2 05:06:00 np0005542249 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  2 05:06:00 np0005542249 systemd[1]: Stopped Network Manager.
Dec  2 05:06:00 np0005542249 systemd[1]: Starting Network Manager...
Dec  2 05:06:00 np0005542249 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.0755] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:34cc4f94-0800-49e6-880f-a0b8f85957c9)
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.0758] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.0812] manager[0x559a88723070]: monitoring kernel firmware directory '/lib/firmware'.
Dec  2 05:06:00 np0005542249 systemd[1]: Starting Hostname Service...
Dec  2 05:06:00 np0005542249 systemd[1]: Started Hostname Service.
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1570] hostname: hostname: using hostnamed
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1571] hostname: static hostname changed from (none) to "np0005542249.novalocal"
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1578] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1584] manager[0x559a88723070]: rfkill: Wi-Fi hardware radio set enabled
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1585] manager[0x559a88723070]: rfkill: WWAN hardware radio set enabled
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1614] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1614] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1614] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1615] manager: Networking is enabled by state file
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1617] settings: Loaded settings plugin: keyfile (internal)
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1621] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1648] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1659] dhcp: init: Using DHCP client 'internal'
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1661] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1665] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1668] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1674] device (lo): Activation: starting connection 'lo' (7b4d0754-52f9-443a-a1c7-3b9ed959a3a9)
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1679] device (eth0): carrier: link connected
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1683] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1686] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1686] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1691] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1696] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1700] device (eth1): carrier: link connected
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1704] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1708] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (b7a7d1c3-8714-37fa-b787-ecc28ef4bc4a) (indicated)
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1708] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1711] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1716] device (eth1): Activation: starting connection 'Wired connection 1' (b7a7d1c3-8714-37fa-b787-ecc28ef4bc4a)
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1721] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  2 05:06:00 np0005542249 systemd[1]: Started Network Manager.
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1724] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1726] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1727] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1729] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1731] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1733] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1734] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1736] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1740] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1742] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1753] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1757] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1776] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1782] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1786] device (lo): Activation: successful, device activated.
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1800] dhcp4 (eth0): state changed new lease, address=38.102.83.233
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.1805] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  2 05:06:00 np0005542249 systemd[1]: Starting Network Manager Wait Online...
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.2028] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.2057] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.2059] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.2065] manager: NetworkManager state is now CONNECTED_SITE
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.2069] device (eth0): Activation: successful, device activated.
Dec  2 05:06:00 np0005542249 NetworkManager[7197]: <info>  [1764669960.2074] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  2 05:06:00 np0005542249 python3[7275]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-127d-a3b6-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:06:10 np0005542249 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  2 05:06:30 np0005542249 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  2 05:06:45 np0005542249 NetworkManager[7197]: <info>  [1764670005.0264] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  2 05:06:45 np0005542249 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  2 05:06:45 np0005542249 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  2 05:06:45 np0005542249 NetworkManager[7197]: <info>  [1764670005.0529] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  2 05:06:45 np0005542249 NetworkManager[7197]: <info>  [1764670005.0534] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  2 05:06:45 np0005542249 NetworkManager[7197]: <info>  [1764670005.0545] device (eth1): Activation: successful, device activated.
Dec  2 05:06:45 np0005542249 NetworkManager[7197]: <info>  [1764670005.0551] manager: startup complete
Dec  2 05:06:45 np0005542249 NetworkManager[7197]: <info>  [1764670005.0553] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec  2 05:06:45 np0005542249 NetworkManager[7197]: <warn>  [1764670005.0557] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec  2 05:06:45 np0005542249 NetworkManager[7197]: <info>  [1764670005.0564] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec  2 05:06:45 np0005542249 systemd[1]: Finished Network Manager Wait Online.
Dec  2 05:06:45 np0005542249 NetworkManager[7197]: <info>  [1764670005.0691] dhcp4 (eth1): canceled DHCP transaction
Dec  2 05:06:45 np0005542249 NetworkManager[7197]: <info>  [1764670005.0692] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  2 05:06:45 np0005542249 NetworkManager[7197]: <info>  [1764670005.0693] dhcp4 (eth1): state changed no lease
Dec  2 05:06:45 np0005542249 NetworkManager[7197]: <info>  [1764670005.0717] policy: auto-activating connection 'ci-private-network' (d9158bdb-1886-5d5e-a36e-fd98f73b7772)
Dec  2 05:06:45 np0005542249 NetworkManager[7197]: <info>  [1764670005.0726] device (eth1): Activation: starting connection 'ci-private-network' (d9158bdb-1886-5d5e-a36e-fd98f73b7772)
Dec  2 05:06:45 np0005542249 NetworkManager[7197]: <info>  [1764670005.0728] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 05:06:45 np0005542249 NetworkManager[7197]: <info>  [1764670005.0734] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 05:06:45 np0005542249 NetworkManager[7197]: <info>  [1764670005.0745] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 05:06:45 np0005542249 NetworkManager[7197]: <info>  [1764670005.0758] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 05:06:45 np0005542249 NetworkManager[7197]: <info>  [1764670005.0813] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 05:06:45 np0005542249 NetworkManager[7197]: <info>  [1764670005.0816] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 05:06:45 np0005542249 NetworkManager[7197]: <info>  [1764670005.0829] device (eth1): Activation: successful, device activated.
Dec  2 05:06:55 np0005542249 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  2 05:07:00 np0005542249 systemd-logind[787]: Session 1 logged out. Waiting for processes to exit.
Dec  2 05:07:00 np0005542249 systemd-logind[787]: New session 3 of user zuul.
Dec  2 05:07:00 np0005542249 systemd[1]: Started Session 3 of User zuul.
Dec  2 05:07:01 np0005542249 python3[7388]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:07:01 np0005542249 python3[7461]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764670021.0988576-267-1487281985056/source _original_basename=tmpf7baem_6 follow=False checksum=c579b7c35fd977a092a8df58d2f9632b8cfdd212 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:07:03 np0005542249 systemd[1]: session-3.scope: Deactivated successfully.
Dec  2 05:07:03 np0005542249 systemd-logind[787]: Session 3 logged out. Waiting for processes to exit.
Dec  2 05:07:03 np0005542249 systemd-logind[787]: Removed session 3.
Dec  2 05:07:11 np0005542249 systemd[4310]: Starting Mark boot as successful...
Dec  2 05:07:11 np0005542249 systemd[4310]: Finished Mark boot as successful.
Dec  2 05:10:11 np0005542249 systemd[4310]: Created slice User Background Tasks Slice.
Dec  2 05:10:11 np0005542249 systemd[4310]: Starting Cleanup of User's Temporary Files and Directories...
Dec  2 05:10:11 np0005542249 systemd[4310]: Finished Cleanup of User's Temporary Files and Directories.
Dec  2 05:14:09 np0005542249 systemd-logind[787]: New session 4 of user zuul.
Dec  2 05:14:09 np0005542249 systemd[1]: Started Session 4 of User zuul.
Dec  2 05:14:09 np0005542249 python3[7538]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-7b77-f523-000000001cda-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:14:09 np0005542249 python3[7566]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:14:10 np0005542249 python3[7592]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:14:10 np0005542249 python3[7619]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:14:10 np0005542249 python3[7645]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:14:11 np0005542249 python3[7671]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:14:11 np0005542249 python3[7749]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:14:12 np0005542249 python3[7822]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764670451.7354653-480-126785289326175/source _original_basename=tmpzkuxad74 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:14:13 np0005542249 python3[7872]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 05:14:13 np0005542249 systemd[1]: Reloading.
Dec  2 05:14:13 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:14:15 np0005542249 python3[7929]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec  2 05:14:15 np0005542249 python3[7955]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:14:15 np0005542249 python3[7983]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:14:15 np0005542249 python3[8011]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:14:16 np0005542249 python3[8039]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:14:16 np0005542249 python3[8066]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-7b77-f523-000000001ce1-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:14:17 np0005542249 python3[8096]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  2 05:14:19 np0005542249 systemd[1]: session-4.scope: Deactivated successfully.
Dec  2 05:14:19 np0005542249 systemd[1]: session-4.scope: Consumed 4.045s CPU time.
Dec  2 05:14:19 np0005542249 systemd-logind[787]: Session 4 logged out. Waiting for processes to exit.
Dec  2 05:14:19 np0005542249 systemd-logind[787]: Removed session 4.
Dec  2 05:14:20 np0005542249 systemd-logind[787]: New session 5 of user zuul.
Dec  2 05:14:20 np0005542249 systemd[1]: Started Session 5 of User zuul.
Dec  2 05:14:21 np0005542249 python3[8130]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  2 05:14:34 np0005542249 kernel: SELinux:  Converting 385 SID table entries...
Dec  2 05:14:34 np0005542249 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 05:14:34 np0005542249 kernel: SELinux:  policy capability open_perms=1
Dec  2 05:14:34 np0005542249 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 05:14:34 np0005542249 kernel: SELinux:  policy capability always_check_network=0
Dec  2 05:14:34 np0005542249 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 05:14:34 np0005542249 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 05:14:34 np0005542249 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 05:14:43 np0005542249 kernel: SELinux:  Converting 385 SID table entries...
Dec  2 05:14:43 np0005542249 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 05:14:43 np0005542249 kernel: SELinux:  policy capability open_perms=1
Dec  2 05:14:43 np0005542249 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 05:14:43 np0005542249 kernel: SELinux:  policy capability always_check_network=0
Dec  2 05:14:43 np0005542249 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 05:14:43 np0005542249 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 05:14:43 np0005542249 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 05:14:52 np0005542249 kernel: SELinux:  Converting 385 SID table entries...
Dec  2 05:14:52 np0005542249 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 05:14:52 np0005542249 kernel: SELinux:  policy capability open_perms=1
Dec  2 05:14:52 np0005542249 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 05:14:52 np0005542249 kernel: SELinux:  policy capability always_check_network=0
Dec  2 05:14:52 np0005542249 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 05:14:52 np0005542249 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 05:14:52 np0005542249 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 05:14:53 np0005542249 setsebool[8199]: The virt_use_nfs policy boolean was changed to 1 by root
Dec  2 05:14:53 np0005542249 setsebool[8199]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec  2 05:15:04 np0005542249 kernel: SELinux:  Converting 388 SID table entries...
Dec  2 05:15:04 np0005542249 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 05:15:04 np0005542249 kernel: SELinux:  policy capability open_perms=1
Dec  2 05:15:04 np0005542249 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 05:15:04 np0005542249 kernel: SELinux:  policy capability always_check_network=0
Dec  2 05:15:04 np0005542249 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 05:15:04 np0005542249 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 05:15:04 np0005542249 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 05:15:23 np0005542249 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  2 05:15:23 np0005542249 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  2 05:15:23 np0005542249 systemd[1]: Starting man-db-cache-update.service...
Dec  2 05:15:24 np0005542249 systemd[1]: Reloading.
Dec  2 05:15:24 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:15:24 np0005542249 systemd[1]: Starting dnf makecache...
Dec  2 05:15:24 np0005542249 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  2 05:15:24 np0005542249 dnf[9120]: Failed determining last makecache time.
Dec  2 05:15:25 np0005542249 dnf[9120]: CentOS Stream 9 - BaseOS                         56 kB/s | 5.8 kB     00:00
Dec  2 05:15:25 np0005542249 dnf[9120]: CentOS Stream 9 - AppStream                      51 kB/s | 5.8 kB     00:00
Dec  2 05:15:25 np0005542249 dnf[9120]: CentOS Stream 9 - CRB                            61 kB/s | 5.7 kB     00:00
Dec  2 05:15:25 np0005542249 dnf[9120]: CentOS Stream 9 - Extras packages                80 kB/s | 8.1 kB     00:00
Dec  2 05:15:25 np0005542249 dnf[9120]: Metadata cache created.
Dec  2 05:15:25 np0005542249 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec  2 05:15:25 np0005542249 systemd[1]: Finished dnf makecache.
Dec  2 05:15:30 np0005542249 python3[14029]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-5d1a-56c5-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:15:31 np0005542249 kernel: evm: overlay not supported
Dec  2 05:15:31 np0005542249 systemd[4310]: Starting D-Bus User Message Bus...
Dec  2 05:15:31 np0005542249 dbus-broker-launch[14561]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec  2 05:15:31 np0005542249 dbus-broker-launch[14561]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec  2 05:15:31 np0005542249 systemd[4310]: Started D-Bus User Message Bus.
Dec  2 05:15:31 np0005542249 dbus-broker-lau[14561]: Ready
Dec  2 05:15:31 np0005542249 systemd[4310]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  2 05:15:31 np0005542249 systemd[4310]: Created slice Slice /user.
Dec  2 05:15:31 np0005542249 systemd[4310]: podman-14480.scope: unit configures an IP firewall, but not running as root.
Dec  2 05:15:31 np0005542249 systemd[4310]: (This warning is only shown for the first unit using IP firewalling.)
Dec  2 05:15:31 np0005542249 systemd[4310]: Started podman-14480.scope.
Dec  2 05:15:31 np0005542249 systemd[4310]: Started podman-pause-2aa5b999.scope.
Dec  2 05:15:32 np0005542249 python3[15004]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.129.56.7:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.129.56.7:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:15:32 np0005542249 python3[15004]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec  2 05:15:32 np0005542249 systemd[1]: session-5.scope: Deactivated successfully.
Dec  2 05:15:32 np0005542249 systemd[1]: session-5.scope: Consumed 1min 926ms CPU time.
Dec  2 05:15:32 np0005542249 systemd-logind[787]: Session 5 logged out. Waiting for processes to exit.
Dec  2 05:15:32 np0005542249 systemd-logind[787]: Removed session 5.
Dec  2 05:15:56 np0005542249 systemd-logind[787]: New session 6 of user zuul.
Dec  2 05:15:56 np0005542249 systemd[1]: Started Session 6 of User zuul.
Dec  2 05:15:56 np0005542249 python3[25143]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJYou/rw+lpfXrqujkrEAuJn6jBiihi1jHzRkZri11l5JvWIXvZqkITcjFbD2mVLU1KIaSqvuaUYVq6xH/9wQso= zuul@np0005542248.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:15:57 np0005542249 python3[25324]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJYou/rw+lpfXrqujkrEAuJn6jBiihi1jHzRkZri11l5JvWIXvZqkITcjFbD2mVLU1KIaSqvuaUYVq6xH/9wQso= zuul@np0005542248.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:15:58 np0005542249 python3[25684]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005542249.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec  2 05:15:58 np0005542249 python3[25932]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJYou/rw+lpfXrqujkrEAuJn6jBiihi1jHzRkZri11l5JvWIXvZqkITcjFbD2mVLU1KIaSqvuaUYVq6xH/9wQso= zuul@np0005542248.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 05:15:58 np0005542249 python3[26239]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:15:59 np0005542249 python3[26512]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764670558.6829643-135-129666315024274/source _original_basename=tmpctymhvez follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:16:00 np0005542249 python3[26867]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec  2 05:16:00 np0005542249 systemd[1]: Starting Hostname Service...
Dec  2 05:16:00 np0005542249 systemd[1]: Started Hostname Service.
Dec  2 05:16:00 np0005542249 systemd-hostnamed[26977]: Changed pretty hostname to 'compute-0'
Dec  2 05:16:00 np0005542249 systemd-hostnamed[26977]: Hostname set to <compute-0> (static)
Dec  2 05:16:00 np0005542249 NetworkManager[7197]: <info>  [1764670560.3098] hostname: static hostname changed from "np0005542249.novalocal" to "compute-0"
Dec  2 05:16:00 np0005542249 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  2 05:16:00 np0005542249 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  2 05:16:00 np0005542249 systemd[1]: session-6.scope: Deactivated successfully.
Dec  2 05:16:00 np0005542249 systemd[1]: session-6.scope: Consumed 2.216s CPU time.
Dec  2 05:16:00 np0005542249 systemd-logind[787]: Session 6 logged out. Waiting for processes to exit.
Dec  2 05:16:00 np0005542249 systemd-logind[787]: Removed session 6.
Dec  2 05:16:04 np0005542249 irqbalance[782]: Cannot change IRQ 27 affinity: Operation not permitted
Dec  2 05:16:04 np0005542249 irqbalance[782]: IRQ 27 affinity is now unmanaged
Dec  2 05:16:10 np0005542249 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  2 05:16:11 np0005542249 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  2 05:16:11 np0005542249 systemd[1]: Finished man-db-cache-update.service.
Dec  2 05:16:11 np0005542249 systemd[1]: man-db-cache-update.service: Consumed 53.862s CPU time.
Dec  2 05:16:11 np0005542249 systemd[1]: run-r5607ef5c77f94e9ab8f05800d1fa6187.service: Deactivated successfully.
Dec  2 05:16:30 np0005542249 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  2 05:18:41 np0005542249 systemd[1]: Starting Cleanup of Temporary Directories...
Dec  2 05:18:41 np0005542249 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec  2 05:18:41 np0005542249 systemd[1]: Finished Cleanup of Temporary Directories.
Dec  2 05:18:41 np0005542249 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec  2 05:20:05 np0005542249 systemd-logind[787]: New session 7 of user zuul.
Dec  2 05:20:05 np0005542249 systemd[1]: Started Session 7 of User zuul.
Dec  2 05:20:06 np0005542249 python3[30044]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:20:07 np0005542249 python3[30160]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:20:08 np0005542249 python3[30233]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764670807.489791-33566-243566709540074/source mode=0755 _original_basename=delorean.repo follow=False checksum=39c885eb875fd03e010d1b0454241c26b121dfb2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:20:08 np0005542249 python3[30259]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:20:08 np0005542249 python3[30332]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764670807.489791-33566-243566709540074/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:20:09 np0005542249 python3[30358]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:20:09 np0005542249 python3[30431]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764670807.489791-33566-243566709540074/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:20:09 np0005542249 python3[30457]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:20:10 np0005542249 python3[30530]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764670807.489791-33566-243566709540074/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:20:10 np0005542249 python3[30556]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:20:10 np0005542249 python3[30629]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764670807.489791-33566-243566709540074/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:20:10 np0005542249 python3[30655]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:20:11 np0005542249 python3[30728]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764670807.489791-33566-243566709540074/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:20:11 np0005542249 python3[30754]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:20:12 np0005542249 python3[30827]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764670807.489791-33566-243566709540074/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6e18e2038d54303b4926db53c0b6cced515a9151 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:20:23 np0005542249 python3[30885]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:25:22 np0005542249 systemd[1]: session-7.scope: Deactivated successfully.
Dec  2 05:25:22 np0005542249 systemd[1]: session-7.scope: Consumed 5.282s CPU time.
Dec  2 05:25:22 np0005542249 systemd-logind[787]: Session 7 logged out. Waiting for processes to exit.
Dec  2 05:25:22 np0005542249 systemd-logind[787]: Removed session 7.
Dec  2 05:39:38 np0005542249 systemd-logind[787]: New session 8 of user zuul.
Dec  2 05:39:38 np0005542249 systemd[1]: Started Session 8 of User zuul.
Dec  2 05:39:39 np0005542249 python3.9[31074]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:39:41 np0005542249 python3.9[31255]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:39:49 np0005542249 systemd[1]: session-8.scope: Deactivated successfully.
Dec  2 05:39:49 np0005542249 systemd[1]: session-8.scope: Consumed 8.298s CPU time.
Dec  2 05:39:49 np0005542249 systemd-logind[787]: Session 8 logged out. Waiting for processes to exit.
Dec  2 05:39:49 np0005542249 systemd-logind[787]: Removed session 8.
Dec  2 05:40:04 np0005542249 systemd-logind[787]: New session 9 of user zuul.
Dec  2 05:40:04 np0005542249 systemd[1]: Started Session 9 of User zuul.
Dec  2 05:40:05 np0005542249 python3.9[31468]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  2 05:40:06 np0005542249 python3.9[31642]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:40:07 np0005542249 python3.9[31794]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:40:08 np0005542249 python3.9[31947]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 05:40:09 np0005542249 python3.9[32099]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:40:10 np0005542249 python3.9[32251]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:40:11 np0005542249 python3.9[32374]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764672010.0542145-73-199412748984338/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:40:12 np0005542249 python3.9[32526]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:40:12 np0005542249 python3.9[32682]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:40:13 np0005542249 python3.9[32834]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:40:14 np0005542249 python3.9[32984]: ansible-ansible.builtin.service_facts Invoked
Dec  2 05:40:18 np0005542249 python3.9[33237]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:40:19 np0005542249 python3.9[33387]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:40:20 np0005542249 python3.9[33541]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:40:21 np0005542249 python3.9[33699]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 05:40:22 np0005542249 python3.9[33783]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 05:41:14 np0005542249 systemd[1]: Reloading.
Dec  2 05:41:14 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:41:14 np0005542249 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec  2 05:41:15 np0005542249 systemd[1]: Reloading.
Dec  2 05:41:15 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:41:15 np0005542249 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec  2 05:41:15 np0005542249 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec  2 05:41:15 np0005542249 systemd[1]: Reloading.
Dec  2 05:41:15 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:41:15 np0005542249 systemd[1]: Listening on LVM2 poll daemon socket.
Dec  2 05:41:17 np0005542249 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Dec  2 05:41:17 np0005542249 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Dec  2 05:41:17 np0005542249 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Dec  2 05:42:36 np0005542249 kernel: SELinux:  Converting 2718 SID table entries...
Dec  2 05:42:36 np0005542249 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 05:42:36 np0005542249 kernel: SELinux:  policy capability open_perms=1
Dec  2 05:42:36 np0005542249 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 05:42:36 np0005542249 kernel: SELinux:  policy capability always_check_network=0
Dec  2 05:42:36 np0005542249 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 05:42:36 np0005542249 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 05:42:36 np0005542249 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 05:42:36 np0005542249 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec  2 05:42:37 np0005542249 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  2 05:42:37 np0005542249 systemd[1]: Starting man-db-cache-update.service...
Dec  2 05:42:37 np0005542249 systemd[1]: Reloading.
Dec  2 05:42:37 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:42:37 np0005542249 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  2 05:42:38 np0005542249 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  2 05:42:38 np0005542249 systemd[1]: Finished man-db-cache-update.service.
Dec  2 05:42:38 np0005542249 systemd[1]: man-db-cache-update.service: Consumed 1.386s CPU time.
Dec  2 05:42:38 np0005542249 systemd[1]: run-rebce1213c56743d4a07a631972802c83.service: Deactivated successfully.
Dec  2 05:42:38 np0005542249 python3.9[35342]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:42:40 np0005542249 python3.9[35625]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  2 05:42:41 np0005542249 python3.9[35777]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  2 05:42:43 np0005542249 python3.9[35930]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:42:44 np0005542249 python3.9[36082]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec  2 05:42:46 np0005542249 python3.9[36234]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:42:46 np0005542249 python3.9[36386]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:42:47 np0005542249 python3.9[36509]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764672166.2960963-236-81979316254371/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=d7146d9f3fadd3a8b8a7aad758bb65bc8e959c93 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:42:50 np0005542249 python3.9[36661]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 05:42:52 np0005542249 python3.9[36813]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:42:53 np0005542249 python3.9[36966]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:42:54 np0005542249 python3.9[37118]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  2 05:42:54 np0005542249 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 05:42:55 np0005542249 python3.9[37272]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  2 05:42:56 np0005542249 python3.9[37430]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  2 05:42:57 np0005542249 python3.9[37590]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  2 05:42:57 np0005542249 python3.9[37743]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  2 05:42:58 np0005542249 python3.9[37901]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec  2 05:42:59 np0005542249 python3.9[38053]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 05:43:01 np0005542249 python3.9[38206]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:43:02 np0005542249 python3.9[38358]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:43:03 np0005542249 python3.9[38481]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764672182.044045-355-12460248844742/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:43:04 np0005542249 python3.9[38633]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 05:43:04 np0005542249 systemd[1]: Starting Load Kernel Modules...
Dec  2 05:43:04 np0005542249 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec  2 05:43:04 np0005542249 kernel: Bridge firewalling registered
Dec  2 05:43:04 np0005542249 systemd-modules-load[38637]: Inserted module 'br_netfilter'
Dec  2 05:43:04 np0005542249 systemd[1]: Finished Load Kernel Modules.
Dec  2 05:43:05 np0005542249 python3.9[38792]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:43:05 np0005542249 python3.9[38915]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764672184.5798485-378-65740555027291/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:43:06 np0005542249 python3.9[39067]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 05:43:09 np0005542249 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Dec  2 05:43:09 np0005542249 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Dec  2 05:43:10 np0005542249 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  2 05:43:10 np0005542249 systemd[1]: Starting man-db-cache-update.service...
Dec  2 05:43:10 np0005542249 systemd[1]: Reloading.
Dec  2 05:43:10 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:43:10 np0005542249 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  2 05:43:12 np0005542249 python3.9[40249]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 05:43:12 np0005542249 python3.9[41197]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  2 05:43:13 np0005542249 python3.9[41967]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 05:43:14 np0005542249 python3.9[42855]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:43:14 np0005542249 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  2 05:43:14 np0005542249 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  2 05:43:14 np0005542249 systemd[1]: Finished man-db-cache-update.service.
Dec  2 05:43:14 np0005542249 systemd[1]: man-db-cache-update.service: Consumed 5.096s CPU time.
Dec  2 05:43:14 np0005542249 systemd[1]: run-rc725e9e931b04bbd869e5702b400ffe3.service: Deactivated successfully.
Dec  2 05:43:14 np0005542249 systemd[1]: Starting Authorization Manager...
Dec  2 05:43:14 np0005542249 polkitd[43476]: Started polkitd version 0.117
Dec  2 05:43:14 np0005542249 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  2 05:43:14 np0005542249 systemd[1]: Started Authorization Manager.
Dec  2 05:43:15 np0005542249 python3.9[43646]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 05:43:15 np0005542249 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  2 05:43:15 np0005542249 systemd[1]: tuned.service: Deactivated successfully.
Dec  2 05:43:15 np0005542249 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  2 05:43:15 np0005542249 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  2 05:43:16 np0005542249 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  2 05:43:16 np0005542249 python3.9[43808]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  2 05:43:18 np0005542249 python3.9[43960]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 05:43:19 np0005542249 systemd[1]: Reloading.
Dec  2 05:43:19 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:43:20 np0005542249 python3.9[44149]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 05:43:20 np0005542249 systemd[1]: Reloading.
Dec  2 05:43:20 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:43:21 np0005542249 python3.9[44338]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:43:21 np0005542249 python3.9[44491]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:43:21 np0005542249 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec  2 05:43:22 np0005542249 python3.9[44644]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:43:24 np0005542249 python3.9[44806]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:43:25 np0005542249 python3.9[44959]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 05:43:25 np0005542249 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  2 05:43:25 np0005542249 systemd[1]: Stopped Apply Kernel Variables.
Dec  2 05:43:25 np0005542249 systemd[1]: Stopping Apply Kernel Variables...
Dec  2 05:43:25 np0005542249 systemd[1]: Starting Apply Kernel Variables...
Dec  2 05:43:25 np0005542249 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  2 05:43:25 np0005542249 systemd[1]: Finished Apply Kernel Variables.
Dec  2 05:43:26 np0005542249 systemd[1]: session-9.scope: Deactivated successfully.
Dec  2 05:43:26 np0005542249 systemd[1]: session-9.scope: Consumed 2min 24.468s CPU time.
Dec  2 05:43:26 np0005542249 systemd-logind[787]: Session 9 logged out. Waiting for processes to exit.
Dec  2 05:43:26 np0005542249 systemd-logind[787]: Removed session 9.
Dec  2 05:43:32 np0005542249 systemd-logind[787]: New session 10 of user zuul.
Dec  2 05:43:32 np0005542249 systemd[1]: Started Session 10 of User zuul.
Dec  2 05:43:33 np0005542249 python3.9[45142]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:43:34 np0005542249 python3.9[45298]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec  2 05:43:35 np0005542249 python3.9[45451]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  2 05:43:36 np0005542249 python3.9[45609]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  2 05:43:37 np0005542249 python3.9[45769]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 05:43:38 np0005542249 python3.9[45853]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  2 05:43:41 np0005542249 python3.9[46017]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 05:43:53 np0005542249 kernel: SELinux:  Converting 2730 SID table entries...
Dec  2 05:43:53 np0005542249 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 05:43:53 np0005542249 kernel: SELinux:  policy capability open_perms=1
Dec  2 05:43:53 np0005542249 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 05:43:53 np0005542249 kernel: SELinux:  policy capability always_check_network=0
Dec  2 05:43:53 np0005542249 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 05:43:53 np0005542249 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 05:43:53 np0005542249 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 05:43:53 np0005542249 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec  2 05:43:53 np0005542249 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec  2 05:43:54 np0005542249 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  2 05:43:54 np0005542249 systemd[1]: Starting man-db-cache-update.service...
Dec  2 05:43:55 np0005542249 systemd[1]: Reloading.
Dec  2 05:43:55 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:43:55 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:43:55 np0005542249 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  2 05:43:55 np0005542249 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  2 05:43:55 np0005542249 systemd[1]: Finished man-db-cache-update.service.
Dec  2 05:43:55 np0005542249 systemd[1]: run-raf38d21e66ac48e494a6e6cbb9d26b0d.service: Deactivated successfully.
Dec  2 05:43:56 np0005542249 python3.9[47115]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  2 05:43:57 np0005542249 systemd[1]: Reloading.
Dec  2 05:43:57 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:43:57 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:43:57 np0005542249 systemd[1]: Starting Open vSwitch Database Unit...
Dec  2 05:43:57 np0005542249 chown[47157]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec  2 05:43:57 np0005542249 ovs-ctl[47162]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec  2 05:43:57 np0005542249 ovs-ctl[47162]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec  2 05:43:57 np0005542249 ovs-ctl[47162]: Starting ovsdb-server [  OK  ]
Dec  2 05:43:57 np0005542249 ovs-vsctl[47211]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec  2 05:43:57 np0005542249 ovs-vsctl[47231]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec  2 05:43:57 np0005542249 ovs-ctl[47162]: Configuring Open vSwitch system IDs [  OK  ]
Dec  2 05:43:57 np0005542249 ovs-ctl[47162]: Enabling remote OVSDB managers [  OK  ]
Dec  2 05:43:57 np0005542249 ovs-vsctl[47237]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  2 05:43:57 np0005542249 systemd[1]: Started Open vSwitch Database Unit.
Dec  2 05:43:57 np0005542249 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec  2 05:43:57 np0005542249 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec  2 05:43:57 np0005542249 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec  2 05:43:57 np0005542249 kernel: openvswitch: Open vSwitch switching datapath
Dec  2 05:43:57 np0005542249 ovs-ctl[47281]: Inserting openvswitch module [  OK  ]
Dec  2 05:43:57 np0005542249 ovs-ctl[47250]: Starting ovs-vswitchd [  OK  ]
Dec  2 05:43:57 np0005542249 ovs-vsctl[47298]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  2 05:43:57 np0005542249 ovs-ctl[47250]: Enabling remote OVSDB managers [  OK  ]
Dec  2 05:43:57 np0005542249 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec  2 05:43:57 np0005542249 systemd[1]: Starting Open vSwitch...
Dec  2 05:43:57 np0005542249 systemd[1]: Finished Open vSwitch.
Dec  2 05:43:58 np0005542249 python3.9[47450]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:43:59 np0005542249 python3.9[47602]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec  2 05:44:00 np0005542249 kernel: SELinux:  Converting 2744 SID table entries...
Dec  2 05:44:00 np0005542249 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 05:44:00 np0005542249 kernel: SELinux:  policy capability open_perms=1
Dec  2 05:44:00 np0005542249 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 05:44:00 np0005542249 kernel: SELinux:  policy capability always_check_network=0
Dec  2 05:44:00 np0005542249 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 05:44:00 np0005542249 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 05:44:00 np0005542249 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 05:44:01 np0005542249 python3.9[47757]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:44:02 np0005542249 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec  2 05:44:02 np0005542249 python3.9[47915]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 05:44:04 np0005542249 python3.9[48068]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:44:06 np0005542249 python3.9[48355]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  2 05:44:07 np0005542249 python3.9[48505]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 05:44:08 np0005542249 python3.9[48659]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 05:44:09 np0005542249 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  2 05:44:09 np0005542249 systemd[1]: Starting man-db-cache-update.service...
Dec  2 05:44:09 np0005542249 systemd[1]: Reloading.
Dec  2 05:44:10 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:44:10 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:44:10 np0005542249 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  2 05:44:10 np0005542249 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  2 05:44:10 np0005542249 systemd[1]: Finished man-db-cache-update.service.
Dec  2 05:44:10 np0005542249 systemd[1]: run-r3427fe1d88924a689daa593defafa78c.service: Deactivated successfully.
Dec  2 05:44:11 np0005542249 python3.9[48976]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 05:44:11 np0005542249 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  2 05:44:11 np0005542249 systemd[1]: Stopped Network Manager Wait Online.
Dec  2 05:44:11 np0005542249 systemd[1]: Stopping Network Manager Wait Online...
Dec  2 05:44:11 np0005542249 NetworkManager[7197]: <info>  [1764672251.3953] caught SIGTERM, shutting down normally.
Dec  2 05:44:11 np0005542249 systemd[1]: Stopping Network Manager...
Dec  2 05:44:11 np0005542249 NetworkManager[7197]: <info>  [1764672251.3977] dhcp4 (eth0): canceled DHCP transaction
Dec  2 05:44:11 np0005542249 NetworkManager[7197]: <info>  [1764672251.3977] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  2 05:44:11 np0005542249 NetworkManager[7197]: <info>  [1764672251.3977] dhcp4 (eth0): state changed no lease
Dec  2 05:44:11 np0005542249 NetworkManager[7197]: <info>  [1764672251.3982] manager: NetworkManager state is now CONNECTED_SITE
Dec  2 05:44:11 np0005542249 NetworkManager[7197]: <info>  [1764672251.4044] exiting (success)
Dec  2 05:44:11 np0005542249 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  2 05:44:11 np0005542249 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  2 05:44:11 np0005542249 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  2 05:44:11 np0005542249 systemd[1]: Stopped Network Manager.
Dec  2 05:44:11 np0005542249 systemd[1]: NetworkManager.service: Consumed 15.360s CPU time, 4.1M memory peak, read 0B from disk, written 12.0K to disk.
Dec  2 05:44:11 np0005542249 systemd[1]: Starting Network Manager...
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.4895] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:34cc4f94-0800-49e6-880f-a0b8f85957c9)
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.4896] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.4963] manager[0x563eb353a090]: monitoring kernel firmware directory '/lib/firmware'.
Dec  2 05:44:11 np0005542249 systemd[1]: Starting Hostname Service...
Dec  2 05:44:11 np0005542249 systemd[1]: Started Hostname Service.
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6107] hostname: hostname: using hostnamed
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6108] hostname: static hostname changed from (none) to "compute-0"
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6116] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6123] manager[0x563eb353a090]: rfkill: Wi-Fi hardware radio set enabled
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6123] manager[0x563eb353a090]: rfkill: WWAN hardware radio set enabled
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6161] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6176] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6177] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6178] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6178] manager: Networking is enabled by state file
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6182] settings: Loaded settings plugin: keyfile (internal)
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6188] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6231] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6245] dhcp: init: Using DHCP client 'internal'
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6251] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6259] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6269] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6283] device (lo): Activation: starting connection 'lo' (7b4d0754-52f9-443a-a1c7-3b9ed959a3a9)
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6295] device (eth0): carrier: link connected
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6304] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6312] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6312] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6319] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6326] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6333] device (eth1): carrier: link connected
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6338] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6343] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (d9158bdb-1886-5d5e-a36e-fd98f73b7772) (indicated)
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6344] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6349] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6357] device (eth1): Activation: starting connection 'ci-private-network' (d9158bdb-1886-5d5e-a36e-fd98f73b7772)
Dec  2 05:44:11 np0005542249 systemd[1]: Started Network Manager.
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6368] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6386] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6392] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6396] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6400] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6404] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6407] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6411] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6418] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6435] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6441] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6471] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6483] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6489] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6491] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6495] device (lo): Activation: successful, device activated.
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6508] dhcp4 (eth0): state changed new lease, address=38.102.83.233
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6515] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6589] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6596] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6602] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6605] manager: NetworkManager state is now CONNECTED_LOCAL
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6608] device (eth1): Activation: successful, device activated.
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6622] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6623] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6626] manager: NetworkManager state is now CONNECTED_SITE
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6629] device (eth0): Activation: successful, device activated.
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6634] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  2 05:44:11 np0005542249 systemd[1]: Starting Network Manager Wait Online...
Dec  2 05:44:11 np0005542249 NetworkManager[48987]: <info>  [1764672251.6636] manager: startup complete
Dec  2 05:44:11 np0005542249 systemd[1]: Finished Network Manager Wait Online.
Dec  2 05:44:12 np0005542249 python3.9[49202]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 05:44:19 np0005542249 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  2 05:44:19 np0005542249 systemd[1]: Starting man-db-cache-update.service...
Dec  2 05:44:19 np0005542249 systemd[1]: Reloading.
Dec  2 05:44:19 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:44:19 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:44:19 np0005542249 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  2 05:44:20 np0005542249 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  2 05:44:20 np0005542249 systemd[1]: Finished man-db-cache-update.service.
Dec  2 05:44:20 np0005542249 systemd[1]: run-rb80893473f7d43ddae018f60b5954084.service: Deactivated successfully.
Dec  2 05:44:21 np0005542249 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  2 05:44:21 np0005542249 python3.9[49661]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 05:44:22 np0005542249 python3.9[49813]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:44:23 np0005542249 python3.9[49967]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:44:24 np0005542249 python3.9[50119]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:44:25 np0005542249 python3.9[50271]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:44:25 np0005542249 python3.9[50423]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:44:26 np0005542249 python3.9[50575]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:44:27 np0005542249 python3.9[50698]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764672265.9429286-229-132866412890057/.source _original_basename=._d6ekd8e follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:44:27 np0005542249 python3.9[50850]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:44:28 np0005542249 python3.9[51002]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec  2 05:44:29 np0005542249 python3.9[51154]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:44:31 np0005542249 python3.9[51581]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec  2 05:44:32 np0005542249 ansible-async_wrapper.py[51756]: Invoked with j38882736242 300 /home/zuul/.ansible/tmp/ansible-tmp-1764672271.9495418-295-201303532720008/AnsiballZ_edpm_os_net_config.py _
Dec  2 05:44:32 np0005542249 ansible-async_wrapper.py[51759]: Starting module and watcher
Dec  2 05:44:32 np0005542249 ansible-async_wrapper.py[51759]: Start watching 51760 (300)
Dec  2 05:44:32 np0005542249 ansible-async_wrapper.py[51760]: Start module (51760)
Dec  2 05:44:32 np0005542249 ansible-async_wrapper.py[51756]: Return async_wrapper task started.
Dec  2 05:44:33 np0005542249 python3.9[51761]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec  2 05:44:34 np0005542249 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec  2 05:44:34 np0005542249 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec  2 05:44:34 np0005542249 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec  2 05:44:34 np0005542249 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec  2 05:44:34 np0005542249 kernel: cfg80211: failed to load regulatory.db
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.2550] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51762 uid=0 result="success"
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.2577] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51762 uid=0 result="success"
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3202] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3204] audit: op="connection-add" uuid="39f0e571-66eb-4013-8c36-9853d4140748" name="br-ex-br" pid=51762 uid=0 result="success"
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3219] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3221] audit: op="connection-add" uuid="41e45e90-fcf7-4c8e-803a-05534de54087" name="br-ex-port" pid=51762 uid=0 result="success"
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3233] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3235] audit: op="connection-add" uuid="461407fe-7153-411e-b968-ba0628f12791" name="eth1-port" pid=51762 uid=0 result="success"
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3246] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3248] audit: op="connection-add" uuid="fde3a96a-ec07-4759-85c8-dec0c7456bc8" name="vlan20-port" pid=51762 uid=0 result="success"
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3259] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3261] audit: op="connection-add" uuid="dcaf73be-ae8e-408b-8af2-9d3bda1334dd" name="vlan21-port" pid=51762 uid=0 result="success"
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3272] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3274] audit: op="connection-add" uuid="3fd24f55-6188-44e2-b1be-4e378de5121b" name="vlan22-port" pid=51762 uid=0 result="success"
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3285] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3286] audit: op="connection-add" uuid="4e9733e3-193f-491e-bc00-ba8fe6bf6657" name="vlan23-port" pid=51762 uid=0 result="success"
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3305] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp" pid=51762 uid=0 result="success"
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3321] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3323] audit: op="connection-add" uuid="aab8fe73-d159-48a1-af81-d10803d5bc39" name="br-ex-if" pid=51762 uid=0 result="success"
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3390] audit: op="connection-update" uuid="d9158bdb-1886-5d5e-a36e-fd98f73b7772" name="ci-private-network" args="ovs-interface.type,ipv4.addresses,ipv4.method,ipv4.dns,ipv4.routes,ipv4.never-default,ipv4.routing-rules,ipv6.addresses,ipv6.addr-gen-mode,ipv6.dns,ipv6.routes,ipv6.method,ipv6.routing-rules,connection.master,connection.timestamp,connection.controller,connection.port-type,connection.slave-type,ovs-external-ids.data" pid=51762 uid=0 result="success"
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3406] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3408] audit: op="connection-add" uuid="117e0b3b-57a1-40d3-8e87-2d5be930ebf8" name="vlan20-if" pid=51762 uid=0 result="success"
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3425] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3427] audit: op="connection-add" uuid="a88726f4-5c7b-4921-b6be-368487724a81" name="vlan21-if" pid=51762 uid=0 result="success"
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3444] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3446] audit: op="connection-add" uuid="07663303-a068-4b15-ac62-6de56bf037c8" name="vlan22-if" pid=51762 uid=0 result="success"
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3462] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3464] audit: op="connection-add" uuid="b57d0dfd-0763-4190-9e22-b267b21ce78c" name="vlan23-if" pid=51762 uid=0 result="success"
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3479] audit: op="connection-delete" uuid="b7a7d1c3-8714-37fa-b787-ecc28ef4bc4a" name="Wired connection 1" pid=51762 uid=0 result="success"
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3495] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3507] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3511] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (39f0e571-66eb-4013-8c36-9853d4140748)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3512] audit: op="connection-activate" uuid="39f0e571-66eb-4013-8c36-9853d4140748" name="br-ex-br" pid=51762 uid=0 result="success"
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3514] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3519] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3523] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (41e45e90-fcf7-4c8e-803a-05534de54087)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3526] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3531] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3536] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (461407fe-7153-411e-b968-ba0628f12791)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3539] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3546] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3550] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (fde3a96a-ec07-4759-85c8-dec0c7456bc8)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3552] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3558] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3562] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (dcaf73be-ae8e-408b-8af2-9d3bda1334dd)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3564] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3570] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3574] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (3fd24f55-6188-44e2-b1be-4e378de5121b)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3576] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3582] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3585] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (4e9733e3-193f-491e-bc00-ba8fe6bf6657)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3586] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3589] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3591] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3597] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3601] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3605] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (aab8fe73-d159-48a1-af81-d10803d5bc39)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3606] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3609] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3612] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3613] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3615] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3625] device (eth1): disconnecting for new activation request.
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3626] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3628] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3630] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3632] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3634] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3638] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3643] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (117e0b3b-57a1-40d3-8e87-2d5be930ebf8)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3644] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3648] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3651] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3653] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3655] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3660] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3664] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (a88726f4-5c7b-4921-b6be-368487724a81)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3665] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3669] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3672] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3673] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3677] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3683] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3687] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (07663303-a068-4b15-ac62-6de56bf037c8)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3688] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3691] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3694] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3696] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3698] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3703] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3708] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (b57d0dfd-0763-4190-9e22-b267b21ce78c)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3709] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3713] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3715] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3717] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3719] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3733] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority" pid=51762 uid=0 result="success"
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3736] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3739] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3741] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3748] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3752] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3755] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3759] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3761] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3765] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 kernel: ovs-system: entered promiscuous mode
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3770] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3773] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3774] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3778] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3782] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3784] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3786] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3790] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3794] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3797] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3798] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 systemd-udevd[51768]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 05:44:35 np0005542249 kernel: Timeout policy base is empty
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3803] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3806] dhcp4 (eth0): canceled DHCP transaction
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3807] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3807] dhcp4 (eth0): state changed no lease
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3808] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3819] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3821] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51762 uid=0 result="fail" reason="Device is not activated"
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3856] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3862] dhcp4 (eth0): state changed new lease, address=38.102.83.233
Dec  2 05:44:35 np0005542249 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3895] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3905] device (eth1): disconnecting for new activation request.
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3905] audit: op="connection-activate" uuid="d9158bdb-1886-5d5e-a36e-fd98f73b7772" name="ci-private-network" pid=51762 uid=0 result="success"
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3907] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3924] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.3952] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51762 uid=0 result="success"
Dec  2 05:44:35 np0005542249 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4098] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec  2 05:44:35 np0005542249 kernel: br-ex: entered promiscuous mode
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4246] device (eth1): Activation: starting connection 'ci-private-network' (d9158bdb-1886-5d5e-a36e-fd98f73b7772)
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4256] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4258] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4267] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4269] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4271] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4272] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4274] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4276] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4277] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4287] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4298] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4301] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4305] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4309] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4312] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4315] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4318] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4322] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4326] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4329] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4332] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4336] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4339] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4343] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4348] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4357] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 kernel: vlan22: entered promiscuous mode
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4395] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec  2 05:44:35 np0005542249 systemd-udevd[51767]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4417] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4426] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4427] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4431] device (eth1): Activation: successful, device activated.
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4445] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4446] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4451] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  2 05:44:35 np0005542249 kernel: vlan21: entered promiscuous mode
Dec  2 05:44:35 np0005542249 kernel: vlan23: entered promiscuous mode
Dec  2 05:44:35 np0005542249 systemd-udevd[51766]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4568] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4586] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4591] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4615] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4627] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4630] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4638] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4645] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4647] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4653] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  2 05:44:35 np0005542249 kernel: vlan20: entered promiscuous mode
Dec  2 05:44:35 np0005542249 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4753] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4769] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4803] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4804] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4812] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4852] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4864] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4885] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4889] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 05:44:35 np0005542249 NetworkManager[48987]: <info>  [1764672275.4899] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  2 05:44:36 np0005542249 python3.9[52120]: ansible-ansible.legacy.async_status Invoked with jid=j38882736242.51756 mode=status _async_dir=/root/.ansible_async
Dec  2 05:44:36 np0005542249 NetworkManager[48987]: <info>  [1764672276.6684] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51762 uid=0 result="success"
Dec  2 05:44:36 np0005542249 NetworkManager[48987]: <info>  [1764672276.8777] checkpoint[0x563eb3510950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec  2 05:44:36 np0005542249 NetworkManager[48987]: <info>  [1764672276.8780] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51762 uid=0 result="success"
Dec  2 05:44:37 np0005542249 NetworkManager[48987]: <info>  [1764672277.1882] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51762 uid=0 result="success"
Dec  2 05:44:37 np0005542249 NetworkManager[48987]: <info>  [1764672277.1893] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51762 uid=0 result="success"
Dec  2 05:44:37 np0005542249 NetworkManager[48987]: <info>  [1764672277.3929] audit: op="networking-control" arg="global-dns-configuration" pid=51762 uid=0 result="success"
Dec  2 05:44:37 np0005542249 NetworkManager[48987]: <info>  [1764672277.3971] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec  2 05:44:37 np0005542249 NetworkManager[48987]: <info>  [1764672277.4013] audit: op="networking-control" arg="global-dns-configuration" pid=51762 uid=0 result="success"
Dec  2 05:44:37 np0005542249 NetworkManager[48987]: <info>  [1764672277.4034] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51762 uid=0 result="success"
Dec  2 05:44:37 np0005542249 NetworkManager[48987]: <info>  [1764672277.5469] checkpoint[0x563eb3510a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec  2 05:44:37 np0005542249 NetworkManager[48987]: <info>  [1764672277.5481] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51762 uid=0 result="success"
Dec  2 05:44:37 np0005542249 ansible-async_wrapper.py[51760]: Module complete (51760)
Dec  2 05:44:37 np0005542249 ansible-async_wrapper.py[51759]: Done in kid B.
Dec  2 05:44:40 np0005542249 python3.9[52226]: ansible-ansible.legacy.async_status Invoked with jid=j38882736242.51756 mode=status _async_dir=/root/.ansible_async
Dec  2 05:44:40 np0005542249 python3.9[52326]: ansible-ansible.legacy.async_status Invoked with jid=j38882736242.51756 mode=cleanup _async_dir=/root/.ansible_async
Dec  2 05:44:41 np0005542249 python3.9[52478]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:44:41 np0005542249 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  2 05:44:41 np0005542249 python3.9[52604]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764672280.9197986-322-276492678853682/.source.returncode _original_basename=.76m70yfg follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:44:42 np0005542249 python3.9[52756]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:44:43 np0005542249 python3.9[52879]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764672282.2489033-338-206847802140528/.source.cfg _original_basename=.wuca8kom follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:44:44 np0005542249 python3.9[53032]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 05:44:44 np0005542249 systemd[1]: Reloading Network Manager...
Dec  2 05:44:44 np0005542249 NetworkManager[48987]: <info>  [1764672284.4323] audit: op="reload" arg="0" pid=53036 uid=0 result="success"
Dec  2 05:44:44 np0005542249 NetworkManager[48987]: <info>  [1764672284.4331] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec  2 05:44:44 np0005542249 systemd[1]: Reloaded Network Manager.
Dec  2 05:44:44 np0005542249 systemd[1]: session-10.scope: Deactivated successfully.
Dec  2 05:44:44 np0005542249 systemd[1]: session-10.scope: Consumed 52.231s CPU time.
Dec  2 05:44:44 np0005542249 systemd-logind[787]: Session 10 logged out. Waiting for processes to exit.
Dec  2 05:44:44 np0005542249 systemd-logind[787]: Removed session 10.
Dec  2 05:44:51 np0005542249 systemd-logind[787]: New session 11 of user zuul.
Dec  2 05:44:51 np0005542249 systemd[1]: Started Session 11 of User zuul.
Dec  2 05:44:52 np0005542249 python3.9[53220]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:44:53 np0005542249 python3.9[53374]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 05:44:54 np0005542249 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  2 05:44:54 np0005542249 python3.9[53568]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:44:54 np0005542249 systemd[1]: session-11.scope: Deactivated successfully.
Dec  2 05:44:54 np0005542249 systemd[1]: session-11.scope: Consumed 2.744s CPU time.
Dec  2 05:44:54 np0005542249 systemd-logind[787]: Session 11 logged out. Waiting for processes to exit.
Dec  2 05:44:54 np0005542249 systemd-logind[787]: Removed session 11.
Dec  2 05:45:01 np0005542249 systemd-logind[787]: New session 12 of user zuul.
Dec  2 05:45:01 np0005542249 systemd[1]: Started Session 12 of User zuul.
Dec  2 05:45:02 np0005542249 python3.9[53750]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:45:03 np0005542249 python3.9[53905]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:45:04 np0005542249 python3.9[54061]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 05:45:05 np0005542249 python3.9[54145]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 05:45:08 np0005542249 python3.9[54299]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 05:45:09 np0005542249 python3.9[54494]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:45:10 np0005542249 python3.9[54646]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:45:10 np0005542249 systemd[1]: var-lib-containers-storage-overlay-compat3229028657-merged.mount: Deactivated successfully.
Dec  2 05:45:10 np0005542249 podman[54647]: 2025-12-02 10:45:10.490451855 +0000 UTC m=+0.065009187 system refresh
Dec  2 05:45:11 np0005542249 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 05:45:11 np0005542249 python3.9[54809]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:45:12 np0005542249 python3.9[54932]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764672310.7403412-79-162156774263329/.source.json follow=False _original_basename=podman_network_config.j2 checksum=6a02ba5b5b949424a9a8585a871d0417202c117d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:45:13 np0005542249 python3.9[55084]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:45:13 np0005542249 python3.9[55207]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764672312.630935-94-945905784649/.source.conf follow=False _original_basename=registries.conf.j2 checksum=1983ff7e7b6a6f7e1516978c4a4c65be23e75b30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:45:14 np0005542249 python3.9[55359]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:45:15 np0005542249 python3.9[55511]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:45:16 np0005542249 python3.9[55663]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:45:17 np0005542249 python3.9[55815]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:45:17 np0005542249 python3.9[55967]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 05:45:20 np0005542249 python3.9[56120]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:45:21 np0005542249 python3.9[56274]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 05:45:21 np0005542249 python3.9[56426]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 05:45:22 np0005542249 python3.9[56578]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:45:23 np0005542249 python3.9[56731]: ansible-service_facts Invoked
Dec  2 05:45:23 np0005542249 network[56748]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  2 05:45:23 np0005542249 network[56749]: 'network-scripts' will be removed from distribution in near future.
Dec  2 05:45:23 np0005542249 network[56750]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  2 05:45:28 np0005542249 python3.9[57202]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 05:45:31 np0005542249 python3.9[57355]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec  2 05:45:32 np0005542249 python3.9[57507]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:45:33 np0005542249 python3.9[57632]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764672332.1345756-238-80696587402521/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:45:34 np0005542249 python3.9[57786]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:45:35 np0005542249 python3.9[57911]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764672333.864125-253-137696970420462/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:45:36 np0005542249 python3.9[58065]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:45:37 np0005542249 python3.9[58219]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 05:45:38 np0005542249 python3.9[58303]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 05:45:40 np0005542249 python3.9[58457]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 05:45:40 np0005542249 python3.9[58541]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 05:45:40 np0005542249 chronyd[791]: chronyd exiting
Dec  2 05:45:40 np0005542249 systemd[1]: Stopping NTP client/server...
Dec  2 05:45:40 np0005542249 systemd[1]: chronyd.service: Deactivated successfully.
Dec  2 05:45:40 np0005542249 systemd[1]: Stopped NTP client/server.
Dec  2 05:45:40 np0005542249 systemd[1]: Starting NTP client/server...
Dec  2 05:45:40 np0005542249 chronyd[58549]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  2 05:45:40 np0005542249 chronyd[58549]: Frequency -26.278 +/- 0.172 ppm read from /var/lib/chrony/drift
Dec  2 05:45:40 np0005542249 chronyd[58549]: Loaded seccomp filter (level 2)
Dec  2 05:45:40 np0005542249 systemd[1]: Started NTP client/server.
Dec  2 05:45:41 np0005542249 systemd[1]: session-12.scope: Deactivated successfully.
Dec  2 05:45:41 np0005542249 systemd[1]: session-12.scope: Consumed 29.891s CPU time.
Dec  2 05:45:41 np0005542249 systemd-logind[787]: Session 12 logged out. Waiting for processes to exit.
Dec  2 05:45:41 np0005542249 systemd-logind[787]: Removed session 12.
Dec  2 05:45:46 np0005542249 systemd-logind[787]: New session 13 of user zuul.
Dec  2 05:45:46 np0005542249 systemd[1]: Started Session 13 of User zuul.
Dec  2 05:45:47 np0005542249 python3.9[58730]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:45:47 np0005542249 python3.9[58882]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:45:48 np0005542249 python3.9[59005]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764672347.3067524-34-114785989392759/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:45:49 np0005542249 systemd-logind[787]: Session 13 logged out. Waiting for processes to exit.
Dec  2 05:45:49 np0005542249 systemd[1]: session-13.scope: Deactivated successfully.
Dec  2 05:45:49 np0005542249 systemd[1]: session-13.scope: Consumed 1.823s CPU time.
Dec  2 05:45:49 np0005542249 systemd-logind[787]: Removed session 13.
Dec  2 05:45:54 np0005542249 systemd-logind[787]: New session 14 of user zuul.
Dec  2 05:45:54 np0005542249 systemd[1]: Started Session 14 of User zuul.
Dec  2 05:45:55 np0005542249 python3.9[59183]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:45:56 np0005542249 python3.9[59339]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:45:57 np0005542249 python3.9[59514]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:45:58 np0005542249 python3.9[59637]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764672357.0951967-41-917183293587/.source.json _original_basename=.rviquv2i follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:45:59 np0005542249 python3.9[59789]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:46:00 np0005542249 python3.9[59912]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764672359.0237918-64-43604946491201/.source _original_basename=.bfgc481x follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:46:01 np0005542249 python3.9[60064]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:46:01 np0005542249 python3.9[60216]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:46:02 np0005542249 python3.9[60339]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764672361.2974694-88-188213987190088/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:46:03 np0005542249 python3.9[60491]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:46:03 np0005542249 python3.9[60614]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764672362.6021857-88-126729894649674/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:46:04 np0005542249 python3.9[60766]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:46:05 np0005542249 python3.9[60918]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:46:06 np0005542249 python3.9[61041]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764672364.7056372-125-163255844587817/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:46:06 np0005542249 python3.9[61193]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:46:07 np0005542249 python3.9[61316]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764672366.2324076-140-135520438435616/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:46:08 np0005542249 python3.9[61468]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 05:46:08 np0005542249 systemd[1]: Reloading.
Dec  2 05:46:08 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:46:08 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:46:08 np0005542249 systemd[1]: Reloading.
Dec  2 05:46:08 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:46:08 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:46:09 np0005542249 systemd[1]: Starting EDPM Container Shutdown...
Dec  2 05:46:09 np0005542249 systemd[1]: Finished EDPM Container Shutdown.
Dec  2 05:46:09 np0005542249 python3.9[61695]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:46:10 np0005542249 python3.9[61818]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764672369.2269747-163-208981244627431/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:46:11 np0005542249 python3.9[61970]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:46:11 np0005542249 python3.9[62093]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764672370.709922-178-248565505703922/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:46:12 np0005542249 python3.9[62245]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 05:46:12 np0005542249 systemd[1]: Reloading.
Dec  2 05:46:12 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:46:12 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:46:13 np0005542249 systemd[1]: Reloading.
Dec  2 05:46:13 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:46:13 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:46:13 np0005542249 systemd[1]: Starting Create netns directory...
Dec  2 05:46:13 np0005542249 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  2 05:46:13 np0005542249 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  2 05:46:13 np0005542249 systemd[1]: Finished Create netns directory.
Dec  2 05:46:14 np0005542249 python3.9[62471]: ansible-ansible.builtin.service_facts Invoked
Dec  2 05:46:14 np0005542249 network[62488]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  2 05:46:14 np0005542249 network[62489]: 'network-scripts' will be removed from distribution in near future.
Dec  2 05:46:14 np0005542249 network[62490]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  2 05:46:18 np0005542249 python3.9[62752]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 05:46:18 np0005542249 systemd[1]: Reloading.
Dec  2 05:46:18 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:46:18 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:46:18 np0005542249 systemd[1]: Stopping IPv4 firewall with iptables...
Dec  2 05:46:19 np0005542249 iptables.init[62792]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec  2 05:46:19 np0005542249 iptables.init[62792]: iptables: Flushing firewall rules: [  OK  ]
Dec  2 05:46:19 np0005542249 systemd[1]: iptables.service: Deactivated successfully.
Dec  2 05:46:19 np0005542249 systemd[1]: Stopped IPv4 firewall with iptables.
Dec  2 05:46:20 np0005542249 python3.9[62988]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 05:46:22 np0005542249 python3.9[63142]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 05:46:22 np0005542249 systemd[1]: Reloading.
Dec  2 05:46:22 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:46:22 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:46:22 np0005542249 systemd[1]: Starting Netfilter Tables...
Dec  2 05:46:22 np0005542249 systemd[1]: Finished Netfilter Tables.
Dec  2 05:46:23 np0005542249 python3.9[63334]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:46:24 np0005542249 python3.9[63487]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:46:24 np0005542249 python3.9[63612]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764672383.8071196-247-207370890164104/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:46:26 np0005542249 python3.9[63765]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 05:46:26 np0005542249 systemd[1]: Reloading OpenSSH server daemon...
Dec  2 05:46:26 np0005542249 systemd[1]: Reloaded OpenSSH server daemon.
Dec  2 05:46:26 np0005542249 python3.9[63921]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:46:27 np0005542249 python3.9[64073]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:46:28 np0005542249 python3.9[64196]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764672387.1075985-278-37579400869458/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:46:29 np0005542249 python3.9[64348]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  2 05:46:29 np0005542249 systemd[1]: Starting Time & Date Service...
Dec  2 05:46:29 np0005542249 systemd[1]: Started Time & Date Service.
Dec  2 05:46:30 np0005542249 python3.9[64504]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:46:30 np0005542249 python3.9[64656]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:46:31 np0005542249 python3.9[64779]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764672390.3140557-313-41478903841244/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:46:32 np0005542249 python3.9[64931]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:46:33 np0005542249 python3.9[65054]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764672391.6679487-328-214449486096446/.source.yaml _original_basename=.qrntw78d follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:46:33 np0005542249 python3.9[65206]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:46:34 np0005542249 python3.9[65329]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764672393.3214707-343-92568338688703/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:46:35 np0005542249 python3.9[65481]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:46:36 np0005542249 python3.9[65634]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:46:37 np0005542249 python3[65787]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  2 05:46:37 np0005542249 python3.9[65939]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:46:38 np0005542249 python3.9[66062]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764672397.2629693-382-34239715979952/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:46:39 np0005542249 python3.9[66214]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:46:39 np0005542249 python3.9[66337]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764672398.7025583-397-82324834377326/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:46:40 np0005542249 python3.9[66489]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:46:41 np0005542249 python3.9[66612]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764672400.0928967-412-160179141936732/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:46:42 np0005542249 python3.9[66764]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:46:42 np0005542249 python3.9[66887]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764672401.6611178-427-185023042789917/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:46:43 np0005542249 python3.9[67039]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:46:44 np0005542249 python3.9[67162]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764672403.086611-442-24539960214612/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:46:45 np0005542249 python3.9[67314]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:46:45 np0005542249 python3.9[67466]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:46:46 np0005542249 python3.9[67625]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:46:47 np0005542249 python3.9[67778]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:46:48 np0005542249 python3.9[67930]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:46:49 np0005542249 python3.9[68082]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  2 05:46:50 np0005542249 python3.9[68235]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  2 05:46:50 np0005542249 systemd-logind[787]: Session 14 logged out. Waiting for processes to exit.
Dec  2 05:46:50 np0005542249 systemd[1]: session-14.scope: Deactivated successfully.
Dec  2 05:46:50 np0005542249 systemd[1]: session-14.scope: Consumed 42.209s CPU time.
Dec  2 05:46:50 np0005542249 systemd-logind[787]: Removed session 14.
Dec  2 05:46:56 np0005542249 systemd-logind[787]: New session 15 of user zuul.
Dec  2 05:46:56 np0005542249 systemd[1]: Started Session 15 of User zuul.
Dec  2 05:46:57 np0005542249 python3.9[68416]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  2 05:46:58 np0005542249 python3.9[68568]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 05:46:59 np0005542249 python3.9[68720]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:46:59 np0005542249 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  2 05:47:00 np0005542249 python3.9[68874]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDO5YUsR3fxYV3/TLEvn2kvFiCsy0ibUl13I6MuVQRBRgKNtM/tOYqhU31vVY2EcGpB8b5ao4DERWVEInN01roi+g/wpHWb+f0/6nAbkJWbbDJ1clCd8jPymGAPak/cDMU0ovZHrQIfOCX/49oaIuAKDUkTe54rO4FW+BGD4GHqYEQADga4n/O4EGAcD1anPVb+GuuXOGssT1joWGD9Evx8h0280Y8+/hyYVrwZPzAvk1G6/Y70ZnR/Zy/KVzOXbxyD6wHRPlAhho2bW+ygitvUaism/b+gzWlPoZAgR98v436doBlCIx3m+NKfPw+RcDdiyyyQDJQkE+fK/qBvLrDl+MJ7ORZqlOaQdIvZoX5LFH6mblXEHXtugVcGoThKFYVq4pEbKBDdGsguFNBxcmAhAfAoUxaDOel7ejpg/UNFGuxCwjD7y55H9fps5JLnWBLfTRz2L9nXhcbqBGwseTrdOOUZs9eD0tBEScux1PVMxojtddX8/T2YE6UEV8IfpfE=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICoMMbun3HGPT9l61gMTTQXqOB2JMpSQRQyFehEDSUF2#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFR3Tt/mM6Z+ZOHrRq1Cxrr5ZNsDxS+Oz4ZFS7R6FnxIu2gcOh19i4U5/YwvyNQQ11yS8zrPmp7cstiLzUkJoxs=#012 create=True mode=0644 path=/tmp/ansible.50ggprz2 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:47:01 np0005542249 python3.9[69026]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.50ggprz2' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:47:02 np0005542249 python3.9[69180]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.50ggprz2 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:47:02 np0005542249 systemd[1]: session-15.scope: Deactivated successfully.
Dec  2 05:47:02 np0005542249 systemd[1]: session-15.scope: Consumed 3.986s CPU time.
Dec  2 05:47:02 np0005542249 systemd-logind[787]: Session 15 logged out. Waiting for processes to exit.
Dec  2 05:47:02 np0005542249 systemd-logind[787]: Removed session 15.
Dec  2 05:47:07 np0005542249 systemd-logind[787]: New session 16 of user zuul.
Dec  2 05:47:07 np0005542249 systemd[1]: Started Session 16 of User zuul.
Dec  2 05:47:09 np0005542249 python3.9[69358]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:47:10 np0005542249 python3.9[69514]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  2 05:47:11 np0005542249 python3.9[69668]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 05:47:12 np0005542249 python3.9[69821]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:47:13 np0005542249 python3.9[69974]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 05:47:14 np0005542249 python3.9[70128]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:47:15 np0005542249 python3.9[70283]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:47:15 np0005542249 systemd[1]: session-16.scope: Deactivated successfully.
Dec  2 05:47:15 np0005542249 systemd[1]: session-16.scope: Consumed 5.595s CPU time.
Dec  2 05:47:15 np0005542249 systemd-logind[787]: Session 16 logged out. Waiting for processes to exit.
Dec  2 05:47:15 np0005542249 systemd-logind[787]: Removed session 16.
Dec  2 05:47:22 np0005542249 systemd-logind[787]: New session 17 of user zuul.
Dec  2 05:47:22 np0005542249 systemd[1]: Started Session 17 of User zuul.
Dec  2 05:47:23 np0005542249 python3.9[70462]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:47:25 np0005542249 python3.9[70618]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 05:47:26 np0005542249 python3.9[70702]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  2 05:47:28 np0005542249 python3.9[70853]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:47:29 np0005542249 python3.9[71004]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  2 05:47:30 np0005542249 python3.9[71154]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 05:47:30 np0005542249 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 05:47:30 np0005542249 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 05:47:31 np0005542249 python3.9[71305]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 05:47:31 np0005542249 systemd[1]: session-17.scope: Deactivated successfully.
Dec  2 05:47:31 np0005542249 systemd[1]: session-17.scope: Consumed 6.612s CPU time.
Dec  2 05:47:31 np0005542249 systemd-logind[787]: Session 17 logged out. Waiting for processes to exit.
Dec  2 05:47:31 np0005542249 systemd-logind[787]: Removed session 17.
Dec  2 05:47:39 np0005542249 systemd-logind[787]: New session 18 of user zuul.
Dec  2 05:47:39 np0005542249 systemd[1]: Started Session 18 of User zuul.
Dec  2 05:47:46 np0005542249 python3[72072]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:47:47 np0005542249 python3[72167]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  2 05:47:49 np0005542249 python3[72194]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  2 05:47:49 np0005542249 python3[72220]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:47:49 np0005542249 kernel: loop: module loaded
Dec  2 05:47:49 np0005542249 kernel: loop3: detected capacity change from 0 to 41943040
Dec  2 05:47:50 np0005542249 python3[72255]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:47:50 np0005542249 lvm[72258]: PV /dev/loop3 not used.
Dec  2 05:47:50 np0005542249 lvm[72260]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  2 05:47:50 np0005542249 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec  2 05:47:50 np0005542249 lvm[72266]:  1 logical volume(s) in volume group "ceph_vg0" now active
Dec  2 05:47:50 np0005542249 lvm[72270]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  2 05:47:50 np0005542249 lvm[72270]: VG ceph_vg0 finished
Dec  2 05:47:50 np0005542249 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec  2 05:47:51 np0005542249 python3[72348]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:47:51 np0005542249 chronyd[58549]: Selected source 167.160.187.12 (pool.ntp.org)
Dec  2 05:47:51 np0005542249 python3[72421]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764672470.756086-36286-204552669001302/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:47:52 np0005542249 python3[72471]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 05:47:52 np0005542249 systemd[1]: Reloading.
Dec  2 05:47:52 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:47:52 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:47:52 np0005542249 systemd[1]: Starting Ceph OSD losetup...
Dec  2 05:47:52 np0005542249 bash[72512]: /dev/loop3: [64513]:4194940 (/var/lib/ceph-osd-0.img)
Dec  2 05:47:52 np0005542249 systemd[1]: Finished Ceph OSD losetup.
Dec  2 05:47:52 np0005542249 lvm[72513]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  2 05:47:52 np0005542249 lvm[72513]: VG ceph_vg0 finished
Dec  2 05:47:53 np0005542249 python3[72539]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  2 05:47:54 np0005542249 python3[72566]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  2 05:47:55 np0005542249 python3[72592]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:47:55 np0005542249 kernel: loop4: detected capacity change from 0 to 41943040
Dec  2 05:47:55 np0005542249 python3[72624]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:47:55 np0005542249 lvm[72627]: PV /dev/loop4 not used.
Dec  2 05:47:55 np0005542249 lvm[72629]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  2 05:47:55 np0005542249 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Dec  2 05:47:55 np0005542249 lvm[72637]:  1 logical volume(s) in volume group "ceph_vg1" now active
Dec  2 05:47:55 np0005542249 lvm[72640]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  2 05:47:55 np0005542249 lvm[72640]: VG ceph_vg1 finished
Dec  2 05:47:55 np0005542249 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Dec  2 05:47:56 np0005542249 python3[72718]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:47:56 np0005542249 python3[72791]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764672475.9684372-36313-265226400986016/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:47:57 np0005542249 python3[72841]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 05:47:57 np0005542249 systemd[1]: Reloading.
Dec  2 05:47:57 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:47:57 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:47:57 np0005542249 systemd[1]: Starting Ceph OSD losetup...
Dec  2 05:47:57 np0005542249 bash[72882]: /dev/loop4: [64513]:4327923 (/var/lib/ceph-osd-1.img)
Dec  2 05:47:57 np0005542249 systemd[1]: Finished Ceph OSD losetup.
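The `bash[72882]` line just above is `losetup` list output: the loop device, then `[backing-device]:inode`, then the backing file in parentheses. A small parser for that shape — the format assumption is mine, and the sample line is taken verbatim from the entry above:

```python
import re

# losetup list output: "<loopdev>: [<backing dev>]:<inode> (<backing file>)"
LOSETUP_RE = re.compile(
    r"^(?P<dev>\S+): \[(?P<backing>\d+)\]:(?P<inode>\d+) \((?P<file>.+)\)$"
)

line = "/dev/loop4: [64513]:4327923 (/var/lib/ceph-osd-1.img)"
info = LOSETUP_RE.match(line).groupdict()
print(info["dev"], "->", info["file"])
```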
Dec  2 05:47:57 np0005542249 lvm[72883]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  2 05:47:57 np0005542249 lvm[72883]: VG ceph_vg1 finished
Dec  2 05:47:57 np0005542249 python3[72909]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  2 05:47:59 np0005542249 python3[72936]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  2 05:47:59 np0005542249 python3[72962]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:47:59 np0005542249 kernel: loop5: detected capacity change from 0 to 41943040
Dec  2 05:48:00 np0005542249 python3[72994]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:48:00 np0005542249 lvm[72997]: PV /dev/loop5 not used.
Dec  2 05:48:00 np0005542249 lvm[72999]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  2 05:48:00 np0005542249 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Dec  2 05:48:00 np0005542249 lvm[73010]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  2 05:48:00 np0005542249 lvm[73010]: VG ceph_vg2 finished
Dec  2 05:48:00 np0005542249 lvm[73008]:  1 logical volume(s) in volume group "ceph_vg2" now active
Dec  2 05:48:00 np0005542249 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Dec  2 05:48:01 np0005542249 python3[73088]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:48:01 np0005542249 python3[73161]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764672480.6329303-36340-133103846982823/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:48:01 np0005542249 python3[73211]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 05:48:01 np0005542249 systemd[1]: Reloading.
Dec  2 05:48:02 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:48:02 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:48:02 np0005542249 systemd[1]: Starting Ceph OSD losetup...
Dec  2 05:48:02 np0005542249 bash[73252]: /dev/loop5: [64513]:4327932 (/var/lib/ceph-osd-2.img)
Dec  2 05:48:02 np0005542249 systemd[1]: Finished Ceph OSD losetup.
Dec  2 05:48:02 np0005542249 lvm[73253]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  2 05:48:02 np0005542249 lvm[73253]: VG ceph_vg2 finished
Dec  2 05:48:04 np0005542249 python3[73277]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:48:06 np0005542249 irqbalance[782]: Cannot change IRQ 33 affinity: Operation not permitted
Dec  2 05:48:06 np0005542249 irqbalance[782]: IRQ 33 affinity is now unmanaged
Dec  2 05:48:06 np0005542249 python3[73370]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  2 05:48:08 np0005542249 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  2 05:48:08 np0005542249 systemd[1]: Starting man-db-cache-update.service...
Dec  2 05:48:09 np0005542249 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  2 05:48:09 np0005542249 systemd[1]: Finished man-db-cache-update.service.
Dec  2 05:48:09 np0005542249 systemd[1]: run-r7b2434bf51644ffdaca4246d8a47ab48.service: Deactivated successfully.
Dec  2 05:48:09 np0005542249 python3[73481]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  2 05:48:09 np0005542249 python3[73509]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:48:10 np0005542249 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 05:48:10 np0005542249 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 05:48:10 np0005542249 python3[73574]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:48:11 np0005542249 python3[73600]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:48:11 np0005542249 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 05:48:11 np0005542249 python3[73678]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:48:12 np0005542249 python3[73751]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764672491.4465866-36487-74605613907567/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:48:12 np0005542249 python3[73853]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:48:13 np0005542249 python3[73926]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764672492.6797757-36505-155973733460396/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:48:13 np0005542249 python3[73976]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  2 05:48:14 np0005542249 python3[74004]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  2 05:48:14 np0005542249 python3[74032]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  2 05:48:14 np0005542249 python3[74060]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
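Note the stray `\--` sequences before `--single-host-defaults` and `--skip-monitoring-stack` in the bootstrap command above: these look like shell line-continuation backslashes from the playbook template that were collapsed onto one line. A sketch of recovering the flag list cephadm actually saw (the token subset below is excerpted from the entry above; the parsing helper itself is mine, not part of cephadm):

```python
# Strip collapsed line-continuation backslashes, then pair each "--flag"
# with its value (or True for bare switches).
raw = (
    r"--fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d "
    r"\--single-host-defaults \--skip-monitoring-stack "
    r"--mon-ip 192.168.122.100"
)

tokens = [t.lstrip("\\") for t in raw.split()]
flags = {}
i = 0
while i < len(tokens):
    if i + 1 < len(tokens) and not tokens[i + 1].startswith("--"):
        flags[tokens[i]] = tokens[i + 1]
        i += 2
    else:
        flags[tokens[i]] = True
        i += 1
print(flags)
```

cephadm tolerates the collapsed backslashes here because the shell consumes them before argument parsing, which is why the bootstrap below still proceeds.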
Dec  2 05:48:14 np0005542249 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 05:48:15 np0005542249 systemd[1]: Created slice User Slice of UID 42477.
Dec  2 05:48:15 np0005542249 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  2 05:48:15 np0005542249 systemd-logind[787]: New session 19 of user ceph-admin.
Dec  2 05:48:15 np0005542249 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  2 05:48:15 np0005542249 systemd[1]: Starting User Manager for UID 42477...
Dec  2 05:48:15 np0005542249 systemd[74080]: Queued start job for default target Main User Target.
Dec  2 05:48:15 np0005542249 systemd[74080]: Created slice User Application Slice.
Dec  2 05:48:15 np0005542249 systemd[74080]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  2 05:48:15 np0005542249 systemd[74080]: Started Daily Cleanup of User's Temporary Directories.
Dec  2 05:48:15 np0005542249 systemd[74080]: Reached target Paths.
Dec  2 05:48:15 np0005542249 systemd[74080]: Reached target Timers.
Dec  2 05:48:15 np0005542249 systemd[74080]: Starting D-Bus User Message Bus Socket...
Dec  2 05:48:15 np0005542249 systemd[74080]: Starting Create User's Volatile Files and Directories...
Dec  2 05:48:15 np0005542249 systemd[74080]: Finished Create User's Volatile Files and Directories.
Dec  2 05:48:15 np0005542249 systemd[74080]: Listening on D-Bus User Message Bus Socket.
Dec  2 05:48:15 np0005542249 systemd[74080]: Reached target Sockets.
Dec  2 05:48:15 np0005542249 systemd[74080]: Reached target Basic System.
Dec  2 05:48:15 np0005542249 systemd[74080]: Reached target Main User Target.
Dec  2 05:48:15 np0005542249 systemd[74080]: Startup finished in 154ms.
Dec  2 05:48:15 np0005542249 systemd[1]: Started User Manager for UID 42477.
Dec  2 05:48:15 np0005542249 systemd[1]: Started Session 19 of User ceph-admin.
Dec  2 05:48:15 np0005542249 systemd[1]: session-19.scope: Deactivated successfully.
Dec  2 05:48:15 np0005542249 systemd-logind[787]: Session 19 logged out. Waiting for processes to exit.
Dec  2 05:48:15 np0005542249 systemd-logind[787]: Removed session 19.
Dec  2 05:48:18 np0005542249 systemd[1]: var-lib-containers-storage-overlay-compat3153260424-lower\x2dmapped.mount: Deactivated successfully.
Dec  2 05:48:25 np0005542249 systemd[1]: Stopping User Manager for UID 42477...
Dec  2 05:48:25 np0005542249 systemd[74080]: Activating special unit Exit the Session...
Dec  2 05:48:25 np0005542249 systemd[74080]: Stopped target Main User Target.
Dec  2 05:48:25 np0005542249 systemd[74080]: Stopped target Basic System.
Dec  2 05:48:25 np0005542249 systemd[74080]: Stopped target Paths.
Dec  2 05:48:25 np0005542249 systemd[74080]: Stopped target Sockets.
Dec  2 05:48:25 np0005542249 systemd[74080]: Stopped target Timers.
Dec  2 05:48:25 np0005542249 systemd[74080]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec  2 05:48:25 np0005542249 systemd[74080]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  2 05:48:25 np0005542249 systemd[74080]: Closed D-Bus User Message Bus Socket.
Dec  2 05:48:25 np0005542249 systemd[74080]: Stopped Create User's Volatile Files and Directories.
Dec  2 05:48:25 np0005542249 systemd[74080]: Removed slice User Application Slice.
Dec  2 05:48:25 np0005542249 systemd[74080]: Reached target Shutdown.
Dec  2 05:48:25 np0005542249 systemd[74080]: Finished Exit the Session.
Dec  2 05:48:25 np0005542249 systemd[74080]: Reached target Exit the Session.
Dec  2 05:48:25 np0005542249 systemd[1]: user@42477.service: Deactivated successfully.
Dec  2 05:48:25 np0005542249 systemd[1]: Stopped User Manager for UID 42477.
Dec  2 05:48:25 np0005542249 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec  2 05:48:25 np0005542249 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec  2 05:48:25 np0005542249 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec  2 05:48:25 np0005542249 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec  2 05:48:25 np0005542249 systemd[1]: Removed slice User Slice of UID 42477.
Dec  2 05:48:31 np0005542249 podman[74133]: 2025-12-02 10:48:31.527846247 +0000 UTC m=+15.915996067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:48:31 np0005542249 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 05:48:31 np0005542249 podman[74193]: 2025-12-02 10:48:31.599737448 +0000 UTC m=+0.044786770 container create 795f729da4aeaed53bff8831dc5ea9c7812d420a495e2578796c1df9feb4f1c9 (image=quay.io/ceph/ceph:v18, name=epic_spence, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  2 05:48:31 np0005542249 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec  2 05:48:31 np0005542249 systemd[1]: Started libpod-conmon-795f729da4aeaed53bff8831dc5ea9c7812d420a495e2578796c1df9feb4f1c9.scope.
Dec  2 05:48:31 np0005542249 podman[74193]: 2025-12-02 10:48:31.578907435 +0000 UTC m=+0.023956807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:48:31 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:48:31 np0005542249 podman[74193]: 2025-12-02 10:48:31.714532626 +0000 UTC m=+0.159581968 container init 795f729da4aeaed53bff8831dc5ea9c7812d420a495e2578796c1df9feb4f1c9 (image=quay.io/ceph/ceph:v18, name=epic_spence, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:48:31 np0005542249 podman[74193]: 2025-12-02 10:48:31.722716556 +0000 UTC m=+0.167765888 container start 795f729da4aeaed53bff8831dc5ea9c7812d420a495e2578796c1df9feb4f1c9 (image=quay.io/ceph/ceph:v18, name=epic_spence, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:48:31 np0005542249 podman[74193]: 2025-12-02 10:48:31.726636572 +0000 UTC m=+0.171685914 container attach 795f729da4aeaed53bff8831dc5ea9c7812d420a495e2578796c1df9feb4f1c9 (image=quay.io/ceph/ceph:v18, name=epic_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:48:32 np0005542249 epic_spence[74209]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Dec  2 05:48:32 np0005542249 systemd[1]: libpod-795f729da4aeaed53bff8831dc5ea9c7812d420a495e2578796c1df9feb4f1c9.scope: Deactivated successfully.
Dec  2 05:48:32 np0005542249 podman[74214]: 2025-12-02 10:48:32.06663951 +0000 UTC m=+0.029266622 container died 795f729da4aeaed53bff8831dc5ea9c7812d420a495e2578796c1df9feb4f1c9 (image=quay.io/ceph/ceph:v18, name=epic_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Dec  2 05:48:32 np0005542249 systemd[1]: var-lib-containers-storage-overlay-8a2d51564e803fa9ebd71c9da8ac0e7805c6c689aaa86805affb627ddcc86ea0-merged.mount: Deactivated successfully.
Dec  2 05:48:32 np0005542249 podman[74214]: 2025-12-02 10:48:32.118424217 +0000 UTC m=+0.081051289 container remove 795f729da4aeaed53bff8831dc5ea9c7812d420a495e2578796c1df9feb4f1c9 (image=quay.io/ceph/ceph:v18, name=epic_spence, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:48:32 np0005542249 systemd[1]: libpod-conmon-795f729da4aeaed53bff8831dc5ea9c7812d420a495e2578796c1df9feb4f1c9.scope: Deactivated successfully.
Dec  2 05:48:32 np0005542249 podman[74229]: 2025-12-02 10:48:32.215700023 +0000 UTC m=+0.061148011 container create dd86a06a5725c317e44d092926514b021e4cb611df267236c851c695b13887c9 (image=quay.io/ceph/ceph:v18, name=thirsty_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:48:32 np0005542249 systemd[1]: Started libpod-conmon-dd86a06a5725c317e44d092926514b021e4cb611df267236c851c695b13887c9.scope.
Dec  2 05:48:32 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:48:32 np0005542249 podman[74229]: 2025-12-02 10:48:32.19185441 +0000 UTC m=+0.037302388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:48:32 np0005542249 podman[74229]: 2025-12-02 10:48:32.291241011 +0000 UTC m=+0.136688969 container init dd86a06a5725c317e44d092926514b021e4cb611df267236c851c695b13887c9 (image=quay.io/ceph/ceph:v18, name=thirsty_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  2 05:48:32 np0005542249 podman[74229]: 2025-12-02 10:48:32.297584573 +0000 UTC m=+0.143032521 container start dd86a06a5725c317e44d092926514b021e4cb611df267236c851c695b13887c9 (image=quay.io/ceph/ceph:v18, name=thirsty_gagarin, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  2 05:48:32 np0005542249 podman[74229]: 2025-12-02 10:48:32.301286253 +0000 UTC m=+0.146734191 container attach dd86a06a5725c317e44d092926514b021e4cb611df267236c851c695b13887c9 (image=quay.io/ceph/ceph:v18, name=thirsty_gagarin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:48:32 np0005542249 thirsty_gagarin[74246]: 167 167
Dec  2 05:48:32 np0005542249 systemd[1]: libpod-dd86a06a5725c317e44d092926514b021e4cb611df267236c851c695b13887c9.scope: Deactivated successfully.
Dec  2 05:48:32 np0005542249 podman[74229]: 2025-12-02 10:48:32.303481062 +0000 UTC m=+0.148929110 container died dd86a06a5725c317e44d092926514b021e4cb611df267236c851c695b13887c9 (image=quay.io/ceph/ceph:v18, name=thirsty_gagarin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Dec  2 05:48:32 np0005542249 podman[74229]: 2025-12-02 10:48:32.34416121 +0000 UTC m=+0.189609158 container remove dd86a06a5725c317e44d092926514b021e4cb611df267236c851c695b13887c9 (image=quay.io/ceph/ceph:v18, name=thirsty_gagarin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  2 05:48:32 np0005542249 systemd[1]: libpod-conmon-dd86a06a5725c317e44d092926514b021e4cb611df267236c851c695b13887c9.scope: Deactivated successfully.
Dec  2 05:48:32 np0005542249 podman[74263]: 2025-12-02 10:48:32.430778158 +0000 UTC m=+0.060994207 container create 0b0e34d65c279512a850a70d2a786e72596ec9a976a2dcd22033a0738c1d7b17 (image=quay.io/ceph/ceph:v18, name=strange_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:48:32 np0005542249 systemd[1]: Started libpod-conmon-0b0e34d65c279512a850a70d2a786e72596ec9a976a2dcd22033a0738c1d7b17.scope.
Dec  2 05:48:32 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:48:32 np0005542249 podman[74263]: 2025-12-02 10:48:32.407553572 +0000 UTC m=+0.037769661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:48:32 np0005542249 podman[74263]: 2025-12-02 10:48:32.512526545 +0000 UTC m=+0.142742674 container init 0b0e34d65c279512a850a70d2a786e72596ec9a976a2dcd22033a0738c1d7b17 (image=quay.io/ceph/ceph:v18, name=strange_brattain, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:48:32 np0005542249 podman[74263]: 2025-12-02 10:48:32.521547408 +0000 UTC m=+0.151763497 container start 0b0e34d65c279512a850a70d2a786e72596ec9a976a2dcd22033a0738c1d7b17 (image=quay.io/ceph/ceph:v18, name=strange_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  2 05:48:32 np0005542249 podman[74263]: 2025-12-02 10:48:32.525726511 +0000 UTC m=+0.155942590 container attach 0b0e34d65c279512a850a70d2a786e72596ec9a976a2dcd22033a0738c1d7b17 (image=quay.io/ceph/ceph:v18, name=strange_brattain, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  2 05:48:32 np0005542249 strange_brattain[74279]: AQAAxC5pBr8/IRAADuKh4ifq2vScpCOS+ccqIg==
Dec  2 05:48:32 np0005542249 systemd[1]: libpod-0b0e34d65c279512a850a70d2a786e72596ec9a976a2dcd22033a0738c1d7b17.scope: Deactivated successfully.
Dec  2 05:48:32 np0005542249 podman[74263]: 2025-12-02 10:48:32.563985293 +0000 UTC m=+0.194201412 container died 0b0e34d65c279512a850a70d2a786e72596ec9a976a2dcd22033a0738c1d7b17 (image=quay.io/ceph/ceph:v18, name=strange_brattain, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  2 05:48:32 np0005542249 systemd[1]: var-lib-containers-storage-overlay-eeeab9097dd5f507ecbdae799f4713d37317222a41a1a50cbfad7de201d30e77-merged.mount: Deactivated successfully.
Dec  2 05:48:32 np0005542249 podman[74263]: 2025-12-02 10:48:32.608235588 +0000 UTC m=+0.238451657 container remove 0b0e34d65c279512a850a70d2a786e72596ec9a976a2dcd22033a0738c1d7b17 (image=quay.io/ceph/ceph:v18, name=strange_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:48:32 np0005542249 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 05:48:32 np0005542249 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 05:48:32 np0005542249 systemd[1]: libpod-conmon-0b0e34d65c279512a850a70d2a786e72596ec9a976a2dcd22033a0738c1d7b17.scope: Deactivated successfully.
Dec  2 05:48:32 np0005542249 podman[74298]: 2025-12-02 10:48:32.700059016 +0000 UTC m=+0.059660341 container create a949c6dd1fa9f156afa3c28280e00a7726b58089fc51bc1582b11948de3569a9 (image=quay.io/ceph/ceph:v18, name=busy_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  2 05:48:32 np0005542249 systemd[1]: Started libpod-conmon-a949c6dd1fa9f156afa3c28280e00a7726b58089fc51bc1582b11948de3569a9.scope.
Dec  2 05:48:32 np0005542249 podman[74298]: 2025-12-02 10:48:32.674418374 +0000 UTC m=+0.034019719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:48:32 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:48:32 np0005542249 podman[74298]: 2025-12-02 10:48:32.787328492 +0000 UTC m=+0.146929837 container init a949c6dd1fa9f156afa3c28280e00a7726b58089fc51bc1582b11948de3569a9 (image=quay.io/ceph/ceph:v18, name=busy_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  2 05:48:32 np0005542249 podman[74298]: 2025-12-02 10:48:32.796801668 +0000 UTC m=+0.156402973 container start a949c6dd1fa9f156afa3c28280e00a7726b58089fc51bc1582b11948de3569a9 (image=quay.io/ceph/ceph:v18, name=busy_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  2 05:48:32 np0005542249 podman[74298]: 2025-12-02 10:48:32.800628121 +0000 UTC m=+0.160229456 container attach a949c6dd1fa9f156afa3c28280e00a7726b58089fc51bc1582b11948de3569a9 (image=quay.io/ceph/ceph:v18, name=busy_lichterman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:48:32 np0005542249 busy_lichterman[74314]: AQAAxC5pfVejMRAAesVOgkoMlKICUSxIwDaHkg==
Dec  2 05:48:32 np0005542249 systemd[1]: libpod-a949c6dd1fa9f156afa3c28280e00a7726b58089fc51bc1582b11948de3569a9.scope: Deactivated successfully.
Dec  2 05:48:32 np0005542249 podman[74298]: 2025-12-02 10:48:32.836803187 +0000 UTC m=+0.196404492 container died a949c6dd1fa9f156afa3c28280e00a7726b58089fc51bc1582b11948de3569a9 (image=quay.io/ceph/ceph:v18, name=busy_lichterman, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  2 05:48:32 np0005542249 podman[74298]: 2025-12-02 10:48:32.879670185 +0000 UTC m=+0.239271490 container remove a949c6dd1fa9f156afa3c28280e00a7726b58089fc51bc1582b11948de3569a9 (image=quay.io/ceph/ceph:v18, name=busy_lichterman, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  2 05:48:32 np0005542249 systemd[1]: libpod-conmon-a949c6dd1fa9f156afa3c28280e00a7726b58089fc51bc1582b11948de3569a9.scope: Deactivated successfully.
Dec  2 05:48:32 np0005542249 podman[74335]: 2025-12-02 10:48:32.955263415 +0000 UTC m=+0.057069952 container create 29be4ca74346d8f835418c5dcb97887ea899d29b7b33b657e3314e2411702d6c (image=quay.io/ceph/ceph:v18, name=confident_pasteur, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  2 05:48:33 np0005542249 systemd[1]: Started libpod-conmon-29be4ca74346d8f835418c5dcb97887ea899d29b7b33b657e3314e2411702d6c.scope.
Dec  2 05:48:33 np0005542249 podman[74335]: 2025-12-02 10:48:32.926490148 +0000 UTC m=+0.028296715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:48:33 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:48:33 np0005542249 podman[74335]: 2025-12-02 10:48:33.034143574 +0000 UTC m=+0.135950121 container init 29be4ca74346d8f835418c5dcb97887ea899d29b7b33b657e3314e2411702d6c (image=quay.io/ceph/ceph:v18, name=confident_pasteur, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:48:33 np0005542249 podman[74335]: 2025-12-02 10:48:33.043682491 +0000 UTC m=+0.145489028 container start 29be4ca74346d8f835418c5dcb97887ea899d29b7b33b657e3314e2411702d6c (image=quay.io/ceph/ceph:v18, name=confident_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  2 05:48:33 np0005542249 podman[74335]: 2025-12-02 10:48:33.047869224 +0000 UTC m=+0.149675771 container attach 29be4ca74346d8f835418c5dcb97887ea899d29b7b33b657e3314e2411702d6c (image=quay.io/ceph/ceph:v18, name=confident_pasteur, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  2 05:48:33 np0005542249 confident_pasteur[74351]: AQABxC5p5ojkBBAA+f1v+mtMrxOSgQVKgNqhkw==
Dec  2 05:48:33 np0005542249 systemd[1]: libpod-29be4ca74346d8f835418c5dcb97887ea899d29b7b33b657e3314e2411702d6c.scope: Deactivated successfully.
Dec  2 05:48:33 np0005542249 podman[74335]: 2025-12-02 10:48:33.088392198 +0000 UTC m=+0.190198695 container died 29be4ca74346d8f835418c5dcb97887ea899d29b7b33b657e3314e2411702d6c (image=quay.io/ceph/ceph:v18, name=confident_pasteur, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:48:33 np0005542249 podman[74335]: 2025-12-02 10:48:33.126562578 +0000 UTC m=+0.228369075 container remove 29be4ca74346d8f835418c5dcb97887ea899d29b7b33b657e3314e2411702d6c (image=quay.io/ceph/ceph:v18, name=confident_pasteur, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:48:33 np0005542249 systemd[1]: libpod-conmon-29be4ca74346d8f835418c5dcb97887ea899d29b7b33b657e3314e2411702d6c.scope: Deactivated successfully.
Dec  2 05:48:33 np0005542249 podman[74368]: 2025-12-02 10:48:33.182355134 +0000 UTC m=+0.033754602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:48:33 np0005542249 podman[74368]: 2025-12-02 10:48:33.448844987 +0000 UTC m=+0.300244395 container create 565a40a7e831fb40f79f44ceaea3c45488084c2ef0b6b32cb34ca745b96649f5 (image=quay.io/ceph/ceph:v18, name=clever_williamson, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:48:33 np0005542249 systemd[1]: Started libpod-conmon-565a40a7e831fb40f79f44ceaea3c45488084c2ef0b6b32cb34ca745b96649f5.scope.
Dec  2 05:48:33 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:48:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03e49c2ef9bcb4b170d23d2ccce2e048e93ca8f4f51ea0e3c08f0d5d9068920e/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:33 np0005542249 podman[74368]: 2025-12-02 10:48:33.539423522 +0000 UTC m=+0.390822980 container init 565a40a7e831fb40f79f44ceaea3c45488084c2ef0b6b32cb34ca745b96649f5 (image=quay.io/ceph/ceph:v18, name=clever_williamson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:48:33 np0005542249 podman[74368]: 2025-12-02 10:48:33.545064464 +0000 UTC m=+0.396463842 container start 565a40a7e831fb40f79f44ceaea3c45488084c2ef0b6b32cb34ca745b96649f5 (image=quay.io/ceph/ceph:v18, name=clever_williamson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:48:33 np0005542249 podman[74368]: 2025-12-02 10:48:33.548917149 +0000 UTC m=+0.400316567 container attach 565a40a7e831fb40f79f44ceaea3c45488084c2ef0b6b32cb34ca745b96649f5 (image=quay.io/ceph/ceph:v18, name=clever_williamson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  2 05:48:33 np0005542249 clever_williamson[74388]: /usr/bin/monmaptool: monmap file /tmp/monmap
Dec  2 05:48:33 np0005542249 clever_williamson[74388]: setting min_mon_release = pacific
Dec  2 05:48:33 np0005542249 clever_williamson[74388]: /usr/bin/monmaptool: set fsid to 95bc4eaa-1a14-59bf-acf2-4b3da055547d
Dec  2 05:48:33 np0005542249 clever_williamson[74388]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Dec  2 05:48:33 np0005542249 systemd[1]: libpod-565a40a7e831fb40f79f44ceaea3c45488084c2ef0b6b32cb34ca745b96649f5.scope: Deactivated successfully.
Dec  2 05:48:33 np0005542249 podman[74368]: 2025-12-02 10:48:33.600284434 +0000 UTC m=+0.451683862 container died 565a40a7e831fb40f79f44ceaea3c45488084c2ef0b6b32cb34ca745b96649f5 (image=quay.io/ceph/ceph:v18, name=clever_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:48:33 np0005542249 systemd[1]: var-lib-containers-storage-overlay-03e49c2ef9bcb4b170d23d2ccce2e048e93ca8f4f51ea0e3c08f0d5d9068920e-merged.mount: Deactivated successfully.
Dec  2 05:48:33 np0005542249 podman[74368]: 2025-12-02 10:48:33.647411027 +0000 UTC m=+0.498810415 container remove 565a40a7e831fb40f79f44ceaea3c45488084c2ef0b6b32cb34ca745b96649f5 (image=quay.io/ceph/ceph:v18, name=clever_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:48:33 np0005542249 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 05:48:33 np0005542249 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 05:48:33 np0005542249 systemd[1]: libpod-conmon-565a40a7e831fb40f79f44ceaea3c45488084c2ef0b6b32cb34ca745b96649f5.scope: Deactivated successfully.
Dec  2 05:48:33 np0005542249 podman[74407]: 2025-12-02 10:48:33.717792686 +0000 UTC m=+0.045491468 container create 74dd3db7574307e0d452f515484cc513a760f6e23b15b009bdb0fade7736b363 (image=quay.io/ceph/ceph:v18, name=suspicious_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  2 05:48:33 np0005542249 systemd[1]: Started libpod-conmon-74dd3db7574307e0d452f515484cc513a760f6e23b15b009bdb0fade7736b363.scope.
Dec  2 05:48:33 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:48:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8deb46cd79cb0325c434f6d2b4feeceb27958c58174b6ae43ee825bd2a1b24cf/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8deb46cd79cb0325c434f6d2b4feeceb27958c58174b6ae43ee825bd2a1b24cf/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8deb46cd79cb0325c434f6d2b4feeceb27958c58174b6ae43ee825bd2a1b24cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8deb46cd79cb0325c434f6d2b4feeceb27958c58174b6ae43ee825bd2a1b24cf/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:33 np0005542249 podman[74407]: 2025-12-02 10:48:33.697473608 +0000 UTC m=+0.025172420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:48:33 np0005542249 podman[74407]: 2025-12-02 10:48:33.798395672 +0000 UTC m=+0.126094474 container init 74dd3db7574307e0d452f515484cc513a760f6e23b15b009bdb0fade7736b363 (image=quay.io/ceph/ceph:v18, name=suspicious_ishizaka, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  2 05:48:33 np0005542249 podman[74407]: 2025-12-02 10:48:33.803985133 +0000 UTC m=+0.131683905 container start 74dd3db7574307e0d452f515484cc513a760f6e23b15b009bdb0fade7736b363 (image=quay.io/ceph/ceph:v18, name=suspicious_ishizaka, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:48:33 np0005542249 podman[74407]: 2025-12-02 10:48:33.807241621 +0000 UTC m=+0.134940413 container attach 74dd3db7574307e0d452f515484cc513a760f6e23b15b009bdb0fade7736b363 (image=quay.io/ceph/ceph:v18, name=suspicious_ishizaka, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  2 05:48:33 np0005542249 systemd[1]: libpod-74dd3db7574307e0d452f515484cc513a760f6e23b15b009bdb0fade7736b363.scope: Deactivated successfully.
Dec  2 05:48:33 np0005542249 podman[74450]: 2025-12-02 10:48:33.928819512 +0000 UTC m=+0.027220615 container died 74dd3db7574307e0d452f515484cc513a760f6e23b15b009bdb0fade7736b363 (image=quay.io/ceph/ceph:v18, name=suspicious_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:48:33 np0005542249 podman[74450]: 2025-12-02 10:48:33.962679086 +0000 UTC m=+0.061080169 container remove 74dd3db7574307e0d452f515484cc513a760f6e23b15b009bdb0fade7736b363 (image=quay.io/ceph/ceph:v18, name=suspicious_ishizaka, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  2 05:48:33 np0005542249 systemd[1]: libpod-conmon-74dd3db7574307e0d452f515484cc513a760f6e23b15b009bdb0fade7736b363.scope: Deactivated successfully.
Dec  2 05:48:34 np0005542249 systemd[1]: Reloading.
Dec  2 05:48:34 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:48:34 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:48:34 np0005542249 systemd[1]: Reloading.
Dec  2 05:48:34 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:48:34 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:48:34 np0005542249 systemd[1]: Reached target All Ceph clusters and services.
Dec  2 05:48:34 np0005542249 systemd[1]: Reloading.
Dec  2 05:48:34 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:48:34 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:48:34 np0005542249 systemd[1]: Reached target Ceph cluster 95bc4eaa-1a14-59bf-acf2-4b3da055547d.
Dec  2 05:48:34 np0005542249 systemd[1]: Reloading.
Dec  2 05:48:35 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:48:35 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:48:35 np0005542249 systemd[1]: Reloading.
Dec  2 05:48:35 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:48:35 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:48:35 np0005542249 systemd[1]: Created slice Slice /system/ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d.
Dec  2 05:48:35 np0005542249 systemd[1]: Reached target System Time Set.
Dec  2 05:48:35 np0005542249 systemd[1]: Reached target System Time Synchronized.
Dec  2 05:48:35 np0005542249 systemd[1]: Starting Ceph mon.compute-0 for 95bc4eaa-1a14-59bf-acf2-4b3da055547d...
Dec  2 05:48:35 np0005542249 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 05:48:35 np0005542249 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 05:48:35 np0005542249 podman[74704]: 2025-12-02 10:48:35.780710038 +0000 UTC m=+0.045259573 container create 9109878f075b6a0322b21b1e1e7e5ccd50ae2c5a260a3c59132af31c259cd0fe (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:48:35 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16a23467ac52e0f8ea6a1135292d663776d625009fbb3bf60c4dbd2ed2987d4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:35 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16a23467ac52e0f8ea6a1135292d663776d625009fbb3bf60c4dbd2ed2987d4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:35 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16a23467ac52e0f8ea6a1135292d663776d625009fbb3bf60c4dbd2ed2987d4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:35 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16a23467ac52e0f8ea6a1135292d663776d625009fbb3bf60c4dbd2ed2987d4f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:35 np0005542249 podman[74704]: 2025-12-02 10:48:35.844060647 +0000 UTC m=+0.108610202 container init 9109878f075b6a0322b21b1e1e7e5ccd50ae2c5a260a3c59132af31c259cd0fe (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Dec  2 05:48:35 np0005542249 podman[74704]: 2025-12-02 10:48:35.856701669 +0000 UTC m=+0.121251204 container start 9109878f075b6a0322b21b1e1e7e5ccd50ae2c5a260a3c59132af31c259cd0fe (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:48:35 np0005542249 podman[74704]: 2025-12-02 10:48:35.762414404 +0000 UTC m=+0.026963959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:48:35 np0005542249 bash[74704]: 9109878f075b6a0322b21b1e1e7e5ccd50ae2c5a260a3c59132af31c259cd0fe
Dec  2 05:48:35 np0005542249 systemd[1]: Started Ceph mon.compute-0 for 95bc4eaa-1a14-59bf-acf2-4b3da055547d.
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: set uid:gid to 167:167 (ceph:ceph)
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: pidfile_write: ignore empty --pid-file
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: load: jerasure load: lrc 
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: RocksDB version: 7.9.2
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: Git sha 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: DB SUMMARY
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: DB Session ID:  5SC0VGU810A11GMCKGD6
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: CURRENT file:  CURRENT
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: IDENTITY file:  IDENTITY
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                         Options.error_if_exists: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                       Options.create_if_missing: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                         Options.paranoid_checks: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                                     Options.env: 0x55d8fbcd8c40
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                                      Options.fs: PosixFileSystem
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                                Options.info_log: 0x55d8fc7bce80
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                Options.max_file_opening_threads: 16
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                              Options.statistics: (nil)
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                               Options.use_fsync: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                       Options.max_log_file_size: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                         Options.allow_fallocate: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                        Options.use_direct_reads: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:          Options.create_missing_column_families: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                              Options.db_log_dir: 
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                                 Options.wal_dir: 
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                   Options.advise_random_on_open: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                    Options.write_buffer_manager: 0x55d8fc7ccb40
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                            Options.rate_limiter: (nil)
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                  Options.unordered_write: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                               Options.row_cache: None
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                              Options.wal_filter: None
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:             Options.allow_ingest_behind: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:             Options.two_write_queues: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:             Options.manual_wal_flush: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:             Options.wal_compression: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:             Options.atomic_flush: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                 Options.log_readahead_size: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:             Options.allow_data_in_errors: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:             Options.db_host_id: __hostname__
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:             Options.max_background_jobs: 2
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:             Options.max_background_compactions: -1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:             Options.max_subcompactions: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:             Options.max_total_wal_size: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                          Options.max_open_files: -1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                          Options.bytes_per_sync: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:       Options.compaction_readahead_size: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                  Options.max_background_flushes: -1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: Compression algorithms supported:
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: #011kZSTD supported: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: #011kXpressCompression supported: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: #011kBZip2Compression supported: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: #011kLZ4Compression supported: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: #011kZlibCompression supported: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: #011kSnappyCompression supported: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:           Options.merge_operator: 
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:        Options.compaction_filter: None
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d8fc7bca80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d8fc7b51f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:        Options.write_buffer_size: 33554432
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:  Options.max_write_buffer_number: 2
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:          Options.compression: NoCompression
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:             Options.num_levels: 7
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e48d9b43-c5ab-4d63-a013-45c19571f3aa
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672515935608, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672515938601, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672515, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "5SC0VGU810A11GMCKGD6", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672515938829, "job": 1, "event": "recovery_finished"}
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55d8fc7dee00
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: DB pointer 0x55d8fc868000
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d8fc7b51f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@-1(???) e0 preinit fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(probing) e0 win_standalone_election
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(probing) e1 win_standalone_election
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: paxos.0).electionLogic(2) init, last seen epoch 2
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-12-02T10:48:33.843181Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader).mds e1 new map
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1

No filesystems configured
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: log_channel(cluster) log [DBG] : fsmap 
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  2 05:48:35 np0005542249 ceph-mon[74724]: mkfs 95bc4eaa-1a14-59bf-acf2-4b3da055547d
Dec  2 05:48:36 np0005542249 podman[74725]: 2025-12-02 10:48:36.000509491 +0000 UTC m=+0.087106963 container create 20d24dea020702121d562060c1d66ce1a1605ffbb87fb1e81d77c353cfbbb055 (image=quay.io/ceph/ceph:v18, name=nifty_fermat, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  2 05:48:36 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Dec  2 05:48:36 np0005542249 ceph-mon[74724]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec  2 05:48:36 np0005542249 ceph-mon[74724]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec  2 05:48:36 np0005542249 ceph-mon[74724]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  2 05:48:36 np0005542249 systemd[1]: Started libpod-conmon-20d24dea020702121d562060c1d66ce1a1605ffbb87fb1e81d77c353cfbbb055.scope.
Dec  2 05:48:36 np0005542249 podman[74725]: 2025-12-02 10:48:35.966572364 +0000 UTC m=+0.053169776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:48:36 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:48:36 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c5c9793646d93a64acd9e097c34439c9500f7c0b150e18bff0f83882ee3c28/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:36 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c5c9793646d93a64acd9e097c34439c9500f7c0b150e18bff0f83882ee3c28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:36 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c5c9793646d93a64acd9e097c34439c9500f7c0b150e18bff0f83882ee3c28/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:36 np0005542249 podman[74725]: 2025-12-02 10:48:36.109355708 +0000 UTC m=+0.195953180 container init 20d24dea020702121d562060c1d66ce1a1605ffbb87fb1e81d77c353cfbbb055 (image=quay.io/ceph/ceph:v18, name=nifty_fermat, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:48:36 np0005542249 podman[74725]: 2025-12-02 10:48:36.125312149 +0000 UTC m=+0.211909571 container start 20d24dea020702121d562060c1d66ce1a1605ffbb87fb1e81d77c353cfbbb055 (image=quay.io/ceph/ceph:v18, name=nifty_fermat, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:48:36 np0005542249 podman[74725]: 2025-12-02 10:48:36.129194274 +0000 UTC m=+0.215791716 container attach 20d24dea020702121d562060c1d66ce1a1605ffbb87fb1e81d77c353cfbbb055 (image=quay.io/ceph/ceph:v18, name=nifty_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:48:36 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Dec  2 05:48:36 np0005542249 ceph-mon[74724]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1324613057' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  2 05:48:36 np0005542249 nifty_fermat[74779]:  cluster:
Dec  2 05:48:36 np0005542249 nifty_fermat[74779]:    id:     95bc4eaa-1a14-59bf-acf2-4b3da055547d
Dec  2 05:48:36 np0005542249 nifty_fermat[74779]:    health: HEALTH_OK
Dec  2 05:48:36 np0005542249 nifty_fermat[74779]: 
Dec  2 05:48:36 np0005542249 nifty_fermat[74779]:  services:
Dec  2 05:48:36 np0005542249 nifty_fermat[74779]:    mon: 1 daemons, quorum compute-0 (age 0.555325s)
Dec  2 05:48:36 np0005542249 nifty_fermat[74779]:    mgr: no daemons active
Dec  2 05:48:36 np0005542249 nifty_fermat[74779]:    osd: 0 osds: 0 up, 0 in
Dec  2 05:48:36 np0005542249 nifty_fermat[74779]: 
Dec  2 05:48:36 np0005542249 nifty_fermat[74779]:  data:
Dec  2 05:48:36 np0005542249 nifty_fermat[74779]:    pools:   0 pools, 0 pgs
Dec  2 05:48:36 np0005542249 nifty_fermat[74779]:    objects: 0 objects, 0 B
Dec  2 05:48:36 np0005542249 nifty_fermat[74779]:    usage:   0 B used, 0 B / 0 B avail
Dec  2 05:48:36 np0005542249 nifty_fermat[74779]:    pgs:     
Dec  2 05:48:36 np0005542249 nifty_fermat[74779]: 
Dec  2 05:48:36 np0005542249 systemd[1]: libpod-20d24dea020702121d562060c1d66ce1a1605ffbb87fb1e81d77c353cfbbb055.scope: Deactivated successfully.
Dec  2 05:48:36 np0005542249 podman[74725]: 2025-12-02 10:48:36.551983606 +0000 UTC m=+0.638581058 container died 20d24dea020702121d562060c1d66ce1a1605ffbb87fb1e81d77c353cfbbb055 (image=quay.io/ceph/ceph:v18, name=nifty_fermat, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  2 05:48:36 np0005542249 systemd[1]: var-lib-containers-storage-overlay-41c5c9793646d93a64acd9e097c34439c9500f7c0b150e18bff0f83882ee3c28-merged.mount: Deactivated successfully.
Dec  2 05:48:36 np0005542249 podman[74725]: 2025-12-02 10:48:36.744750308 +0000 UTC m=+0.831347760 container remove 20d24dea020702121d562060c1d66ce1a1605ffbb87fb1e81d77c353cfbbb055 (image=quay.io/ceph/ceph:v18, name=nifty_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  2 05:48:36 np0005542249 systemd[1]: libpod-conmon-20d24dea020702121d562060c1d66ce1a1605ffbb87fb1e81d77c353cfbbb055.scope: Deactivated successfully.
Dec  2 05:48:36 np0005542249 podman[74817]: 2025-12-02 10:48:36.845121078 +0000 UTC m=+0.069853677 container create 05862179a7cf838793d936eadc4c979eca4cd5e0ea492de286b5ace0a7a62278 (image=quay.io/ceph/ceph:v18, name=nostalgic_rosalind, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:48:36 np0005542249 systemd[1]: Started libpod-conmon-05862179a7cf838793d936eadc4c979eca4cd5e0ea492de286b5ace0a7a62278.scope.
Dec  2 05:48:36 np0005542249 podman[74817]: 2025-12-02 10:48:36.817262725 +0000 UTC m=+0.041995394 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:48:36 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:48:36 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2708c39718e7ab8cb783d4c975fa905c85e83daaf94bb463698855058c75572f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:36 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2708c39718e7ab8cb783d4c975fa905c85e83daaf94bb463698855058c75572f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:36 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2708c39718e7ab8cb783d4c975fa905c85e83daaf94bb463698855058c75572f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:36 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2708c39718e7ab8cb783d4c975fa905c85e83daaf94bb463698855058c75572f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:36 np0005542249 podman[74817]: 2025-12-02 10:48:36.93971912 +0000 UTC m=+0.164451699 container init 05862179a7cf838793d936eadc4c979eca4cd5e0ea492de286b5ace0a7a62278 (image=quay.io/ceph/ceph:v18, name=nostalgic_rosalind, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:48:36 np0005542249 podman[74817]: 2025-12-02 10:48:36.947045359 +0000 UTC m=+0.171777948 container start 05862179a7cf838793d936eadc4c979eca4cd5e0ea492de286b5ace0a7a62278 (image=quay.io/ceph/ceph:v18, name=nostalgic_rosalind, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:48:36 np0005542249 podman[74817]: 2025-12-02 10:48:36.951226022 +0000 UTC m=+0.175958621 container attach 05862179a7cf838793d936eadc4c979eca4cd5e0ea492de286b5ace0a7a62278 (image=quay.io/ceph/ceph:v18, name=nostalgic_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Dec  2 05:48:37 np0005542249 ceph-mon[74724]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  2 05:48:37 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Dec  2 05:48:37 np0005542249 ceph-mon[74724]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4246539685' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  2 05:48:37 np0005542249 ceph-mon[74724]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4246539685' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  2 05:48:37 np0005542249 nostalgic_rosalind[74834]: 
Dec  2 05:48:37 np0005542249 nostalgic_rosalind[74834]: [global]
Dec  2 05:48:37 np0005542249 nostalgic_rosalind[74834]: 	fsid = 95bc4eaa-1a14-59bf-acf2-4b3da055547d
Dec  2 05:48:37 np0005542249 nostalgic_rosalind[74834]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec  2 05:48:37 np0005542249 nostalgic_rosalind[74834]: 	osd_crush_chooseleaf_type = 0
Dec  2 05:48:37 np0005542249 systemd[1]: libpod-05862179a7cf838793d936eadc4c979eca4cd5e0ea492de286b5ace0a7a62278.scope: Deactivated successfully.
Dec  2 05:48:37 np0005542249 podman[74817]: 2025-12-02 10:48:37.34500026 +0000 UTC m=+0.569732849 container died 05862179a7cf838793d936eadc4c979eca4cd5e0ea492de286b5ace0a7a62278 (image=quay.io/ceph/ceph:v18, name=nostalgic_rosalind, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:48:37 np0005542249 systemd[1]: var-lib-containers-storage-overlay-2708c39718e7ab8cb783d4c975fa905c85e83daaf94bb463698855058c75572f-merged.mount: Deactivated successfully.
Dec  2 05:48:37 np0005542249 podman[74817]: 2025-12-02 10:48:37.402879832 +0000 UTC m=+0.627612441 container remove 05862179a7cf838793d936eadc4c979eca4cd5e0ea492de286b5ace0a7a62278 (image=quay.io/ceph/ceph:v18, name=nostalgic_rosalind, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:48:37 np0005542249 systemd[1]: libpod-conmon-05862179a7cf838793d936eadc4c979eca4cd5e0ea492de286b5ace0a7a62278.scope: Deactivated successfully.
Dec  2 05:48:37 np0005542249 podman[74873]: 2025-12-02 10:48:37.49058815 +0000 UTC m=+0.059879268 container create 92801c1abba6297046d9fa1b5675adbbc5ead930bdf050699e40659916fa0105 (image=quay.io/ceph/ceph:v18, name=reverent_albattani, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:48:37 np0005542249 systemd[1]: Started libpod-conmon-92801c1abba6297046d9fa1b5675adbbc5ead930bdf050699e40659916fa0105.scope.
Dec  2 05:48:37 np0005542249 podman[74873]: 2025-12-02 10:48:37.465937684 +0000 UTC m=+0.035228862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:48:37 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:48:37 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a5d55dfb1d34c1df9c94e534af3f6948b234c1cf9b802704fd3a1cc1c572e5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:37 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a5d55dfb1d34c1df9c94e534af3f6948b234c1cf9b802704fd3a1cc1c572e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:37 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a5d55dfb1d34c1df9c94e534af3f6948b234c1cf9b802704fd3a1cc1c572e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:37 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a5d55dfb1d34c1df9c94e534af3f6948b234c1cf9b802704fd3a1cc1c572e5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:37 np0005542249 podman[74873]: 2025-12-02 10:48:37.611340719 +0000 UTC m=+0.180631897 container init 92801c1abba6297046d9fa1b5675adbbc5ead930bdf050699e40659916fa0105 (image=quay.io/ceph/ceph:v18, name=reverent_albattani, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:48:37 np0005542249 podman[74873]: 2025-12-02 10:48:37.620756923 +0000 UTC m=+0.190048031 container start 92801c1abba6297046d9fa1b5675adbbc5ead930bdf050699e40659916fa0105 (image=quay.io/ceph/ceph:v18, name=reverent_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  2 05:48:37 np0005542249 podman[74873]: 2025-12-02 10:48:37.624766941 +0000 UTC m=+0.194058109 container attach 92801c1abba6297046d9fa1b5675adbbc5ead930bdf050699e40659916fa0105 (image=quay.io/ceph/ceph:v18, name=reverent_albattani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  2 05:48:38 np0005542249 ceph-mon[74724]: from='client.? 192.168.122.100:0/4246539685' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  2 05:48:38 np0005542249 ceph-mon[74724]: from='client.? 192.168.122.100:0/4246539685' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  2 05:48:38 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:48:38 np0005542249 ceph-mon[74724]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3788602854' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:48:38 np0005542249 systemd[1]: libpod-92801c1abba6297046d9fa1b5675adbbc5ead930bdf050699e40659916fa0105.scope: Deactivated successfully.
Dec  2 05:48:38 np0005542249 podman[74873]: 2025-12-02 10:48:38.048654283 +0000 UTC m=+0.617945381 container died 92801c1abba6297046d9fa1b5675adbbc5ead930bdf050699e40659916fa0105 (image=quay.io/ceph/ceph:v18, name=reverent_albattani, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:48:38 np0005542249 systemd[1]: var-lib-containers-storage-overlay-79a5d55dfb1d34c1df9c94e534af3f6948b234c1cf9b802704fd3a1cc1c572e5-merged.mount: Deactivated successfully.
Dec  2 05:48:38 np0005542249 podman[74873]: 2025-12-02 10:48:38.096303259 +0000 UTC m=+0.665594387 container remove 92801c1abba6297046d9fa1b5675adbbc5ead930bdf050699e40659916fa0105 (image=quay.io/ceph/ceph:v18, name=reverent_albattani, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:48:38 np0005542249 systemd[1]: libpod-conmon-92801c1abba6297046d9fa1b5675adbbc5ead930bdf050699e40659916fa0105.scope: Deactivated successfully.
Dec  2 05:48:38 np0005542249 systemd[1]: Stopping Ceph mon.compute-0 for 95bc4eaa-1a14-59bf-acf2-4b3da055547d...
Dec  2 05:48:38 np0005542249 ceph-mon[74724]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec  2 05:48:38 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec  2 05:48:38 np0005542249 ceph-mon[74724]: mon.compute-0@0(leader) e1 shutdown
Dec  2 05:48:38 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0[74720]: 2025-12-02T10:48:38.387+0000 7f316f535640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec  2 05:48:38 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0[74720]: 2025-12-02T10:48:38.387+0000 7f316f535640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec  2 05:48:38 np0005542249 ceph-mon[74724]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  2 05:48:38 np0005542249 ceph-mon[74724]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  2 05:48:38 np0005542249 podman[74958]: 2025-12-02 10:48:38.552096011 +0000 UTC m=+0.224927543 container died 9109878f075b6a0322b21b1e1e7e5ccd50ae2c5a260a3c59132af31c259cd0fe (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:48:38 np0005542249 systemd[1]: var-lib-containers-storage-overlay-16a23467ac52e0f8ea6a1135292d663776d625009fbb3bf60c4dbd2ed2987d4f-merged.mount: Deactivated successfully.
Dec  2 05:48:38 np0005542249 podman[74958]: 2025-12-02 10:48:38.588340969 +0000 UTC m=+0.261172481 container remove 9109878f075b6a0322b21b1e1e7e5ccd50ae2c5a260a3c59132af31c259cd0fe (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  2 05:48:38 np0005542249 bash[74958]: ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0
Dec  2 05:48:38 np0005542249 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 05:48:38 np0005542249 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 05:48:38 np0005542249 systemd[1]: ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d@mon.compute-0.service: Deactivated successfully.
Dec  2 05:48:38 np0005542249 systemd[1]: Stopped Ceph mon.compute-0 for 95bc4eaa-1a14-59bf-acf2-4b3da055547d.
Dec  2 05:48:38 np0005542249 systemd[1]: ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d@mon.compute-0.service: Consumed 1.194s CPU time.
Dec  2 05:48:38 np0005542249 systemd[1]: Starting Ceph mon.compute-0 for 95bc4eaa-1a14-59bf-acf2-4b3da055547d...
Dec  2 05:48:38 np0005542249 podman[75061]: 2025-12-02 10:48:38.998763287 +0000 UTC m=+0.048483840 container create cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:48:39 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e858e10f33e19e7fcdf7f29f1f633b3ebf4b082c09b6e65db48d426cac3af8ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:39 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e858e10f33e19e7fcdf7f29f1f633b3ebf4b082c09b6e65db48d426cac3af8ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:39 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e858e10f33e19e7fcdf7f29f1f633b3ebf4b082c09b6e65db48d426cac3af8ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:39 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e858e10f33e19e7fcdf7f29f1f633b3ebf4b082c09b6e65db48d426cac3af8ff/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:39 np0005542249 podman[75061]: 2025-12-02 10:48:39.06814494 +0000 UTC m=+0.117865503 container init cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  2 05:48:39 np0005542249 podman[75061]: 2025-12-02 10:48:38.974116461 +0000 UTC m=+0.023837024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:48:39 np0005542249 podman[75061]: 2025-12-02 10:48:39.082346783 +0000 UTC m=+0.132067336 container start cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:48:39 np0005542249 bash[75061]: cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b
Dec  2 05:48:39 np0005542249 systemd[1]: Started Ceph mon.compute-0 for 95bc4eaa-1a14-59bf-acf2-4b3da055547d.
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: set uid:gid to 167:167 (ceph:ceph)
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: pidfile_write: ignore empty --pid-file
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: load: jerasure load: lrc 
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: RocksDB version: 7.9.2
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: Git sha 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: DB SUMMARY
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: DB Session ID:  FJAG8GF4HHVLV7YXGWEG
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: CURRENT file:  CURRENT
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: IDENTITY file:  IDENTITY
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 54564 ; 
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                         Options.error_if_exists: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                       Options.create_if_missing: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                         Options.paranoid_checks: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                                     Options.env: 0x560e29a79c40
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                                      Options.fs: PosixFileSystem
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                                Options.info_log: 0x560e2b4ef040
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                Options.max_file_opening_threads: 16
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                              Options.statistics: (nil)
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                               Options.use_fsync: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                       Options.max_log_file_size: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                         Options.allow_fallocate: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                        Options.use_direct_reads: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:          Options.create_missing_column_families: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                              Options.db_log_dir: 
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                                 Options.wal_dir: 
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                   Options.advise_random_on_open: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                    Options.write_buffer_manager: 0x560e2b4feb40
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                            Options.rate_limiter: (nil)
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                  Options.unordered_write: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                               Options.row_cache: None
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                              Options.wal_filter: None
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:             Options.allow_ingest_behind: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:             Options.two_write_queues: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:             Options.manual_wal_flush: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:             Options.wal_compression: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:             Options.atomic_flush: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                 Options.log_readahead_size: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:             Options.allow_data_in_errors: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:             Options.db_host_id: __hostname__
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:             Options.max_background_jobs: 2
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:             Options.max_background_compactions: -1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:             Options.max_subcompactions: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:             Options.max_total_wal_size: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                          Options.max_open_files: -1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                          Options.bytes_per_sync: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:       Options.compaction_readahead_size: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                  Options.max_background_flushes: -1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: Compression algorithms supported:
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: #011kZSTD supported: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: #011kXpressCompression supported: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: #011kBZip2Compression supported: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: #011kLZ4Compression supported: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: #011kZlibCompression supported: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: #011kSnappyCompression supported: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:           Options.merge_operator: 
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:        Options.compaction_filter: None
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560e2b4eec40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560e2b4e71f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:        Options.write_buffer_size: 33554432
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:  Options.max_write_buffer_number: 2
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:          Options.compression: NoCompression
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:             Options.num_levels: 7
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e48d9b43-c5ab-4d63-a013-45c19571f3aa
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672519142201, "job": 1, "event": "recovery_started", "wal_files": [9]}
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672519145618, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 54153, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 52695, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3023, "raw_average_key_size": 30, "raw_value_size": 50297, "raw_average_value_size": 502, "num_data_blocks": 8, "num_entries": 100, "num_filter_entries": 100, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672519, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672519145760, "job": 1, "event": "recovery_finished"}
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560e2b510e00
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: DB pointer 0x560e2b59a000
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl.cc:1111] 
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0   54.78 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     18.7      0.00              0.00         1    0.003       0      0       0.0       0.0
 Sum      2/0   54.78 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     18.7      0.00              0.00         1    0.003       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     18.7      0.00              0.00         1    0.003       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.7      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 2.85 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 2.85 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560e2b4e71f0#2 capacity: 512.00 MB usage: 1.73 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(2,0.95 KB,0.000181794%)

** File Read Latency Histogram By Level [default] **
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: mon.compute-0@-1(???) e1 preinit fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: mon.compute-0@-1(???).mds e1 new map
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: mon.compute-0@-1(???).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1

No filesystems configured
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(probing) e1 win_standalone_election
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : fsmap 
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec  2 05:48:39 np0005542249 podman[75082]: 2025-12-02 10:48:39.175134557 +0000 UTC m=+0.055607652 container create 017130dcedc1317f38b931a6a454b4f09e7090330762dbc75f4497470c824e40 (image=quay.io/ceph/ceph:v18, name=hardcore_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:48:39 np0005542249 systemd[1]: Started libpod-conmon-017130dcedc1317f38b931a6a454b4f09e7090330762dbc75f4497470c824e40.scope.
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  2 05:48:39 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:48:39 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e035d5ee715457d788f95e6c79d347e0f71979b7548aa4180291577e8692372/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:39 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e035d5ee715457d788f95e6c79d347e0f71979b7548aa4180291577e8692372/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:39 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e035d5ee715457d788f95e6c79d347e0f71979b7548aa4180291577e8692372/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:39 np0005542249 podman[75082]: 2025-12-02 10:48:39.15560195 +0000 UTC m=+0.036075135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:48:39 np0005542249 podman[75082]: 2025-12-02 10:48:39.26193159 +0000 UTC m=+0.142404755 container init 017130dcedc1317f38b931a6a454b4f09e7090330762dbc75f4497470c824e40 (image=quay.io/ceph/ceph:v18, name=hardcore_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  2 05:48:39 np0005542249 podman[75082]: 2025-12-02 10:48:39.276119873 +0000 UTC m=+0.156592988 container start 017130dcedc1317f38b931a6a454b4f09e7090330762dbc75f4497470c824e40 (image=quay.io/ceph/ceph:v18, name=hardcore_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:48:39 np0005542249 podman[75082]: 2025-12-02 10:48:39.279985878 +0000 UTC m=+0.160459063 container attach 017130dcedc1317f38b931a6a454b4f09e7090330762dbc75f4497470c824e40 (image=quay.io/ceph/ceph:v18, name=hardcore_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Dec  2 05:48:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Dec  2 05:48:39 np0005542249 systemd[1]: libpod-017130dcedc1317f38b931a6a454b4f09e7090330762dbc75f4497470c824e40.scope: Deactivated successfully.
Dec  2 05:48:39 np0005542249 podman[75082]: 2025-12-02 10:48:39.705946295 +0000 UTC m=+0.586419380 container died 017130dcedc1317f38b931a6a454b4f09e7090330762dbc75f4497470c824e40 (image=quay.io/ceph/ceph:v18, name=hardcore_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:48:39 np0005542249 systemd[1]: var-lib-containers-storage-overlay-2e035d5ee715457d788f95e6c79d347e0f71979b7548aa4180291577e8692372-merged.mount: Deactivated successfully.
Dec  2 05:48:39 np0005542249 podman[75082]: 2025-12-02 10:48:39.947372212 +0000 UTC m=+0.827845327 container remove 017130dcedc1317f38b931a6a454b4f09e7090330762dbc75f4497470c824e40 (image=quay.io/ceph/ceph:v18, name=hardcore_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  2 05:48:39 np0005542249 systemd[1]: libpod-conmon-017130dcedc1317f38b931a6a454b4f09e7090330762dbc75f4497470c824e40.scope: Deactivated successfully.
Dec  2 05:48:40 np0005542249 podman[75174]: 2025-12-02 10:48:40.028653546 +0000 UTC m=+0.053234268 container create 30da6f9254e6d76f78db4907f80201e47f37b5b9ece6ced858c3c3072b54f9ee (image=quay.io/ceph/ceph:v18, name=magical_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:48:40 np0005542249 systemd[1]: Started libpod-conmon-30da6f9254e6d76f78db4907f80201e47f37b5b9ece6ced858c3c3072b54f9ee.scope.
Dec  2 05:48:40 np0005542249 podman[75174]: 2025-12-02 10:48:40.004806182 +0000 UTC m=+0.029386924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:48:40 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:48:40 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a32da79fa5152f4715fa3bec69eddb20c4017cf5e50f1e935e32df9731d9fd88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:40 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a32da79fa5152f4715fa3bec69eddb20c4017cf5e50f1e935e32df9731d9fd88/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:40 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a32da79fa5152f4715fa3bec69eddb20c4017cf5e50f1e935e32df9731d9fd88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:40 np0005542249 podman[75174]: 2025-12-02 10:48:40.128060869 +0000 UTC m=+0.152641671 container init 30da6f9254e6d76f78db4907f80201e47f37b5b9ece6ced858c3c3072b54f9ee (image=quay.io/ceph/ceph:v18, name=magical_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  2 05:48:40 np0005542249 podman[75174]: 2025-12-02 10:48:40.136938838 +0000 UTC m=+0.161519530 container start 30da6f9254e6d76f78db4907f80201e47f37b5b9ece6ced858c3c3072b54f9ee (image=quay.io/ceph/ceph:v18, name=magical_ishizaka, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  2 05:48:40 np0005542249 podman[75174]: 2025-12-02 10:48:40.140626298 +0000 UTC m=+0.165207030 container attach 30da6f9254e6d76f78db4907f80201e47f37b5b9ece6ced858c3c3072b54f9ee (image=quay.io/ceph/ceph:v18, name=magical_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  2 05:48:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Dec  2 05:48:40 np0005542249 systemd[1]: libpod-30da6f9254e6d76f78db4907f80201e47f37b5b9ece6ced858c3c3072b54f9ee.scope: Deactivated successfully.
Dec  2 05:48:40 np0005542249 podman[75174]: 2025-12-02 10:48:40.549832343 +0000 UTC m=+0.574413025 container died 30da6f9254e6d76f78db4907f80201e47f37b5b9ece6ced858c3c3072b54f9ee (image=quay.io/ceph/ceph:v18, name=magical_ishizaka, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:48:40 np0005542249 systemd[1]: var-lib-containers-storage-overlay-a32da79fa5152f4715fa3bec69eddb20c4017cf5e50f1e935e32df9731d9fd88-merged.mount: Deactivated successfully.
Dec  2 05:48:40 np0005542249 podman[75174]: 2025-12-02 10:48:40.591314153 +0000 UTC m=+0.615894835 container remove 30da6f9254e6d76f78db4907f80201e47f37b5b9ece6ced858c3c3072b54f9ee (image=quay.io/ceph/ceph:v18, name=magical_ishizaka, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  2 05:48:40 np0005542249 systemd[1]: libpod-conmon-30da6f9254e6d76f78db4907f80201e47f37b5b9ece6ced858c3c3072b54f9ee.scope: Deactivated successfully.
Dec  2 05:48:40 np0005542249 systemd[1]: Reloading.
Dec  2 05:48:40 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:48:40 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:48:40 np0005542249 systemd[1]: Reloading.
Dec  2 05:48:41 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:48:41 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:48:41 np0005542249 systemd[1]: Starting Ceph mgr.compute-0.ntxcvs for 95bc4eaa-1a14-59bf-acf2-4b3da055547d...
Dec  2 05:48:41 np0005542249 podman[75353]: 2025-12-02 10:48:41.399737843 +0000 UTC m=+0.041000208 container create e03605b236b56660f5174f63911714aeac7aabe1e5dbe09eb60fe0086b6a7093 (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Dec  2 05:48:41 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eccb66f16a26faf4a5e1787ba91edb6752a8a5ed028e8bbe7b08e51fb99492b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:41 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eccb66f16a26faf4a5e1787ba91edb6752a8a5ed028e8bbe7b08e51fb99492b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:41 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eccb66f16a26faf4a5e1787ba91edb6752a8a5ed028e8bbe7b08e51fb99492b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:41 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eccb66f16a26faf4a5e1787ba91edb6752a8a5ed028e8bbe7b08e51fb99492b6/merged/var/lib/ceph/mgr/ceph-compute-0.ntxcvs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:41 np0005542249 podman[75353]: 2025-12-02 10:48:41.465900648 +0000 UTC m=+0.107163053 container init e03605b236b56660f5174f63911714aeac7aabe1e5dbe09eb60fe0086b6a7093 (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  2 05:48:41 np0005542249 podman[75353]: 2025-12-02 10:48:41.37925997 +0000 UTC m=+0.020522355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:48:41 np0005542249 podman[75353]: 2025-12-02 10:48:41.478430917 +0000 UTC m=+0.119693292 container start e03605b236b56660f5174f63911714aeac7aabe1e5dbe09eb60fe0086b6a7093 (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  2 05:48:41 np0005542249 bash[75353]: e03605b236b56660f5174f63911714aeac7aabe1e5dbe09eb60fe0086b6a7093
Dec  2 05:48:41 np0005542249 systemd[1]: Started Ceph mgr.compute-0.ntxcvs for 95bc4eaa-1a14-59bf-acf2-4b3da055547d.
Dec  2 05:48:41 np0005542249 ceph-mgr[75372]: set uid:gid to 167:167 (ceph:ceph)
Dec  2 05:48:41 np0005542249 ceph-mgr[75372]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Dec  2 05:48:41 np0005542249 ceph-mgr[75372]: pidfile_write: ignore empty --pid-file
Dec  2 05:48:41 np0005542249 podman[75373]: 2025-12-02 10:48:41.55784465 +0000 UTC m=+0.046097065 container create 44d3a254a781d3d7a4e2f95a8b8fa6ba96958946df8a49908c16acd6f8f64e4d (image=quay.io/ceph/ceph:v18, name=eloquent_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:48:41 np0005542249 systemd[1]: Started libpod-conmon-44d3a254a781d3d7a4e2f95a8b8fa6ba96958946df8a49908c16acd6f8f64e4d.scope.
Dec  2 05:48:41 np0005542249 podman[75373]: 2025-12-02 10:48:41.537684266 +0000 UTC m=+0.025936731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:48:41 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'alerts'
Dec  2 05:48:41 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:48:41 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2c6d412869a27e2457f947ed8be79e82fa99469651a2c91da63b4d7804c776f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:41 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2c6d412869a27e2457f947ed8be79e82fa99469651a2c91da63b4d7804c776f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:41 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2c6d412869a27e2457f947ed8be79e82fa99469651a2c91da63b4d7804c776f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:41 np0005542249 ceph-mgr[75372]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  2 05:48:41 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'balancer'
Dec  2 05:48:41 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:48:41.934+0000 7f12d8dd4140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  2 05:48:42 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:48:42.180+0000 7f12d8dd4140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  2 05:48:42 np0005542249 ceph-mgr[75372]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  2 05:48:42 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'cephadm'
Dec  2 05:48:42 np0005542249 podman[75373]: 2025-12-02 10:48:42.546646689 +0000 UTC m=+1.034899214 container init 44d3a254a781d3d7a4e2f95a8b8fa6ba96958946df8a49908c16acd6f8f64e4d (image=quay.io/ceph/ceph:v18, name=eloquent_shamir, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:48:42 np0005542249 podman[75373]: 2025-12-02 10:48:42.56075097 +0000 UTC m=+1.049003385 container start 44d3a254a781d3d7a4e2f95a8b8fa6ba96958946df8a49908c16acd6f8f64e4d (image=quay.io/ceph/ceph:v18, name=eloquent_shamir, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  2 05:48:42 np0005542249 podman[75373]: 2025-12-02 10:48:42.66115815 +0000 UTC m=+1.149410665 container attach 44d3a254a781d3d7a4e2f95a8b8fa6ba96958946df8a49908c16acd6f8f64e4d (image=quay.io/ceph/ceph:v18, name=eloquent_shamir, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  2 05:48:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  2 05:48:42 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3397869837' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]: 
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]: {
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:    "fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:    "health": {
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "status": "HEALTH_OK",
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "checks": {},
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "mutes": []
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:    },
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:    "election_epoch": 5,
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:    "quorum": [
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        0
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:    ],
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:    "quorum_names": [
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "compute-0"
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:    ],
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:    "quorum_age": 3,
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:    "monmap": {
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "epoch": 1,
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "min_mon_release_name": "reef",
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "num_mons": 1
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:    },
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:    "osdmap": {
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "epoch": 1,
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "num_osds": 0,
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "num_up_osds": 0,
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "osd_up_since": 0,
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "num_in_osds": 0,
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "osd_in_since": 0,
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "num_remapped_pgs": 0
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:    },
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:    "pgmap": {
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "pgs_by_state": [],
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "num_pgs": 0,
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "num_pools": 0,
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "num_objects": 0,
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "data_bytes": 0,
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "bytes_used": 0,
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "bytes_avail": 0,
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "bytes_total": 0
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:    },
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:    "fsmap": {
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "epoch": 1,
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "by_rank": [],
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "up:standby": 0
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:    },
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:    "mgrmap": {
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "available": false,
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "num_standbys": 0,
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "modules": [
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:            "iostat",
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:            "nfs",
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:            "restful"
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        ],
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "services": {}
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:    },
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:    "servicemap": {
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "epoch": 1,
Dec  2 05:48:42 np0005542249 eloquent_shamir[75414]:        "modified": "2025-12-02T10:48:35.983755+0000",
Dec  2 05:48:43 np0005542249 eloquent_shamir[75414]:        "services": {}
Dec  2 05:48:43 np0005542249 eloquent_shamir[75414]:    },
Dec  2 05:48:43 np0005542249 eloquent_shamir[75414]:    "progress_events": {}
Dec  2 05:48:43 np0005542249 eloquent_shamir[75414]: }
Dec  2 05:48:43 np0005542249 systemd[1]: libpod-44d3a254a781d3d7a4e2f95a8b8fa6ba96958946df8a49908c16acd6f8f64e4d.scope: Deactivated successfully.
Dec  2 05:48:43 np0005542249 conmon[75414]: conmon 44d3a254a781d3d7a4e2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-44d3a254a781d3d7a4e2f95a8b8fa6ba96958946df8a49908c16acd6f8f64e4d.scope/container/memory.events
Dec  2 05:48:43 np0005542249 podman[75373]: 2025-12-02 10:48:43.017596111 +0000 UTC m=+1.505848536 container died 44d3a254a781d3d7a4e2f95a8b8fa6ba96958946df8a49908c16acd6f8f64e4d (image=quay.io/ceph/ceph:v18, name=eloquent_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:48:43 np0005542249 systemd[1]: var-lib-containers-storage-overlay-c2c6d412869a27e2457f947ed8be79e82fa99469651a2c91da63b4d7804c776f-merged.mount: Deactivated successfully.
Dec  2 05:48:43 np0005542249 podman[75373]: 2025-12-02 10:48:43.441985725 +0000 UTC m=+1.930238140 container remove 44d3a254a781d3d7a4e2f95a8b8fa6ba96958946df8a49908c16acd6f8f64e4d (image=quay.io/ceph/ceph:v18, name=eloquent_shamir, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  2 05:48:43 np0005542249 systemd[1]: libpod-conmon-44d3a254a781d3d7a4e2f95a8b8fa6ba96958946df8a49908c16acd6f8f64e4d.scope: Deactivated successfully.
Dec  2 05:48:44 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'crash'
Dec  2 05:48:44 np0005542249 ceph-mgr[75372]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  2 05:48:44 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'dashboard'
Dec  2 05:48:44 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:48:44.373+0000 7f12d8dd4140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  2 05:48:45 np0005542249 podman[75463]: 2025-12-02 10:48:45.538768671 +0000 UTC m=+0.063227928 container create 16bd0c5e6fc2d7e20a0426eebba84d8eac9373c7f6b5fc40aecdb2fe18eae621 (image=quay.io/ceph/ceph:v18, name=elegant_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:48:45 np0005542249 systemd[1]: Started libpod-conmon-16bd0c5e6fc2d7e20a0426eebba84d8eac9373c7f6b5fc40aecdb2fe18eae621.scope.
Dec  2 05:48:45 np0005542249 podman[75463]: 2025-12-02 10:48:45.508679389 +0000 UTC m=+0.033138696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:48:45 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:48:45 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fb125ae9790df68d5da269d49cab6e407aa068b9ccfb4bbf70998897af5aee8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:45 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fb125ae9790df68d5da269d49cab6e407aa068b9ccfb4bbf70998897af5aee8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:45 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fb125ae9790df68d5da269d49cab6e407aa068b9ccfb4bbf70998897af5aee8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:45 np0005542249 podman[75463]: 2025-12-02 10:48:45.628839292 +0000 UTC m=+0.153298549 container init 16bd0c5e6fc2d7e20a0426eebba84d8eac9373c7f6b5fc40aecdb2fe18eae621 (image=quay.io/ceph/ceph:v18, name=elegant_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:48:45 np0005542249 podman[75463]: 2025-12-02 10:48:45.638136253 +0000 UTC m=+0.162595480 container start 16bd0c5e6fc2d7e20a0426eebba84d8eac9373c7f6b5fc40aecdb2fe18eae621 (image=quay.io/ceph/ceph:v18, name=elegant_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:48:45 np0005542249 podman[75463]: 2025-12-02 10:48:45.641252507 +0000 UTC m=+0.165711794 container attach 16bd0c5e6fc2d7e20a0426eebba84d8eac9373c7f6b5fc40aecdb2fe18eae621 (image=quay.io/ceph/ceph:v18, name=elegant_kalam, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:48:45 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'devicehealth'
Dec  2 05:48:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  2 05:48:46 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/147227416' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]: 
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]: {
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:    "fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:    "health": {
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "status": "HEALTH_OK",
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "checks": {},
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "mutes": []
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:    },
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:    "election_epoch": 5,
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:    "quorum": [
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        0
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:    ],
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:    "quorum_names": [
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "compute-0"
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:    ],
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:    "quorum_age": 6,
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:    "monmap": {
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "epoch": 1,
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "min_mon_release_name": "reef",
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "num_mons": 1
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:    },
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:    "osdmap": {
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "epoch": 1,
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "num_osds": 0,
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "num_up_osds": 0,
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "osd_up_since": 0,
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "num_in_osds": 0,
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "osd_in_since": 0,
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "num_remapped_pgs": 0
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:    },
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:    "pgmap": {
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "pgs_by_state": [],
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "num_pgs": 0,
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "num_pools": 0,
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "num_objects": 0,
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "data_bytes": 0,
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "bytes_used": 0,
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "bytes_avail": 0,
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "bytes_total": 0
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:    },
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:    "fsmap": {
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "epoch": 1,
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "by_rank": [],
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "up:standby": 0
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:    },
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:    "mgrmap": {
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "available": false,
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "num_standbys": 0,
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "modules": [
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:            "iostat",
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:            "nfs",
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:            "restful"
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        ],
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "services": {}
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:    },
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:    "servicemap": {
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "epoch": 1,
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "modified": "2025-12-02T10:48:35.983755+0000",
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:        "services": {}
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:    },
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]:    "progress_events": {}
Dec  2 05:48:46 np0005542249 elegant_kalam[75479]: }
Dec  2 05:48:46 np0005542249 systemd[1]: libpod-16bd0c5e6fc2d7e20a0426eebba84d8eac9373c7f6b5fc40aecdb2fe18eae621.scope: Deactivated successfully.
Dec  2 05:48:46 np0005542249 conmon[75479]: conmon 16bd0c5e6fc2d7e20a04 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-16bd0c5e6fc2d7e20a0426eebba84d8eac9373c7f6b5fc40aecdb2fe18eae621.scope/container/memory.events
Dec  2 05:48:46 np0005542249 podman[75463]: 2025-12-02 10:48:46.080941045 +0000 UTC m=+0.605400262 container died 16bd0c5e6fc2d7e20a0426eebba84d8eac9373c7f6b5fc40aecdb2fe18eae621 (image=quay.io/ceph/ceph:v18, name=elegant_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  2 05:48:46 np0005542249 systemd[1]: var-lib-containers-storage-overlay-3fb125ae9790df68d5da269d49cab6e407aa068b9ccfb4bbf70998897af5aee8-merged.mount: Deactivated successfully.
Dec  2 05:48:46 np0005542249 podman[75463]: 2025-12-02 10:48:46.118863588 +0000 UTC m=+0.643322805 container remove 16bd0c5e6fc2d7e20a0426eebba84d8eac9373c7f6b5fc40aecdb2fe18eae621 (image=quay.io/ceph/ceph:v18, name=elegant_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:48:46 np0005542249 systemd[1]: libpod-conmon-16bd0c5e6fc2d7e20a0426eebba84d8eac9373c7f6b5fc40aecdb2fe18eae621.scope: Deactivated successfully.
Dec  2 05:48:46 np0005542249 ceph-mgr[75372]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  2 05:48:46 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:48:46.137+0000 7f12d8dd4140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  2 05:48:46 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'diskprediction_local'
Dec  2 05:48:46 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  2 05:48:46 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  2 05:48:46 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]:  from numpy import show_config as show_numpy_config
Dec  2 05:48:46 np0005542249 ceph-mgr[75372]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  2 05:48:46 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:48:46.685+0000 7f12d8dd4140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  2 05:48:46 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'influx'
Dec  2 05:48:46 np0005542249 ceph-mgr[75372]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  2 05:48:46 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'insights'
Dec  2 05:48:46 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:48:46.924+0000 7f12d8dd4140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  2 05:48:47 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'iostat'
Dec  2 05:48:47 np0005542249 ceph-mgr[75372]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  2 05:48:47 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'k8sevents'
Dec  2 05:48:47 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:48:47.411+0000 7f12d8dd4140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  2 05:48:48 np0005542249 podman[75517]: 2025-12-02 10:48:48.225023597 +0000 UTC m=+0.076227219 container create dd297bde4b85033816584a56119822b04713d9d6d6a967c457775b2bfc9f0dfc (image=quay.io/ceph/ceph:v18, name=heuristic_jemison, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:48:48 np0005542249 systemd[1]: Started libpod-conmon-dd297bde4b85033816584a56119822b04713d9d6d6a967c457775b2bfc9f0dfc.scope.
Dec  2 05:48:48 np0005542249 podman[75517]: 2025-12-02 10:48:48.192825698 +0000 UTC m=+0.044029370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:48:48 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:48:48 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9169232a59368a9e2506f60da21560f974a4ae8d701b1ef35f958ef52bca05d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:48 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9169232a59368a9e2506f60da21560f974a4ae8d701b1ef35f958ef52bca05d4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:48 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9169232a59368a9e2506f60da21560f974a4ae8d701b1ef35f958ef52bca05d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:48 np0005542249 podman[75517]: 2025-12-02 10:48:48.32560227 +0000 UTC m=+0.176805972 container init dd297bde4b85033816584a56119822b04713d9d6d6a967c457775b2bfc9f0dfc (image=quay.io/ceph/ceph:v18, name=heuristic_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  2 05:48:48 np0005542249 podman[75517]: 2025-12-02 10:48:48.335378434 +0000 UTC m=+0.186582016 container start dd297bde4b85033816584a56119822b04713d9d6d6a967c457775b2bfc9f0dfc (image=quay.io/ceph/ceph:v18, name=heuristic_jemison, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:48:48 np0005542249 podman[75517]: 2025-12-02 10:48:48.340881803 +0000 UTC m=+0.192085425 container attach dd297bde4b85033816584a56119822b04713d9d6d6a967c457775b2bfc9f0dfc (image=quay.io/ceph/ceph:v18, name=heuristic_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  2 05:48:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  2 05:48:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2220746996' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]: 
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]: {
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:    "fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:    "health": {
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "status": "HEALTH_OK",
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "checks": {},
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "mutes": []
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:    },
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:    "election_epoch": 5,
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:    "quorum": [
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        0
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:    ],
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:    "quorum_names": [
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "compute-0"
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:    ],
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:    "quorum_age": 9,
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:    "monmap": {
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "epoch": 1,
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "min_mon_release_name": "reef",
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "num_mons": 1
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:    },
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:    "osdmap": {
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "epoch": 1,
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "num_osds": 0,
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "num_up_osds": 0,
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "osd_up_since": 0,
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "num_in_osds": 0,
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "osd_in_since": 0,
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "num_remapped_pgs": 0
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:    },
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:    "pgmap": {
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "pgs_by_state": [],
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "num_pgs": 0,
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "num_pools": 0,
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "num_objects": 0,
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "data_bytes": 0,
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "bytes_used": 0,
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "bytes_avail": 0,
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "bytes_total": 0
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:    },
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:    "fsmap": {
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "epoch": 1,
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "by_rank": [],
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "up:standby": 0
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:    },
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:    "mgrmap": {
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "available": false,
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "num_standbys": 0,
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "modules": [
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:            "iostat",
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:            "nfs",
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:            "restful"
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        ],
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "services": {}
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:    },
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:    "servicemap": {
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "epoch": 1,
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "modified": "2025-12-02T10:48:35.983755+0000",
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:        "services": {}
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:    },
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]:    "progress_events": {}
Dec  2 05:48:48 np0005542249 heuristic_jemison[75533]: }
Dec  2 05:48:48 np0005542249 systemd[1]: libpod-dd297bde4b85033816584a56119822b04713d9d6d6a967c457775b2bfc9f0dfc.scope: Deactivated successfully.
Dec  2 05:48:48 np0005542249 podman[75517]: 2025-12-02 10:48:48.788526106 +0000 UTC m=+0.639729718 container died dd297bde4b85033816584a56119822b04713d9d6d6a967c457775b2bfc9f0dfc (image=quay.io/ceph/ceph:v18, name=heuristic_jemison, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  2 05:48:48 np0005542249 systemd[1]: var-lib-containers-storage-overlay-9169232a59368a9e2506f60da21560f974a4ae8d701b1ef35f958ef52bca05d4-merged.mount: Deactivated successfully.
Dec  2 05:48:48 np0005542249 podman[75517]: 2025-12-02 10:48:48.84387638 +0000 UTC m=+0.695079972 container remove dd297bde4b85033816584a56119822b04713d9d6d6a967c457775b2bfc9f0dfc (image=quay.io/ceph/ceph:v18, name=heuristic_jemison, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec  2 05:48:48 np0005542249 systemd[1]: libpod-conmon-dd297bde4b85033816584a56119822b04713d9d6d6a967c457775b2bfc9f0dfc.scope: Deactivated successfully.
Dec  2 05:48:49 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'localpool'
Dec  2 05:48:49 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'mds_autoscaler'
Dec  2 05:48:50 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'mirroring'
Dec  2 05:48:50 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'nfs'
Dec  2 05:48:50 np0005542249 podman[75570]: 2025-12-02 10:48:50.927815618 +0000 UTC m=+0.053328150 container create 2fb10d9ec3d707a3f2fad591c2cd900084889bacc39a392711e0dde2d6faa116 (image=quay.io/ceph/ceph:v18, name=epic_rosalind, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  2 05:48:50 np0005542249 systemd[1]: Started libpod-conmon-2fb10d9ec3d707a3f2fad591c2cd900084889bacc39a392711e0dde2d6faa116.scope.
Dec  2 05:48:50 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:48:50 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d1892b1b0a8fe83da4c4c338a626b9e68bf80fb49a9f114d8ace62d27543f9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:50 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d1892b1b0a8fe83da4c4c338a626b9e68bf80fb49a9f114d8ace62d27543f9b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:50 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d1892b1b0a8fe83da4c4c338a626b9e68bf80fb49a9f114d8ace62d27543f9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:51 np0005542249 podman[75570]: 2025-12-02 10:48:51.00347204 +0000 UTC m=+0.128984602 container init 2fb10d9ec3d707a3f2fad591c2cd900084889bacc39a392711e0dde2d6faa116 (image=quay.io/ceph/ceph:v18, name=epic_rosalind, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  2 05:48:51 np0005542249 podman[75570]: 2025-12-02 10:48:50.912563657 +0000 UTC m=+0.038076239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:48:51 np0005542249 podman[75570]: 2025-12-02 10:48:51.010383827 +0000 UTC m=+0.135896359 container start 2fb10d9ec3d707a3f2fad591c2cd900084889bacc39a392711e0dde2d6faa116 (image=quay.io/ceph/ceph:v18, name=epic_rosalind, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:48:51 np0005542249 ceph-mgr[75372]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  2 05:48:51 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'orchestrator'
Dec  2 05:48:51 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:48:51.162+0000 7f12d8dd4140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  2 05:48:51 np0005542249 podman[75570]: 2025-12-02 10:48:51.287268091 +0000 UTC m=+0.412780663 container attach 2fb10d9ec3d707a3f2fad591c2cd900084889bacc39a392711e0dde2d6faa116 (image=quay.io/ceph/ceph:v18, name=epic_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  2 05:48:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  2 05:48:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3092761256' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]: 
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]: {
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:    "fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:    "health": {
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "status": "HEALTH_OK",
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "checks": {},
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "mutes": []
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:    },
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:    "election_epoch": 5,
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:    "quorum": [
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        0
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:    ],
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:    "quorum_names": [
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "compute-0"
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:    ],
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:    "quorum_age": 12,
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:    "monmap": {
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "epoch": 1,
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "min_mon_release_name": "reef",
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "num_mons": 1
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:    },
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:    "osdmap": {
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "epoch": 1,
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "num_osds": 0,
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "num_up_osds": 0,
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "osd_up_since": 0,
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "num_in_osds": 0,
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "osd_in_since": 0,
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "num_remapped_pgs": 0
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:    },
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:    "pgmap": {
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "pgs_by_state": [],
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "num_pgs": 0,
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "num_pools": 0,
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "num_objects": 0,
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "data_bytes": 0,
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "bytes_used": 0,
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "bytes_avail": 0,
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "bytes_total": 0
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:    },
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:    "fsmap": {
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "epoch": 1,
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "by_rank": [],
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "up:standby": 0
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:    },
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:    "mgrmap": {
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "available": false,
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "num_standbys": 0,
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "modules": [
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:            "iostat",
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:            "nfs",
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:            "restful"
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        ],
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "services": {}
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:    },
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:    "servicemap": {
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "epoch": 1,
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "modified": "2025-12-02T10:48:35.983755+0000",
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:        "services": {}
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:    },
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]:    "progress_events": {}
Dec  2 05:48:51 np0005542249 epic_rosalind[75585]: }
Dec  2 05:48:51 np0005542249 systemd[1]: libpod-2fb10d9ec3d707a3f2fad591c2cd900084889bacc39a392711e0dde2d6faa116.scope: Deactivated successfully.
Dec  2 05:48:51 np0005542249 podman[75570]: 2025-12-02 10:48:51.433363424 +0000 UTC m=+0.558875956 container died 2fb10d9ec3d707a3f2fad591c2cd900084889bacc39a392711e0dde2d6faa116 (image=quay.io/ceph/ceph:v18, name=epic_rosalind, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:48:51 np0005542249 systemd[1]: var-lib-containers-storage-overlay-0d1892b1b0a8fe83da4c4c338a626b9e68bf80fb49a9f114d8ace62d27543f9b-merged.mount: Deactivated successfully.
Dec  2 05:48:51 np0005542249 podman[75570]: 2025-12-02 10:48:51.491523984 +0000 UTC m=+0.617036556 container remove 2fb10d9ec3d707a3f2fad591c2cd900084889bacc39a392711e0dde2d6faa116 (image=quay.io/ceph/ceph:v18, name=epic_rosalind, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:48:51 np0005542249 systemd[1]: libpod-conmon-2fb10d9ec3d707a3f2fad591c2cd900084889bacc39a392711e0dde2d6faa116.scope: Deactivated successfully.
Dec  2 05:48:51 np0005542249 ceph-mgr[75372]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  2 05:48:51 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'osd_perf_query'
Dec  2 05:48:51 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:48:51.801+0000 7f12d8dd4140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  2 05:48:52 np0005542249 ceph-mgr[75372]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  2 05:48:52 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'osd_support'
Dec  2 05:48:52 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:48:52.064+0000 7f12d8dd4140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  2 05:48:52 np0005542249 ceph-mgr[75372]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  2 05:48:52 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'pg_autoscaler'
Dec  2 05:48:52 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:48:52.299+0000 7f12d8dd4140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  2 05:48:52 np0005542249 ceph-mgr[75372]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  2 05:48:52 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'progress'
Dec  2 05:48:52 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:48:52.564+0000 7f12d8dd4140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  2 05:48:52 np0005542249 ceph-mgr[75372]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  2 05:48:52 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'prometheus'
Dec  2 05:48:52 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:48:52.808+0000 7f12d8dd4140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  2 05:48:53 np0005542249 podman[75625]: 2025-12-02 10:48:53.559408528 +0000 UTC m=+0.042908328 container create 4b05f940ad138edcfebe14675a486934ef60bd884ba9c26b45b3333106a1f4b9 (image=quay.io/ceph/ceph:v18, name=priceless_ganguly, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  2 05:48:53 np0005542249 systemd[1]: Started libpod-conmon-4b05f940ad138edcfebe14675a486934ef60bd884ba9c26b45b3333106a1f4b9.scope.
Dec  2 05:48:53 np0005542249 podman[75625]: 2025-12-02 10:48:53.540363275 +0000 UTC m=+0.023863085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:48:53 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:48:53 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0945c85d5c490996daa38b26101540dc711051511613cbbaf88c133192444124/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:53 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0945c85d5c490996daa38b26101540dc711051511613cbbaf88c133192444124/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:53 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0945c85d5c490996daa38b26101540dc711051511613cbbaf88c133192444124/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:53 np0005542249 ceph-mgr[75372]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  2 05:48:53 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:48:53.919+0000 7f12d8dd4140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  2 05:48:53 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'rbd_support'
Dec  2 05:48:54 np0005542249 podman[75625]: 2025-12-02 10:48:54.094346518 +0000 UTC m=+0.577846388 container init 4b05f940ad138edcfebe14675a486934ef60bd884ba9c26b45b3333106a1f4b9 (image=quay.io/ceph/ceph:v18, name=priceless_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:48:54 np0005542249 podman[75625]: 2025-12-02 10:48:54.104923773 +0000 UTC m=+0.588423563 container start 4b05f940ad138edcfebe14675a486934ef60bd884ba9c26b45b3333106a1f4b9 (image=quay.io/ceph/ceph:v18, name=priceless_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  2 05:48:54 np0005542249 ceph-mgr[75372]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  2 05:48:54 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'restful'
Dec  2 05:48:54 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:48:54.218+0000 7f12d8dd4140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  2 05:48:54 np0005542249 podman[75625]: 2025-12-02 10:48:54.302450084 +0000 UTC m=+0.785949914 container attach 4b05f940ad138edcfebe14675a486934ef60bd884ba9c26b45b3333106a1f4b9 (image=quay.io/ceph/ceph:v18, name=priceless_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  2 05:48:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  2 05:48:54 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4226795221' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]: 
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]: {
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:    "fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:    "health": {
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "status": "HEALTH_OK",
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "checks": {},
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "mutes": []
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:    },
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:    "election_epoch": 5,
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:    "quorum": [
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        0
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:    ],
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:    "quorum_names": [
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "compute-0"
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:    ],
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:    "quorum_age": 15,
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:    "monmap": {
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "epoch": 1,
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "min_mon_release_name": "reef",
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "num_mons": 1
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:    },
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:    "osdmap": {
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "epoch": 1,
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "num_osds": 0,
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "num_up_osds": 0,
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "osd_up_since": 0,
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "num_in_osds": 0,
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "osd_in_since": 0,
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "num_remapped_pgs": 0
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:    },
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:    "pgmap": {
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "pgs_by_state": [],
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "num_pgs": 0,
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "num_pools": 0,
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "num_objects": 0,
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "data_bytes": 0,
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "bytes_used": 0,
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "bytes_avail": 0,
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "bytes_total": 0
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:    },
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:    "fsmap": {
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "epoch": 1,
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "by_rank": [],
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "up:standby": 0
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:    },
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:    "mgrmap": {
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "available": false,
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "num_standbys": 0,
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "modules": [
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:            "iostat",
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:            "nfs",
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:            "restful"
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        ],
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "services": {}
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:    },
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:    "servicemap": {
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "epoch": 1,
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "modified": "2025-12-02T10:48:35.983755+0000",
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:        "services": {}
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:    },
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]:    "progress_events": {}
Dec  2 05:48:54 np0005542249 priceless_ganguly[75642]: }
Dec  2 05:48:54 np0005542249 systemd[1]: libpod-4b05f940ad138edcfebe14675a486934ef60bd884ba9c26b45b3333106a1f4b9.scope: Deactivated successfully.
Dec  2 05:48:54 np0005542249 podman[75668]: 2025-12-02 10:48:54.579287817 +0000 UTC m=+0.028218883 container died 4b05f940ad138edcfebe14675a486934ef60bd884ba9c26b45b3333106a1f4b9 (image=quay.io/ceph/ceph:v18, name=priceless_ganguly, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:48:54 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'rgw'
Dec  2 05:48:55 np0005542249 systemd[1]: var-lib-containers-storage-overlay-0945c85d5c490996daa38b26101540dc711051511613cbbaf88c133192444124-merged.mount: Deactivated successfully.
Dec  2 05:48:55 np0005542249 podman[75668]: 2025-12-02 10:48:55.083840565 +0000 UTC m=+0.532771541 container remove 4b05f940ad138edcfebe14675a486934ef60bd884ba9c26b45b3333106a1f4b9 (image=quay.io/ceph/ceph:v18, name=priceless_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  2 05:48:55 np0005542249 systemd[1]: libpod-conmon-4b05f940ad138edcfebe14675a486934ef60bd884ba9c26b45b3333106a1f4b9.scope: Deactivated successfully.
Dec  2 05:48:55 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:48:55.645+0000 7f12d8dd4140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  2 05:48:55 np0005542249 ceph-mgr[75372]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  2 05:48:55 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'rook'
Dec  2 05:48:57 np0005542249 podman[75683]: 2025-12-02 10:48:57.215112376 +0000 UTC m=+0.095443948 container create fe3946f4161126be6ada87d9ab5c3573b7ab633903892948dbf242173e205ca2 (image=quay.io/ceph/ceph:v18, name=elated_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Dec  2 05:48:57 np0005542249 podman[75683]: 2025-12-02 10:48:57.153184201 +0000 UTC m=+0.033515793 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:48:57 np0005542249 systemd[1]: Started libpod-conmon-fe3946f4161126be6ada87d9ab5c3573b7ab633903892948dbf242173e205ca2.scope.
Dec  2 05:48:57 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:48:57 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5dd415d00f5a529802db84cdc9d3f076b6f1f9aec1e3634ed6f439fe2d4fb6f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:57 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5dd415d00f5a529802db84cdc9d3f076b6f1f9aec1e3634ed6f439fe2d4fb6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:57 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5dd415d00f5a529802db84cdc9d3f076b6f1f9aec1e3634ed6f439fe2d4fb6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:48:57 np0005542249 podman[75683]: 2025-12-02 10:48:57.284137694 +0000 UTC m=+0.164469286 container init fe3946f4161126be6ada87d9ab5c3573b7ab633903892948dbf242173e205ca2 (image=quay.io/ceph/ceph:v18, name=elated_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:48:57 np0005542249 podman[75683]: 2025-12-02 10:48:57.29219457 +0000 UTC m=+0.172526142 container start fe3946f4161126be6ada87d9ab5c3573b7ab633903892948dbf242173e205ca2 (image=quay.io/ceph/ceph:v18, name=elated_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  2 05:48:57 np0005542249 podman[75683]: 2025-12-02 10:48:57.298159891 +0000 UTC m=+0.178491493 container attach fe3946f4161126be6ada87d9ab5c3573b7ab633903892948dbf242173e205ca2 (image=quay.io/ceph/ceph:v18, name=elated_dirac, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:48:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  2 05:48:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2572182526' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  2 05:48:57 np0005542249 elated_dirac[75699]: 
Dec  2 05:48:57 np0005542249 elated_dirac[75699]: {
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:    "fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:    "health": {
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "status": "HEALTH_OK",
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "checks": {},
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "mutes": []
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:    },
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:    "election_epoch": 5,
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:    "quorum": [
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        0
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:    ],
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:    "quorum_names": [
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "compute-0"
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:    ],
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:    "quorum_age": 18,
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:    "monmap": {
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "epoch": 1,
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "min_mon_release_name": "reef",
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "num_mons": 1
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:    },
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:    "osdmap": {
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "epoch": 1,
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "num_osds": 0,
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "num_up_osds": 0,
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "osd_up_since": 0,
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "num_in_osds": 0,
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "osd_in_since": 0,
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "num_remapped_pgs": 0
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:    },
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:    "pgmap": {
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "pgs_by_state": [],
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "num_pgs": 0,
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "num_pools": 0,
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "num_objects": 0,
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "data_bytes": 0,
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "bytes_used": 0,
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "bytes_avail": 0,
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "bytes_total": 0
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:    },
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:    "fsmap": {
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "epoch": 1,
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "by_rank": [],
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "up:standby": 0
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:    },
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:    "mgrmap": {
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "available": false,
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "num_standbys": 0,
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "modules": [
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:            "iostat",
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:            "nfs",
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:            "restful"
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        ],
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "services": {}
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:    },
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:    "servicemap": {
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "epoch": 1,
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "modified": "2025-12-02T10:48:35.983755+0000",
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:        "services": {}
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:    },
Dec  2 05:48:57 np0005542249 elated_dirac[75699]:    "progress_events": {}
Dec  2 05:48:57 np0005542249 elated_dirac[75699]: }
Dec  2 05:48:57 np0005542249 systemd[1]: libpod-fe3946f4161126be6ada87d9ab5c3573b7ab633903892948dbf242173e205ca2.scope: Deactivated successfully.
Dec  2 05:48:57 np0005542249 podman[75683]: 2025-12-02 10:48:57.680695243 +0000 UTC m=+0.561026825 container died fe3946f4161126be6ada87d9ab5c3573b7ab633903892948dbf242173e205ca2 (image=quay.io/ceph/ceph:v18, name=elated_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:48:57 np0005542249 systemd[1]: var-lib-containers-storage-overlay-d5dd415d00f5a529802db84cdc9d3f076b6f1f9aec1e3634ed6f439fe2d4fb6f-merged.mount: Deactivated successfully.
Dec  2 05:48:57 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:48:57.771+0000 7f12d8dd4140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  2 05:48:57 np0005542249 ceph-mgr[75372]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  2 05:48:57 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'selftest'
Dec  2 05:48:57 np0005542249 podman[75683]: 2025-12-02 10:48:57.775667719 +0000 UTC m=+0.655999291 container remove fe3946f4161126be6ada87d9ab5c3573b7ab633903892948dbf242173e205ca2 (image=quay.io/ceph/ceph:v18, name=elated_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  2 05:48:57 np0005542249 systemd[1]: libpod-conmon-fe3946f4161126be6ada87d9ab5c3573b7ab633903892948dbf242173e205ca2.scope: Deactivated successfully.
Dec  2 05:48:58 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:48:58.018+0000 7f12d8dd4140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  2 05:48:58 np0005542249 ceph-mgr[75372]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  2 05:48:58 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'snap_schedule'
Dec  2 05:48:58 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:48:58.275+0000 7f12d8dd4140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  2 05:48:58 np0005542249 ceph-mgr[75372]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  2 05:48:58 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'stats'
Dec  2 05:48:58 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'status'
Dec  2 05:48:58 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:48:58.784+0000 7f12d8dd4140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  2 05:48:58 np0005542249 ceph-mgr[75372]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  2 05:48:58 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'telegraf'
Dec  2 05:48:59 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:48:59.021+0000 7f12d8dd4140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  2 05:48:59 np0005542249 ceph-mgr[75372]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  2 05:48:59 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'telemetry'
Dec  2 05:48:59 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:48:59.648+0000 7f12d8dd4140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  2 05:48:59 np0005542249 ceph-mgr[75372]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  2 05:48:59 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'test_orchestrator'
Dec  2 05:48:59 np0005542249 podman[75738]: 2025-12-02 10:48:59.838724064 +0000 UTC m=+0.031248101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:00 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:00.361+0000 7f12d8dd4140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  2 05:49:00 np0005542249 ceph-mgr[75372]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  2 05:49:00 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'volumes'
Dec  2 05:49:00 np0005542249 podman[75738]: 2025-12-02 10:49:00.957777132 +0000 UTC m=+1.150301079 container create 56e1cf51f18b9a0c2f7f9dca078f8bcb96315caef09a3e0699a999e341be8245 (image=quay.io/ceph/ceph:v18, name=youthful_driscoll, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  2 05:49:01 np0005542249 systemd[1]: Started libpod-conmon-56e1cf51f18b9a0c2f7f9dca078f8bcb96315caef09a3e0699a999e341be8245.scope.
Dec  2 05:49:01 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:01 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e298da8ec7ea179d76babf97dd0302630ea60884bce7a43e12781792e62b2f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:01 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e298da8ec7ea179d76babf97dd0302630ea60884bce7a43e12781792e62b2f5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:01 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e298da8ec7ea179d76babf97dd0302630ea60884bce7a43e12781792e62b2f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:01 np0005542249 podman[75738]: 2025-12-02 10:49:01.063523767 +0000 UTC m=+1.256047754 container init 56e1cf51f18b9a0c2f7f9dca078f8bcb96315caef09a3e0699a999e341be8245 (image=quay.io/ceph/ceph:v18, name=youthful_driscoll, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:49:01 np0005542249 podman[75738]: 2025-12-02 10:49:01.073049424 +0000 UTC m=+1.265573361 container start 56e1cf51f18b9a0c2f7f9dca078f8bcb96315caef09a3e0699a999e341be8245 (image=quay.io/ceph/ceph:v18, name=youthful_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Dec  2 05:49:01 np0005542249 podman[75738]: 2025-12-02 10:49:01.07698908 +0000 UTC m=+1.269513067 container attach 56e1cf51f18b9a0c2f7f9dca078f8bcb96315caef09a3e0699a999e341be8245 (image=quay.io/ceph/ceph:v18, name=youthful_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Dec  2 05:49:01 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:01.077+0000 7f12d8dd4140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'zabbix'
Dec  2 05:49:01 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:01.338+0000 7f12d8dd4140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: ms_deliver_dispatch: unhandled message 0x56117fd931e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ntxcvs
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: mgr handle_mgr_map Activating!
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: mgr handle_mgr_map I am now activating
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.ntxcvs(active, starting, since 0.0146505s)
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2308651155' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).mds e1 all = 1
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2308651155' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2308651155' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2308651155' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ntxcvs", "id": "compute-0.ntxcvs"} v 0) v1
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2308651155' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ntxcvs", "id": "compute-0.ntxcvs"}]: dispatch
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: balancer
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: crash
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [balancer INFO root] Starting
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: log_channel(cluster) log [INF] : Manager daemon compute-0.ntxcvs is now available
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: devicehealth
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [devicehealth INFO root] Starting
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: iostat
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: nfs
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_10:49:01
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [balancer INFO root] No pools available
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: orchestrator
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: pg_autoscaler
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: progress
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [progress INFO root] Loading...
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [progress INFO root] No stored events to load
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [progress INFO root] Loaded [] historic events
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [progress INFO root] Loaded OSDMap, ready.
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] recovery thread starting
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] starting setup
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: rbd_support
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: restful
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: status
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [restful INFO root] server_addr: :: server_port: 8003
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ntxcvs/mirror_snapshot_schedule"} v 0) v1
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2308651155' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ntxcvs/mirror_snapshot_schedule"}]: dispatch
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: telemetry
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [restful WARNING root] server not running: no certificate configured
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] PerfHandler: starting
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TaskHandler: starting
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ntxcvs/trash_purge_schedule"} v 0) v1
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2308651155' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ntxcvs/trash_purge_schedule"}]: dispatch
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2308651155' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] setup complete
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: Activating manager daemon compute-0.ntxcvs
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: Manager daemon compute-0.ntxcvs is now available
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: from='mgr.14102 192.168.122.100:0/2308651155' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ntxcvs/mirror_snapshot_schedule"}]: dispatch
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: from='mgr.14102 192.168.122.100:0/2308651155' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ntxcvs/trash_purge_schedule"}]: dispatch
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Dec  2 05:49:01 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: volumes
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2308651155' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2308651155' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]: 
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]: {
Dec  2 05:49:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/232233124' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:    "fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:    "health": {
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "status": "HEALTH_OK",
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "checks": {},
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "mutes": []
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:    },
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:    "election_epoch": 5,
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:    "quorum": [
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        0
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:    ],
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:    "quorum_names": [
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "compute-0"
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:    ],
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:    "quorum_age": 22,
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:    "monmap": {
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "epoch": 1,
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "min_mon_release_name": "reef",
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "num_mons": 1
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:    },
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:    "osdmap": {
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "epoch": 1,
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "num_osds": 0,
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "num_up_osds": 0,
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "osd_up_since": 0,
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "num_in_osds": 0,
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "osd_in_since": 0,
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "num_remapped_pgs": 0
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:    },
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:    "pgmap": {
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "pgs_by_state": [],
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "num_pgs": 0,
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "num_pools": 0,
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "num_objects": 0,
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "data_bytes": 0,
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "bytes_used": 0,
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "bytes_avail": 0,
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "bytes_total": 0
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:    },
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:    "fsmap": {
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "epoch": 1,
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "by_rank": [],
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "up:standby": 0
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:    },
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:    "mgrmap": {
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "available": false,
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "num_standbys": 0,
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "modules": [
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:            "iostat",
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:            "nfs",
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:            "restful"
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        ],
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "services": {}
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:    },
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:    "servicemap": {
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "epoch": 1,
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "modified": "2025-12-02T10:48:35.983755+0000",
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:        "services": {}
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:    },
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]:    "progress_events": {}
Dec  2 05:49:01 np0005542249 youthful_driscoll[75755]: }
Dec  2 05:49:01 np0005542249 systemd[1]: libpod-56e1cf51f18b9a0c2f7f9dca078f8bcb96315caef09a3e0699a999e341be8245.scope: Deactivated successfully.
Dec  2 05:49:01 np0005542249 podman[75738]: 2025-12-02 10:49:01.497185695 +0000 UTC m=+1.689709632 container died 56e1cf51f18b9a0c2f7f9dca078f8bcb96315caef09a3e0699a999e341be8245 (image=quay.io/ceph/ceph:v18, name=youthful_driscoll, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:49:01 np0005542249 systemd[1]: var-lib-containers-storage-overlay-3e298da8ec7ea179d76babf97dd0302630ea60884bce7a43e12781792e62b2f5-merged.mount: Deactivated successfully.
Dec  2 05:49:01 np0005542249 podman[75738]: 2025-12-02 10:49:01.545509385 +0000 UTC m=+1.738033332 container remove 56e1cf51f18b9a0c2f7f9dca078f8bcb96315caef09a3e0699a999e341be8245 (image=quay.io/ceph/ceph:v18, name=youthful_driscoll, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  2 05:49:01 np0005542249 systemd[1]: libpod-conmon-56e1cf51f18b9a0c2f7f9dca078f8bcb96315caef09a3e0699a999e341be8245.scope: Deactivated successfully.
Dec  2 05:49:02 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.ntxcvs(active, since 1.03562s)
Dec  2 05:49:02 np0005542249 ceph-mon[75081]: from='mgr.14102 192.168.122.100:0/2308651155' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:02 np0005542249 ceph-mon[75081]: from='mgr.14102 192.168.122.100:0/2308651155' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:02 np0005542249 ceph-mon[75081]: from='mgr.14102 192.168.122.100:0/2308651155' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:03 np0005542249 ceph-mgr[75372]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  2 05:49:03 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.ntxcvs(active, since 2s)
Dec  2 05:49:03 np0005542249 podman[75872]: 2025-12-02 10:49:03.65498747 +0000 UTC m=+0.073521590 container create 0f6ab01af2fcf03541b05c85525aa945212bc07fc5b4fcefa27511cacbabef44 (image=quay.io/ceph/ceph:v18, name=trusting_yalow, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  2 05:49:03 np0005542249 systemd[1]: Started libpod-conmon-0f6ab01af2fcf03541b05c85525aa945212bc07fc5b4fcefa27511cacbabef44.scope.
Dec  2 05:49:03 np0005542249 podman[75872]: 2025-12-02 10:49:03.625581849 +0000 UTC m=+0.044116069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:03 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:03 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/209ff286ae3fcb8c1e9380f0531dbaba5769c4319e5ec64f0d28380cc38529f4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:03 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/209ff286ae3fcb8c1e9380f0531dbaba5769c4319e5ec64f0d28380cc38529f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:03 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/209ff286ae3fcb8c1e9380f0531dbaba5769c4319e5ec64f0d28380cc38529f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:03 np0005542249 podman[75872]: 2025-12-02 10:49:03.751367903 +0000 UTC m=+0.169902083 container init 0f6ab01af2fcf03541b05c85525aa945212bc07fc5b4fcefa27511cacbabef44 (image=quay.io/ceph/ceph:v18, name=trusting_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:49:03 np0005542249 podman[75872]: 2025-12-02 10:49:03.758561767 +0000 UTC m=+0.177095907 container start 0f6ab01af2fcf03541b05c85525aa945212bc07fc5b4fcefa27511cacbabef44 (image=quay.io/ceph/ceph:v18, name=trusting_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:49:03 np0005542249 podman[75872]: 2025-12-02 10:49:03.762278366 +0000 UTC m=+0.180812606 container attach 0f6ab01af2fcf03541b05c85525aa945212bc07fc5b4fcefa27511cacbabef44 (image=quay.io/ceph/ceph:v18, name=trusting_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:49:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  2 05:49:04 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1082453075' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]: 
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]: {
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:    "fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:    "health": {
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "status": "HEALTH_OK",
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "checks": {},
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "mutes": []
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:    },
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:    "election_epoch": 5,
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:    "quorum": [
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        0
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:    ],
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:    "quorum_names": [
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "compute-0"
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:    ],
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:    "quorum_age": 25,
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:    "monmap": {
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "epoch": 1,
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "min_mon_release_name": "reef",
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "num_mons": 1
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:    },
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:    "osdmap": {
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "epoch": 1,
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "num_osds": 0,
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "num_up_osds": 0,
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "osd_up_since": 0,
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "num_in_osds": 0,
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "osd_in_since": 0,
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "num_remapped_pgs": 0
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:    },
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:    "pgmap": {
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "pgs_by_state": [],
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "num_pgs": 0,
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "num_pools": 0,
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "num_objects": 0,
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "data_bytes": 0,
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "bytes_used": 0,
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "bytes_avail": 0,
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "bytes_total": 0
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:    },
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:    "fsmap": {
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "epoch": 1,
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "by_rank": [],
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "up:standby": 0
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:    },
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:    "mgrmap": {
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "available": true,
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "num_standbys": 0,
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "modules": [
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:            "iostat",
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:            "nfs",
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:            "restful"
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        ],
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "services": {}
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:    },
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:    "servicemap": {
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "epoch": 1,
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "modified": "2025-12-02T10:48:35.983755+0000",
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:        "services": {}
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:    },
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]:    "progress_events": {}
Dec  2 05:49:04 np0005542249 trusting_yalow[75888]: }
Dec  2 05:49:04 np0005542249 systemd[1]: libpod-0f6ab01af2fcf03541b05c85525aa945212bc07fc5b4fcefa27511cacbabef44.scope: Deactivated successfully.
Dec  2 05:49:04 np0005542249 podman[75872]: 2025-12-02 10:49:04.375820544 +0000 UTC m=+0.794354704 container died 0f6ab01af2fcf03541b05c85525aa945212bc07fc5b4fcefa27511cacbabef44 (image=quay.io/ceph/ceph:v18, name=trusting_yalow, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:49:04 np0005542249 systemd[1]: var-lib-containers-storage-overlay-209ff286ae3fcb8c1e9380f0531dbaba5769c4319e5ec64f0d28380cc38529f4-merged.mount: Deactivated successfully.
Dec  2 05:49:04 np0005542249 podman[75872]: 2025-12-02 10:49:04.500085196 +0000 UTC m=+0.918619326 container remove 0f6ab01af2fcf03541b05c85525aa945212bc07fc5b4fcefa27511cacbabef44 (image=quay.io/ceph/ceph:v18, name=trusting_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  2 05:49:04 np0005542249 systemd[1]: libpod-conmon-0f6ab01af2fcf03541b05c85525aa945212bc07fc5b4fcefa27511cacbabef44.scope: Deactivated successfully.
Dec  2 05:49:04 np0005542249 podman[75928]: 2025-12-02 10:49:04.605437292 +0000 UTC m=+0.069153803 container create 3c1ec262b8a3c82162284b73f3734e2a7dcfc715f94180d883f4a8eeebc1c67d (image=quay.io/ceph/ceph:v18, name=cool_diffie, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:49:04 np0005542249 systemd[1]: Started libpod-conmon-3c1ec262b8a3c82162284b73f3734e2a7dcfc715f94180d883f4a8eeebc1c67d.scope.
Dec  2 05:49:04 np0005542249 podman[75928]: 2025-12-02 10:49:04.574551101 +0000 UTC m=+0.038267662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:04 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:04 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387267a73ccc4df68c8280da88ef64e6cda0fcb6632196e37eb27d5e236b9054/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:04 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387267a73ccc4df68c8280da88ef64e6cda0fcb6632196e37eb27d5e236b9054/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:04 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387267a73ccc4df68c8280da88ef64e6cda0fcb6632196e37eb27d5e236b9054/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:04 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387267a73ccc4df68c8280da88ef64e6cda0fcb6632196e37eb27d5e236b9054/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:04 np0005542249 podman[75928]: 2025-12-02 10:49:04.701309711 +0000 UTC m=+0.165026232 container init 3c1ec262b8a3c82162284b73f3734e2a7dcfc715f94180d883f4a8eeebc1c67d (image=quay.io/ceph/ceph:v18, name=cool_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:49:04 np0005542249 podman[75928]: 2025-12-02 10:49:04.711498974 +0000 UTC m=+0.175215485 container start 3c1ec262b8a3c82162284b73f3734e2a7dcfc715f94180d883f4a8eeebc1c67d (image=quay.io/ceph/ceph:v18, name=cool_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:49:04 np0005542249 podman[75928]: 2025-12-02 10:49:04.715274316 +0000 UTC m=+0.178990887 container attach 3c1ec262b8a3c82162284b73f3734e2a7dcfc715f94180d883f4a8eeebc1c67d (image=quay.io/ceph/ceph:v18, name=cool_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:49:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Dec  2 05:49:05 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1881080358' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  2 05:49:05 np0005542249 systemd[1]: libpod-3c1ec262b8a3c82162284b73f3734e2a7dcfc715f94180d883f4a8eeebc1c67d.scope: Deactivated successfully.
Dec  2 05:49:05 np0005542249 podman[75928]: 2025-12-02 10:49:05.257418282 +0000 UTC m=+0.721134823 container died 3c1ec262b8a3c82162284b73f3734e2a7dcfc715f94180d883f4a8eeebc1c67d (image=quay.io/ceph/ceph:v18, name=cool_diffie, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 05:49:05 np0005542249 systemd[1]: var-lib-containers-storage-overlay-387267a73ccc4df68c8280da88ef64e6cda0fcb6632196e37eb27d5e236b9054-merged.mount: Deactivated successfully.
Dec  2 05:49:05 np0005542249 podman[75928]: 2025-12-02 10:49:05.325132795 +0000 UTC m=+0.788849276 container remove 3c1ec262b8a3c82162284b73f3734e2a7dcfc715f94180d883f4a8eeebc1c67d (image=quay.io/ceph/ceph:v18, name=cool_diffie, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:49:05 np0005542249 systemd[1]: libpod-conmon-3c1ec262b8a3c82162284b73f3734e2a7dcfc715f94180d883f4a8eeebc1c67d.scope: Deactivated successfully.
Dec  2 05:49:05 np0005542249 ceph-mgr[75372]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  2 05:49:05 np0005542249 podman[75982]: 2025-12-02 10:49:05.391926151 +0000 UTC m=+0.046625615 container create cdc00722bbc335b2bbbeeff8a494e5628ea360af3106b331bc0d5e4f5507ed71 (image=quay.io/ceph/ceph:v18, name=sad_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:49:05 np0005542249 systemd[1]: Started libpod-conmon-cdc00722bbc335b2bbbeeff8a494e5628ea360af3106b331bc0d5e4f5507ed71.scope.
Dec  2 05:49:05 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:05 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e27bd03ce4ed1ece41d12b01a88c157aa70a032a6a22b28461b5e2f0354e50e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:05 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e27bd03ce4ed1ece41d12b01a88c157aa70a032a6a22b28461b5e2f0354e50e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:05 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e27bd03ce4ed1ece41d12b01a88c157aa70a032a6a22b28461b5e2f0354e50e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:05 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/1881080358' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  2 05:49:05 np0005542249 podman[75982]: 2025-12-02 10:49:05.371321088 +0000 UTC m=+0.026020582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:05 np0005542249 podman[75982]: 2025-12-02 10:49:05.47587357 +0000 UTC m=+0.130573054 container init cdc00722bbc335b2bbbeeff8a494e5628ea360af3106b331bc0d5e4f5507ed71 (image=quay.io/ceph/ceph:v18, name=sad_poincare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:49:05 np0005542249 podman[75982]: 2025-12-02 10:49:05.482494768 +0000 UTC m=+0.137194222 container start cdc00722bbc335b2bbbeeff8a494e5628ea360af3106b331bc0d5e4f5507ed71 (image=quay.io/ceph/ceph:v18, name=sad_poincare, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Dec  2 05:49:05 np0005542249 podman[75982]: 2025-12-02 10:49:05.486559518 +0000 UTC m=+0.141259052 container attach cdc00722bbc335b2bbbeeff8a494e5628ea360af3106b331bc0d5e4f5507ed71 (image=quay.io/ceph/ceph:v18, name=sad_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:49:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Dec  2 05:49:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3501605378' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec  2 05:49:06 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/3501605378' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec  2 05:49:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3501605378' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec  2 05:49:06 np0005542249 ceph-mgr[75372]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  2 05:49:06 np0005542249 ceph-mgr[75372]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec  2 05:49:06 np0005542249 ceph-mgr[75372]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec  2 05:49:06 np0005542249 ceph-mgr[75372]: mgr respawn  1: '-n'
Dec  2 05:49:06 np0005542249 ceph-mgr[75372]: mgr respawn  2: 'mgr.compute-0.ntxcvs'
Dec  2 05:49:06 np0005542249 ceph-mgr[75372]: mgr respawn  3: '-f'
Dec  2 05:49:06 np0005542249 ceph-mgr[75372]: mgr respawn  4: '--setuser'
Dec  2 05:49:06 np0005542249 ceph-mgr[75372]: mgr respawn  5: 'ceph'
Dec  2 05:49:06 np0005542249 ceph-mgr[75372]: mgr respawn  6: '--setgroup'
Dec  2 05:49:06 np0005542249 ceph-mgr[75372]: mgr respawn  7: 'ceph'
Dec  2 05:49:06 np0005542249 ceph-mgr[75372]: mgr respawn  8: '--default-log-to-file=false'
Dec  2 05:49:06 np0005542249 ceph-mgr[75372]: mgr respawn  9: '--default-log-to-journald=true'
Dec  2 05:49:06 np0005542249 ceph-mgr[75372]: mgr respawn  10: '--default-log-to-stderr=false'
Dec  2 05:49:06 np0005542249 ceph-mgr[75372]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec  2 05:49:06 np0005542249 ceph-mgr[75372]: mgr respawn  exe_path /proc/self/exe
Dec  2 05:49:06 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.ntxcvs(active, since 5s)
Dec  2 05:49:06 np0005542249 systemd[1]: libpod-cdc00722bbc335b2bbbeeff8a494e5628ea360af3106b331bc0d5e4f5507ed71.scope: Deactivated successfully.
Dec  2 05:49:06 np0005542249 podman[75982]: 2025-12-02 10:49:06.50619129 +0000 UTC m=+1.160890814 container died cdc00722bbc335b2bbbeeff8a494e5628ea360af3106b331bc0d5e4f5507ed71 (image=quay.io/ceph/ceph:v18, name=sad_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  2 05:49:06 np0005542249 systemd[1]: var-lib-containers-storage-overlay-9e27bd03ce4ed1ece41d12b01a88c157aa70a032a6a22b28461b5e2f0354e50e-merged.mount: Deactivated successfully.
Dec  2 05:49:06 np0005542249 podman[75982]: 2025-12-02 10:49:06.555554688 +0000 UTC m=+1.210254142 container remove cdc00722bbc335b2bbbeeff8a494e5628ea360af3106b331bc0d5e4f5507ed71 (image=quay.io/ceph/ceph:v18, name=sad_poincare, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  2 05:49:06 np0005542249 systemd[1]: libpod-conmon-cdc00722bbc335b2bbbeeff8a494e5628ea360af3106b331bc0d5e4f5507ed71.scope: Deactivated successfully.
Dec  2 05:49:06 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: ignoring --setuser ceph since I am not root
Dec  2 05:49:06 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: ignoring --setgroup ceph since I am not root
Dec  2 05:49:06 np0005542249 ceph-mgr[75372]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Dec  2 05:49:06 np0005542249 ceph-mgr[75372]: pidfile_write: ignore empty --pid-file
Dec  2 05:49:06 np0005542249 podman[76035]: 2025-12-02 10:49:06.604341301 +0000 UTC m=+0.026373041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:06 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'alerts'
Dec  2 05:49:06 np0005542249 podman[76035]: 2025-12-02 10:49:06.824492025 +0000 UTC m=+0.246523715 container create 467a683596fa7956c803ee152a5695e8c21ce62354851810e9d3691d8866b3ce (image=quay.io/ceph/ceph:v18, name=clever_jones, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:49:07 np0005542249 ceph-mgr[75372]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  2 05:49:07 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'balancer'
Dec  2 05:49:07 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:07.013+0000 7fc31b61a140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  2 05:49:07 np0005542249 systemd[1]: Started libpod-conmon-467a683596fa7956c803ee152a5695e8c21ce62354851810e9d3691d8866b3ce.scope.
Dec  2 05:49:07 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:07 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b7d22461a9d0e22cd681af80bf379ddc2ea4af373a356f218368ab96b1c0080/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:07 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b7d22461a9d0e22cd681af80bf379ddc2ea4af373a356f218368ab96b1c0080/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:07 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b7d22461a9d0e22cd681af80bf379ddc2ea4af373a356f218368ab96b1c0080/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:07 np0005542249 ceph-mgr[75372]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  2 05:49:07 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'cephadm'
Dec  2 05:49:07 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:07.268+0000 7fc31b61a140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  2 05:49:07 np0005542249 podman[76035]: 2025-12-02 10:49:07.419118702 +0000 UTC m=+0.841150442 container init 467a683596fa7956c803ee152a5695e8c21ce62354851810e9d3691d8866b3ce (image=quay.io/ceph/ceph:v18, name=clever_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:49:07 np0005542249 podman[76035]: 2025-12-02 10:49:07.425253768 +0000 UTC m=+0.847285478 container start 467a683596fa7956c803ee152a5695e8c21ce62354851810e9d3691d8866b3ce (image=quay.io/ceph/ceph:v18, name=clever_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  2 05:49:07 np0005542249 podman[76035]: 2025-12-02 10:49:07.459715065 +0000 UTC m=+0.881746775 container attach 467a683596fa7956c803ee152a5695e8c21ce62354851810e9d3691d8866b3ce (image=quay.io/ceph/ceph:v18, name=clever_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  2 05:49:07 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/3501605378' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec  2 05:49:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Dec  2 05:49:08 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2013853599' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec  2 05:49:08 np0005542249 clever_jones[76076]: {
Dec  2 05:49:08 np0005542249 clever_jones[76076]:    "epoch": 5,
Dec  2 05:49:08 np0005542249 clever_jones[76076]:    "available": true,
Dec  2 05:49:08 np0005542249 clever_jones[76076]:    "active_name": "compute-0.ntxcvs",
Dec  2 05:49:08 np0005542249 clever_jones[76076]:    "num_standby": 0
Dec  2 05:49:08 np0005542249 clever_jones[76076]: }
Dec  2 05:49:08 np0005542249 systemd[1]: libpod-467a683596fa7956c803ee152a5695e8c21ce62354851810e9d3691d8866b3ce.scope: Deactivated successfully.
Dec  2 05:49:08 np0005542249 podman[76035]: 2025-12-02 10:49:08.042068523 +0000 UTC m=+1.464100213 container died 467a683596fa7956c803ee152a5695e8c21ce62354851810e9d3691d8866b3ce (image=quay.io/ceph/ceph:v18, name=clever_jones, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:49:08 np0005542249 systemd[1]: var-lib-containers-storage-overlay-1b7d22461a9d0e22cd681af80bf379ddc2ea4af373a356f218368ab96b1c0080-merged.mount: Deactivated successfully.
Dec  2 05:49:08 np0005542249 podman[76035]: 2025-12-02 10:49:08.110530485 +0000 UTC m=+1.532562185 container remove 467a683596fa7956c803ee152a5695e8c21ce62354851810e9d3691d8866b3ce (image=quay.io/ceph/ceph:v18, name=clever_jones, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  2 05:49:08 np0005542249 systemd[1]: libpod-conmon-467a683596fa7956c803ee152a5695e8c21ce62354851810e9d3691d8866b3ce.scope: Deactivated successfully.
Dec  2 05:49:08 np0005542249 podman[76118]: 2025-12-02 10:49:08.176200062 +0000 UTC m=+0.047756436 container create e660fbc9aa31144b208be9044e7614b49772bde7c7f29e51629512f52cc4a07c (image=quay.io/ceph/ceph:v18, name=charming_moore, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  2 05:49:08 np0005542249 systemd[1]: Started libpod-conmon-e660fbc9aa31144b208be9044e7614b49772bde7c7f29e51629512f52cc4a07c.scope.
Dec  2 05:49:08 np0005542249 podman[76118]: 2025-12-02 10:49:08.152464273 +0000 UTC m=+0.024020667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:08 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:08 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ab27d451bbc30b51f927d9e14c2eeb405e3212a0039f6f002864311241ce18/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:08 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ab27d451bbc30b51f927d9e14c2eeb405e3212a0039f6f002864311241ce18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:08 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ab27d451bbc30b51f927d9e14c2eeb405e3212a0039f6f002864311241ce18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:08 np0005542249 podman[76118]: 2025-12-02 10:49:08.376595623 +0000 UTC m=+0.248152057 container init e660fbc9aa31144b208be9044e7614b49772bde7c7f29e51629512f52cc4a07c (image=quay.io/ceph/ceph:v18, name=charming_moore, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  2 05:49:08 np0005542249 podman[76118]: 2025-12-02 10:49:08.382555613 +0000 UTC m=+0.254112007 container start e660fbc9aa31144b208be9044e7614b49772bde7c7f29e51629512f52cc4a07c (image=quay.io/ceph/ceph:v18, name=charming_moore, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  2 05:49:08 np0005542249 podman[76118]: 2025-12-02 10:49:08.387216809 +0000 UTC m=+0.258773193 container attach e660fbc9aa31144b208be9044e7614b49772bde7c7f29e51629512f52cc4a07c (image=quay.io/ceph/ceph:v18, name=charming_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  2 05:49:09 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'crash'
Dec  2 05:49:09 np0005542249 ceph-mgr[75372]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  2 05:49:09 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'dashboard'
Dec  2 05:49:09 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:09.433+0000 7fc31b61a140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  2 05:49:10 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'devicehealth'
Dec  2 05:49:11 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:11.063+0000 7fc31b61a140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  2 05:49:11 np0005542249 ceph-mgr[75372]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  2 05:49:11 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'diskprediction_local'
Dec  2 05:49:11 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  2 05:49:11 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  2 05:49:11 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]:  from numpy import show_config as show_numpy_config
Dec  2 05:49:11 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:11.578+0000 7fc31b61a140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  2 05:49:11 np0005542249 ceph-mgr[75372]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  2 05:49:11 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'influx'
Dec  2 05:49:11 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:11.809+0000 7fc31b61a140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  2 05:49:11 np0005542249 ceph-mgr[75372]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  2 05:49:11 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'insights'
Dec  2 05:49:12 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'iostat'
Dec  2 05:49:12 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:12.312+0000 7fc31b61a140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  2 05:49:12 np0005542249 ceph-mgr[75372]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  2 05:49:12 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'k8sevents'
Dec  2 05:49:14 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'localpool'
Dec  2 05:49:14 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'mds_autoscaler'
Dec  2 05:49:14 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'mirroring'
Dec  2 05:49:15 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'nfs'
Dec  2 05:49:15 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:15.954+0000 7fc31b61a140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  2 05:49:15 np0005542249 ceph-mgr[75372]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  2 05:49:15 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'orchestrator'
Dec  2 05:49:16 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:16.655+0000 7fc31b61a140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  2 05:49:16 np0005542249 ceph-mgr[75372]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  2 05:49:16 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'osd_perf_query'
Dec  2 05:49:16 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:16.899+0000 7fc31b61a140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  2 05:49:16 np0005542249 ceph-mgr[75372]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  2 05:49:16 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'osd_support'
Dec  2 05:49:17 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:17.126+0000 7fc31b61a140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  2 05:49:17 np0005542249 ceph-mgr[75372]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  2 05:49:17 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'pg_autoscaler'
Dec  2 05:49:17 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:17.394+0000 7fc31b61a140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  2 05:49:17 np0005542249 ceph-mgr[75372]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  2 05:49:17 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'progress'
Dec  2 05:49:17 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:17.644+0000 7fc31b61a140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  2 05:49:17 np0005542249 ceph-mgr[75372]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  2 05:49:17 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'prometheus'
Dec  2 05:49:18 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:18.700+0000 7fc31b61a140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  2 05:49:18 np0005542249 ceph-mgr[75372]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  2 05:49:18 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'rbd_support'
Dec  2 05:49:18 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:18.995+0000 7fc31b61a140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  2 05:49:18 np0005542249 ceph-mgr[75372]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  2 05:49:18 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'restful'
Dec  2 05:49:19 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'rgw'
Dec  2 05:49:20 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:20.407+0000 7fc31b61a140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  2 05:49:20 np0005542249 ceph-mgr[75372]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  2 05:49:20 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'rook'
Dec  2 05:49:22 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:22.462+0000 7fc31b61a140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  2 05:49:22 np0005542249 ceph-mgr[75372]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  2 05:49:22 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'selftest'
Dec  2 05:49:22 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:22.719+0000 7fc31b61a140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  2 05:49:22 np0005542249 ceph-mgr[75372]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  2 05:49:22 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'snap_schedule'
Dec  2 05:49:22 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:22.957+0000 7fc31b61a140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  2 05:49:22 np0005542249 ceph-mgr[75372]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  2 05:49:22 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'stats'
Dec  2 05:49:23 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'status'
Dec  2 05:49:23 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:23.462+0000 7fc31b61a140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  2 05:49:23 np0005542249 ceph-mgr[75372]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  2 05:49:23 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'telegraf'
Dec  2 05:49:23 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:23.703+0000 7fc31b61a140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  2 05:49:23 np0005542249 ceph-mgr[75372]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  2 05:49:23 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'telemetry'
Dec  2 05:49:24 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:24.279+0000 7fc31b61a140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  2 05:49:24 np0005542249 ceph-mgr[75372]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  2 05:49:24 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'test_orchestrator'
Dec  2 05:49:24 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:24.972+0000 7fc31b61a140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  2 05:49:24 np0005542249 ceph-mgr[75372]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  2 05:49:24 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'volumes'
Dec  2 05:49:25 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:25.717+0000 7fc31b61a140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  2 05:49:25 np0005542249 ceph-mgr[75372]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  2 05:49:25 np0005542249 ceph-mgr[75372]: mgr[py] Loading python module 'zabbix'
Dec  2 05:49:25 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T10:49:25.955+0000 7fc31b61a140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  2 05:49:25 np0005542249 ceph-mgr[75372]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  2 05:49:25 np0005542249 ceph-mon[75081]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ntxcvs restarted
Dec  2 05:49:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Dec  2 05:49:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  2 05:49:25 np0005542249 ceph-mon[75081]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ntxcvs
Dec  2 05:49:25 np0005542249 ceph-mgr[75372]: ms_deliver_dispatch: unhandled message 0x565033e991e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: mgr handle_mgr_map Activating!
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: mgr handle_mgr_map I am now activating
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.ntxcvs(active, starting, since 0.227844s)
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ntxcvs", "id": "compute-0.ntxcvs"} v 0) v1
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ntxcvs", "id": "compute-0.ntxcvs"}]: dispatch
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).mds e1 all = 1
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: balancer
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Starting
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: log_channel(cluster) log [INF] : Manager daemon compute-0.ntxcvs is now available
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_10:49:26
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] No pools available
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: Active manager daemon compute-0.ntxcvs restarted
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: Activating manager daemon compute-0.ntxcvs
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: Manager daemon compute-0.ntxcvs is now available
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: cephadm
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: crash
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: devicehealth
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [devicehealth INFO root] Starting
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: iostat
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: nfs
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: orchestrator
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: pg_autoscaler
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: progress
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [progress INFO root] Loading...
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [progress INFO root] No stored events to load
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [progress INFO root] Loaded [] historic events
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [progress INFO root] Loaded OSDMap, ready.
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] recovery thread starting
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] starting setup
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: rbd_support
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: restful
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ntxcvs/mirror_snapshot_schedule"} v 0) v1
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [restful INFO root] server_addr: :: server_port: 8003
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ntxcvs/mirror_snapshot_schedule"}]: dispatch
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [restful WARNING root] server not running: no certificate configured
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: status
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: telemetry
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] PerfHandler: starting
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TaskHandler: starting
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ntxcvs/trash_purge_schedule"} v 0) v1
Dec  2 05:49:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ntxcvs/trash_purge_schedule"}]: dispatch
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] setup complete
Dec  2 05:49:26 np0005542249 ceph-mgr[75372]: mgr load Constructed class from module: volumes
Dec  2 05:49:27 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.ntxcvs(active, since 1.23858s)
Dec  2 05:49:27 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec  2 05:49:27 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec  2 05:49:27 np0005542249 charming_moore[76134]: {
Dec  2 05:49:27 np0005542249 charming_moore[76134]:    "mgrmap_epoch": 7,
Dec  2 05:49:27 np0005542249 charming_moore[76134]:    "initialized": true
Dec  2 05:49:27 np0005542249 charming_moore[76134]: }
Dec  2 05:49:27 np0005542249 systemd[1]: libpod-e660fbc9aa31144b208be9044e7614b49772bde7c7f29e51629512f52cc4a07c.scope: Deactivated successfully.
Dec  2 05:49:27 np0005542249 podman[76118]: 2025-12-02 10:49:27.220548434 +0000 UTC m=+19.092104808 container died e660fbc9aa31144b208be9044e7614b49772bde7c7f29e51629512f52cc4a07c (image=quay.io/ceph/ceph:v18, name=charming_moore, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  2 05:49:27 np0005542249 systemd[1]: var-lib-containers-storage-overlay-44ab27d451bbc30b51f927d9e14c2eeb405e3212a0039f6f002864311241ce18-merged.mount: Deactivated successfully.
Dec  2 05:49:27 np0005542249 ceph-mon[75081]: Found migration_current of "None". Setting to last migration.
Dec  2 05:49:27 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:27 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:27 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ntxcvs/mirror_snapshot_schedule"}]: dispatch
Dec  2 05:49:27 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ntxcvs/trash_purge_schedule"}]: dispatch
Dec  2 05:49:27 np0005542249 podman[76118]: 2025-12-02 10:49:27.291326158 +0000 UTC m=+19.162882522 container remove e660fbc9aa31144b208be9044e7614b49772bde7c7f29e51629512f52cc4a07c (image=quay.io/ceph/ceph:v18, name=charming_moore, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:49:27 np0005542249 systemd[1]: libpod-conmon-e660fbc9aa31144b208be9044e7614b49772bde7c7f29e51629512f52cc4a07c.scope: Deactivated successfully.
Dec  2 05:49:27 np0005542249 podman[76295]: 2025-12-02 10:49:27.344613652 +0000 UTC m=+0.034747647 container create cf67abb129db7cdf1099fccd0b93fb3d45949e3dd123f8c54ac6cb47348e154d (image=quay.io/ceph/ceph:v18, name=bold_mahavira, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Dec  2 05:49:27 np0005542249 systemd[1]: Started libpod-conmon-cf67abb129db7cdf1099fccd0b93fb3d45949e3dd123f8c54ac6cb47348e154d.scope.
Dec  2 05:49:27 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:27 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714965a74dc45e68d16691c305f9396d739669e131a5884629c87638f631dc06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:27 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714965a74dc45e68d16691c305f9396d739669e131a5884629c87638f631dc06/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:27 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714965a74dc45e68d16691c305f9396d739669e131a5884629c87638f631dc06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:27 np0005542249 podman[76295]: 2025-12-02 10:49:27.329357111 +0000 UTC m=+0.019491126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:27 np0005542249 podman[76295]: 2025-12-02 10:49:27.441086587 +0000 UTC m=+0.131220652 container init cf67abb129db7cdf1099fccd0b93fb3d45949e3dd123f8c54ac6cb47348e154d (image=quay.io/ceph/ceph:v18, name=bold_mahavira, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  2 05:49:27 np0005542249 podman[76295]: 2025-12-02 10:49:27.449459402 +0000 UTC m=+0.139593407 container start cf67abb129db7cdf1099fccd0b93fb3d45949e3dd123f8c54ac6cb47348e154d (image=quay.io/ceph/ceph:v18, name=bold_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:49:27 np0005542249 podman[76295]: 2025-12-02 10:49:27.45680853 +0000 UTC m=+0.146942605 container attach cf67abb129db7cdf1099fccd0b93fb3d45949e3dd123f8c54ac6cb47348e154d (image=quay.io/ceph/ceph:v18, name=bold_mahavira, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  2 05:49:27 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 05:49:28 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Dec  2 05:49:28 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:28 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec  2 05:49:28 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  2 05:49:28 np0005542249 systemd[1]: libpod-cf67abb129db7cdf1099fccd0b93fb3d45949e3dd123f8c54ac6cb47348e154d.scope: Deactivated successfully.
Dec  2 05:49:28 np0005542249 podman[76295]: 2025-12-02 10:49:28.03588275 +0000 UTC m=+0.726016735 container died cf67abb129db7cdf1099fccd0b93fb3d45949e3dd123f8c54ac6cb47348e154d (image=quay.io/ceph/ceph:v18, name=bold_mahavira, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  2 05:49:28 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Dec  2 05:49:28 np0005542249 systemd[1]: var-lib-containers-storage-overlay-714965a74dc45e68d16691c305f9396d739669e131a5884629c87638f631dc06-merged.mount: Deactivated successfully.
Dec  2 05:49:28 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:28 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Dec  2 05:49:28 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:28 np0005542249 podman[76295]: 2025-12-02 10:49:28.142843748 +0000 UTC m=+0.832977783 container remove cf67abb129db7cdf1099fccd0b93fb3d45949e3dd123f8c54ac6cb47348e154d (image=quay.io/ceph/ceph:v18, name=bold_mahavira, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Dec  2 05:49:28 np0005542249 systemd[1]: libpod-conmon-cf67abb129db7cdf1099fccd0b93fb3d45949e3dd123f8c54ac6cb47348e154d.scope: Deactivated successfully.
Dec  2 05:49:28 np0005542249 ceph-mgr[75372]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  2 05:49:28 np0005542249 podman[76349]: 2025-12-02 10:49:28.209588144 +0000 UTC m=+0.049491683 container create 06051d7135643905452f2c809ed91f1048fa0af02a83fb2e9c053f8d18cb72cc (image=quay.io/ceph/ceph:v18, name=epic_ellis, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:49:28 np0005542249 systemd[1]: Started libpod-conmon-06051d7135643905452f2c809ed91f1048fa0af02a83fb2e9c053f8d18cb72cc.scope.
Dec  2 05:49:28 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:28 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:28 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:28 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:28 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.ntxcvs(active, since 2s)
Dec  2 05:49:28 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/582412ef933fefd06ea73fe29fa857a47be7ad7f4cdeec51456fb200ab851410/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:28 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/582412ef933fefd06ea73fe29fa857a47be7ad7f4cdeec51456fb200ab851410/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:28 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/582412ef933fefd06ea73fe29fa857a47be7ad7f4cdeec51456fb200ab851410/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:28 np0005542249 podman[76349]: 2025-12-02 10:49:28.188974779 +0000 UTC m=+0.028878348 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:28 np0005542249 podman[76349]: 2025-12-02 10:49:28.299391449 +0000 UTC m=+0.139295008 container init 06051d7135643905452f2c809ed91f1048fa0af02a83fb2e9c053f8d18cb72cc (image=quay.io/ceph/ceph:v18, name=epic_ellis, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:49:28 np0005542249 podman[76349]: 2025-12-02 10:49:28.304659442 +0000 UTC m=+0.144562981 container start 06051d7135643905452f2c809ed91f1048fa0af02a83fb2e9c053f8d18cb72cc (image=quay.io/ceph/ceph:v18, name=epic_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:49:28 np0005542249 podman[76349]: 2025-12-02 10:49:28.307699533 +0000 UTC m=+0.147603072 container attach 06051d7135643905452f2c809ed91f1048fa0af02a83fb2e9c053f8d18cb72cc (image=quay.io/ceph/ceph:v18, name=epic_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  2 05:49:28 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 05:49:28 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Dec  2 05:49:28 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:28 np0005542249 ceph-mgr[75372]: [cephadm INFO root] Set ssh ssh_user
Dec  2 05:49:28 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Dec  2 05:49:28 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Dec  2 05:49:28 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:28 np0005542249 ceph-mgr[75372]: [cephadm INFO root] Set ssh ssh_config
Dec  2 05:49:28 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Dec  2 05:49:28 np0005542249 ceph-mgr[75372]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Dec  2 05:49:28 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Dec  2 05:49:28 np0005542249 epic_ellis[76366]: ssh user set to ceph-admin. sudo will be used
Dec  2 05:49:28 np0005542249 systemd[1]: libpod-06051d7135643905452f2c809ed91f1048fa0af02a83fb2e9c053f8d18cb72cc.scope: Deactivated successfully.
Dec  2 05:49:28 np0005542249 podman[76349]: 2025-12-02 10:49:28.83315214 +0000 UTC m=+0.673055679 container died 06051d7135643905452f2c809ed91f1048fa0af02a83fb2e9c053f8d18cb72cc (image=quay.io/ceph/ceph:v18, name=epic_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  2 05:49:28 np0005542249 systemd[1]: var-lib-containers-storage-overlay-582412ef933fefd06ea73fe29fa857a47be7ad7f4cdeec51456fb200ab851410-merged.mount: Deactivated successfully.
Dec  2 05:49:28 np0005542249 podman[76349]: 2025-12-02 10:49:28.873156517 +0000 UTC m=+0.713060056 container remove 06051d7135643905452f2c809ed91f1048fa0af02a83fb2e9c053f8d18cb72cc (image=quay.io/ceph/ceph:v18, name=epic_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  2 05:49:28 np0005542249 systemd[1]: libpod-conmon-06051d7135643905452f2c809ed91f1048fa0af02a83fb2e9c053f8d18cb72cc.scope: Deactivated successfully.
Dec  2 05:49:28 np0005542249 podman[76404]: 2025-12-02 10:49:28.96062586 +0000 UTC m=+0.058740811 container create 34c1ea7be0b562eb8f12298a6fc90623d7e5d6fdf9e6e97ee5b29f996a216d87 (image=quay.io/ceph/ceph:v18, name=festive_dirac, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  2 05:49:29 np0005542249 systemd[1]: Started libpod-conmon-34c1ea7be0b562eb8f12298a6fc90623d7e5d6fdf9e6e97ee5b29f996a216d87.scope.
Dec  2 05:49:29 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:29 np0005542249 podman[76404]: 2025-12-02 10:49:28.938345111 +0000 UTC m=+0.036460082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:29 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3615441c0290cbe0a5db05f6662c32beda20e1226e8439562a6b78a93a434e4b/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:29 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3615441c0290cbe0a5db05f6662c32beda20e1226e8439562a6b78a93a434e4b/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:29 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3615441c0290cbe0a5db05f6662c32beda20e1226e8439562a6b78a93a434e4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:29 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3615441c0290cbe0a5db05f6662c32beda20e1226e8439562a6b78a93a434e4b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:29 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3615441c0290cbe0a5db05f6662c32beda20e1226e8439562a6b78a93a434e4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:29 np0005542249 podman[76404]: 2025-12-02 10:49:29.087415661 +0000 UTC m=+0.185530612 container init 34c1ea7be0b562eb8f12298a6fc90623d7e5d6fdf9e6e97ee5b29f996a216d87 (image=quay.io/ceph/ceph:v18, name=festive_dirac, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  2 05:49:29 np0005542249 podman[76404]: 2025-12-02 10:49:29.097379979 +0000 UTC m=+0.195494900 container start 34c1ea7be0b562eb8f12298a6fc90623d7e5d6fdf9e6e97ee5b29f996a216d87 (image=quay.io/ceph/ceph:v18, name=festive_dirac, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:49:29 np0005542249 podman[76404]: 2025-12-02 10:49:29.100895774 +0000 UTC m=+0.199010735 container attach 34c1ea7be0b562eb8f12298a6fc90623d7e5d6fdf9e6e97ee5b29f996a216d87 (image=quay.io/ceph/ceph:v18, name=festive_dirac, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:49:29 np0005542249 ceph-mgr[75372]: [cephadm INFO cherrypy.error] [02/Dec/2025:10:49:29] ENGINE Bus STARTING
Dec  2 05:49:29 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : [02/Dec/2025:10:49:29] ENGINE Bus STARTING
Dec  2 05:49:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019922978 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:49:29 np0005542249 ceph-mgr[75372]: [cephadm INFO cherrypy.error] [02/Dec/2025:10:49:29] ENGINE Serving on http://192.168.122.100:8765
Dec  2 05:49:29 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : [02/Dec/2025:10:49:29] ENGINE Serving on http://192.168.122.100:8765
Dec  2 05:49:29 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:29 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:29 np0005542249 ceph-mgr[75372]: [cephadm INFO cherrypy.error] [02/Dec/2025:10:49:29] ENGINE Serving on https://192.168.122.100:7150
Dec  2 05:49:29 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : [02/Dec/2025:10:49:29] ENGINE Serving on https://192.168.122.100:7150
Dec  2 05:49:29 np0005542249 ceph-mgr[75372]: [cephadm INFO cherrypy.error] [02/Dec/2025:10:49:29] ENGINE Bus STARTED
Dec  2 05:49:29 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : [02/Dec/2025:10:49:29] ENGINE Bus STARTED
Dec  2 05:49:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec  2 05:49:29 np0005542249 ceph-mgr[75372]: [cephadm INFO cherrypy.error] [02/Dec/2025:10:49:29] ENGINE Client ('192.168.122.100', 42192) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  2 05:49:29 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : [02/Dec/2025:10:49:29] ENGINE Client ('192.168.122.100', 42192) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  2 05:49:29 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  2 05:49:29 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 05:49:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Dec  2 05:49:29 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:29 np0005542249 ceph-mgr[75372]: [cephadm INFO root] Set ssh ssh_identity_key
Dec  2 05:49:29 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Dec  2 05:49:29 np0005542249 ceph-mgr[75372]: [cephadm INFO root] Set ssh private key
Dec  2 05:49:29 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Set ssh private key
Dec  2 05:49:29 np0005542249 systemd[1]: libpod-34c1ea7be0b562eb8f12298a6fc90623d7e5d6fdf9e6e97ee5b29f996a216d87.scope: Deactivated successfully.
Dec  2 05:49:29 np0005542249 podman[76404]: 2025-12-02 10:49:29.682581224 +0000 UTC m=+0.780696165 container died 34c1ea7be0b562eb8f12298a6fc90623d7e5d6fdf9e6e97ee5b29f996a216d87 (image=quay.io/ceph/ceph:v18, name=festive_dirac, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:49:29 np0005542249 systemd[1]: var-lib-containers-storage-overlay-3615441c0290cbe0a5db05f6662c32beda20e1226e8439562a6b78a93a434e4b-merged.mount: Deactivated successfully.
Dec  2 05:49:29 np0005542249 podman[76404]: 2025-12-02 10:49:29.738728744 +0000 UTC m=+0.836843685 container remove 34c1ea7be0b562eb8f12298a6fc90623d7e5d6fdf9e6e97ee5b29f996a216d87 (image=quay.io/ceph/ceph:v18, name=festive_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:49:29 np0005542249 systemd[1]: libpod-conmon-34c1ea7be0b562eb8f12298a6fc90623d7e5d6fdf9e6e97ee5b29f996a216d87.scope: Deactivated successfully.
Dec  2 05:49:29 np0005542249 podman[76480]: 2025-12-02 10:49:29.821132962 +0000 UTC m=+0.059047870 container create cfb7a52e6e19fcda2356163736c26280e9751210f786cbff14573f51366fe2b8 (image=quay.io/ceph/ceph:v18, name=elegant_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  2 05:49:29 np0005542249 systemd[1]: Started libpod-conmon-cfb7a52e6e19fcda2356163736c26280e9751210f786cbff14573f51366fe2b8.scope.
Dec  2 05:49:29 np0005542249 podman[76480]: 2025-12-02 10:49:29.793529939 +0000 UTC m=+0.031444887 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:29 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:29 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4d800a4c638c25094783c02069f8caeefb908dd94438ef1a009aaaf40f39e26/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:29 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4d800a4c638c25094783c02069f8caeefb908dd94438ef1a009aaaf40f39e26/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:29 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4d800a4c638c25094783c02069f8caeefb908dd94438ef1a009aaaf40f39e26/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:29 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4d800a4c638c25094783c02069f8caeefb908dd94438ef1a009aaaf40f39e26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:29 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4d800a4c638c25094783c02069f8caeefb908dd94438ef1a009aaaf40f39e26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:29 np0005542249 podman[76480]: 2025-12-02 10:49:29.918040919 +0000 UTC m=+0.155955797 container init cfb7a52e6e19fcda2356163736c26280e9751210f786cbff14573f51366fe2b8 (image=quay.io/ceph/ceph:v18, name=elegant_curie, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  2 05:49:29 np0005542249 podman[76480]: 2025-12-02 10:49:29.923936887 +0000 UTC m=+0.161851745 container start cfb7a52e6e19fcda2356163736c26280e9751210f786cbff14573f51366fe2b8 (image=quay.io/ceph/ceph:v18, name=elegant_curie, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:49:29 np0005542249 podman[76480]: 2025-12-02 10:49:29.927428692 +0000 UTC m=+0.165343560 container attach cfb7a52e6e19fcda2356163736c26280e9751210f786cbff14573f51366fe2b8 (image=quay.io/ceph/ceph:v18, name=elegant_curie, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  2 05:49:30 np0005542249 ceph-mgr[75372]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  2 05:49:30 np0005542249 ceph-mon[75081]: Set ssh ssh_user
Dec  2 05:49:30 np0005542249 ceph-mon[75081]: Set ssh ssh_config
Dec  2 05:49:30 np0005542249 ceph-mon[75081]: ssh user set to ceph-admin. sudo will be used
Dec  2 05:49:30 np0005542249 ceph-mon[75081]: [02/Dec/2025:10:49:29] ENGINE Bus STARTING
Dec  2 05:49:30 np0005542249 ceph-mon[75081]: [02/Dec/2025:10:49:29] ENGINE Serving on http://192.168.122.100:8765
Dec  2 05:49:30 np0005542249 ceph-mon[75081]: [02/Dec/2025:10:49:29] ENGINE Serving on https://192.168.122.100:7150
Dec  2 05:49:30 np0005542249 ceph-mon[75081]: [02/Dec/2025:10:49:29] ENGINE Bus STARTED
Dec  2 05:49:30 np0005542249 ceph-mon[75081]: [02/Dec/2025:10:49:29] ENGINE Client ('192.168.122.100', 42192) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  2 05:49:30 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:30 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 05:49:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Dec  2 05:49:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:30 np0005542249 ceph-mgr[75372]: [cephadm INFO root] Set ssh ssh_identity_pub
Dec  2 05:49:30 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Dec  2 05:49:30 np0005542249 systemd[1]: libpod-cfb7a52e6e19fcda2356163736c26280e9751210f786cbff14573f51366fe2b8.scope: Deactivated successfully.
Dec  2 05:49:30 np0005542249 podman[76480]: 2025-12-02 10:49:30.473136224 +0000 UTC m=+0.711051112 container died cfb7a52e6e19fcda2356163736c26280e9751210f786cbff14573f51366fe2b8 (image=quay.io/ceph/ceph:v18, name=elegant_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:49:30 np0005542249 systemd[1]: var-lib-containers-storage-overlay-c4d800a4c638c25094783c02069f8caeefb908dd94438ef1a009aaaf40f39e26-merged.mount: Deactivated successfully.
Dec  2 05:49:30 np0005542249 podman[76480]: 2025-12-02 10:49:30.547384751 +0000 UTC m=+0.785299619 container remove cfb7a52e6e19fcda2356163736c26280e9751210f786cbff14573f51366fe2b8 (image=quay.io/ceph/ceph:v18, name=elegant_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  2 05:49:30 np0005542249 systemd[1]: libpod-conmon-cfb7a52e6e19fcda2356163736c26280e9751210f786cbff14573f51366fe2b8.scope: Deactivated successfully.
Dec  2 05:49:30 np0005542249 podman[76533]: 2025-12-02 10:49:30.628222636 +0000 UTC m=+0.054665422 container create 292e549e283f0534dbf6c11bed6a0fea167a6213ddae1eadf24ba05105307b7b (image=quay.io/ceph/ceph:v18, name=elegant_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  2 05:49:30 np0005542249 systemd[1]: Started libpod-conmon-292e549e283f0534dbf6c11bed6a0fea167a6213ddae1eadf24ba05105307b7b.scope.
Dec  2 05:49:30 np0005542249 podman[76533]: 2025-12-02 10:49:30.602883834 +0000 UTC m=+0.029326700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:30 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:30 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb7b3a9f37dde8e18ba29d519110b1aae53bfa0029aaa4ef4ed687f822d7fdb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:30 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb7b3a9f37dde8e18ba29d519110b1aae53bfa0029aaa4ef4ed687f822d7fdb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:30 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb7b3a9f37dde8e18ba29d519110b1aae53bfa0029aaa4ef4ed687f822d7fdb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:30 np0005542249 podman[76533]: 2025-12-02 10:49:30.751201175 +0000 UTC m=+0.177644031 container init 292e549e283f0534dbf6c11bed6a0fea167a6213ddae1eadf24ba05105307b7b (image=quay.io/ceph/ceph:v18, name=elegant_heyrovsky, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:49:30 np0005542249 podman[76533]: 2025-12-02 10:49:30.763968768 +0000 UTC m=+0.190411584 container start 292e549e283f0534dbf6c11bed6a0fea167a6213ddae1eadf24ba05105307b7b (image=quay.io/ceph/ceph:v18, name=elegant_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  2 05:49:30 np0005542249 podman[76533]: 2025-12-02 10:49:30.76847417 +0000 UTC m=+0.194916966 container attach 292e549e283f0534dbf6c11bed6a0fea167a6213ddae1eadf24ba05105307b7b (image=quay.io/ceph/ceph:v18, name=elegant_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  2 05:49:31 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 05:49:31 np0005542249 elegant_heyrovsky[76550]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDuWMj9mRZ2qJhcG4dbSvkw960kSyv5OvZ43iMB9pc5qxkaMEJlkyE7Mc9MzD+1ap07qHdGvcjovQloU0bB1vZ+nEmfbey2Cv8BjM6aiShF4DnXuGGM+zmAWJNBPZJdRARt1xhev6rz+jwqMRoK8010R2IBGo17+6qF5Da33RBbWlTYqxsToi7P1HktKUo9vd8C4aAbjrqae1pPPG5puOvHgTy/ZVTI4yAwEm/ckt3ZIMv1FCbqAwKIpOA1/akn7xqw1lTDyAuHPqQQsnb6aPgmQn17nA8t1tWq4/xGhKy+M0RllYyGtm59u06tJQBMPpyVLHx2FZQ3uICRGV3twvviPKBjzMAOxg1DU7ZigWaMjbsVv4cDJ26apvepmnGFGgEz9Oc1U2Bi5zaq4s9xisS1wJz18P0UW9JmMduqKenoZIec+2FZbuNIJ9q5cWkgSMrdXAmYXHtquEtaA6WEH3owNrqjK3djuwdl7eJOSFZTAOEsE7Xmyn9DxCQMtljkdcM= zuul@controller
Dec  2 05:49:31 np0005542249 systemd[1]: libpod-292e549e283f0534dbf6c11bed6a0fea167a6213ddae1eadf24ba05105307b7b.scope: Deactivated successfully.
Dec  2 05:49:31 np0005542249 conmon[76550]: conmon 292e549e283f0534dbf6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-292e549e283f0534dbf6c11bed6a0fea167a6213ddae1eadf24ba05105307b7b.scope/container/memory.events
Dec  2 05:49:31 np0005542249 podman[76533]: 2025-12-02 10:49:31.366340555 +0000 UTC m=+0.792783371 container died 292e549e283f0534dbf6c11bed6a0fea167a6213ddae1eadf24ba05105307b7b (image=quay.io/ceph/ceph:v18, name=elegant_heyrovsky, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  2 05:49:31 np0005542249 systemd[1]: var-lib-containers-storage-overlay-eeb7b3a9f37dde8e18ba29d519110b1aae53bfa0029aaa4ef4ed687f822d7fdb-merged.mount: Deactivated successfully.
Dec  2 05:49:31 np0005542249 podman[76533]: 2025-12-02 10:49:31.40670703 +0000 UTC m=+0.833149816 container remove 292e549e283f0534dbf6c11bed6a0fea167a6213ddae1eadf24ba05105307b7b (image=quay.io/ceph/ceph:v18, name=elegant_heyrovsky, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  2 05:49:31 np0005542249 systemd[1]: libpod-conmon-292e549e283f0534dbf6c11bed6a0fea167a6213ddae1eadf24ba05105307b7b.scope: Deactivated successfully.
Dec  2 05:49:31 np0005542249 ceph-mon[75081]: Set ssh ssh_identity_key
Dec  2 05:49:31 np0005542249 ceph-mon[75081]: Set ssh private key
Dec  2 05:49:31 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:31 np0005542249 ceph-mon[75081]: Set ssh ssh_identity_pub
Dec  2 05:49:31 np0005542249 podman[76587]: 2025-12-02 10:49:31.469054228 +0000 UTC m=+0.043962043 container create 9bb92820e3ed351a10f9f9412532b587bfd9e1850f80acfe724fe0242b564ec2 (image=quay.io/ceph/ceph:v18, name=nice_dijkstra, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  2 05:49:31 np0005542249 systemd[1]: Started libpod-conmon-9bb92820e3ed351a10f9f9412532b587bfd9e1850f80acfe724fe0242b564ec2.scope.
Dec  2 05:49:31 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:31 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1489e0490b888c707f3a2c8b7b6eae9a5362c6824661557c380d4769eb8568b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:31 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1489e0490b888c707f3a2c8b7b6eae9a5362c6824661557c380d4769eb8568b1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:31 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1489e0490b888c707f3a2c8b7b6eae9a5362c6824661557c380d4769eb8568b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:31 np0005542249 podman[76587]: 2025-12-02 10:49:31.448257359 +0000 UTC m=+0.023165154 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:31 np0005542249 podman[76587]: 2025-12-02 10:49:31.548608159 +0000 UTC m=+0.123515984 container init 9bb92820e3ed351a10f9f9412532b587bfd9e1850f80acfe724fe0242b564ec2 (image=quay.io/ceph/ceph:v18, name=nice_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  2 05:49:31 np0005542249 podman[76587]: 2025-12-02 10:49:31.554126547 +0000 UTC m=+0.129034322 container start 9bb92820e3ed351a10f9f9412532b587bfd9e1850f80acfe724fe0242b564ec2 (image=quay.io/ceph/ceph:v18, name=nice_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  2 05:49:31 np0005542249 podman[76587]: 2025-12-02 10:49:31.563496149 +0000 UTC m=+0.138403974 container attach 9bb92820e3ed351a10f9f9412532b587bfd9e1850f80acfe724fe0242b564ec2 (image=quay.io/ceph/ceph:v18, name=nice_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Dec  2 05:49:32 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 05:49:32 np0005542249 ceph-mgr[75372]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  2 05:49:32 np0005542249 systemd[1]: Created slice User Slice of UID 42477.
Dec  2 05:49:32 np0005542249 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  2 05:49:32 np0005542249 systemd-logind[787]: New session 21 of user ceph-admin.
Dec  2 05:49:32 np0005542249 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  2 05:49:32 np0005542249 systemd[1]: Starting User Manager for UID 42477...
Dec  2 05:49:32 np0005542249 systemd[76634]: Queued start job for default target Main User Target.
Dec  2 05:49:32 np0005542249 systemd[76634]: Created slice User Application Slice.
Dec  2 05:49:32 np0005542249 systemd[76634]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  2 05:49:32 np0005542249 systemd[76634]: Started Daily Cleanup of User's Temporary Directories.
Dec  2 05:49:32 np0005542249 systemd[76634]: Reached target Paths.
Dec  2 05:49:32 np0005542249 systemd[76634]: Reached target Timers.
Dec  2 05:49:32 np0005542249 systemd[76634]: Starting D-Bus User Message Bus Socket...
Dec  2 05:49:32 np0005542249 systemd[76634]: Starting Create User's Volatile Files and Directories...
Dec  2 05:49:32 np0005542249 systemd-logind[787]: New session 23 of user ceph-admin.
Dec  2 05:49:32 np0005542249 systemd[76634]: Listening on D-Bus User Message Bus Socket.
Dec  2 05:49:32 np0005542249 systemd[76634]: Reached target Sockets.
Dec  2 05:49:32 np0005542249 systemd[76634]: Finished Create User's Volatile Files and Directories.
Dec  2 05:49:32 np0005542249 systemd[76634]: Reached target Basic System.
Dec  2 05:49:32 np0005542249 systemd[76634]: Reached target Main User Target.
Dec  2 05:49:32 np0005542249 systemd[76634]: Startup finished in 131ms.
Dec  2 05:49:32 np0005542249 systemd[1]: Started User Manager for UID 42477.
Dec  2 05:49:32 np0005542249 systemd[1]: Started Session 21 of User ceph-admin.
Dec  2 05:49:32 np0005542249 systemd[1]: Started Session 23 of User ceph-admin.
Dec  2 05:49:33 np0005542249 systemd-logind[787]: New session 24 of user ceph-admin.
Dec  2 05:49:33 np0005542249 systemd[1]: Started Session 24 of User ceph-admin.
Dec  2 05:49:33 np0005542249 systemd-logind[787]: New session 25 of user ceph-admin.
Dec  2 05:49:33 np0005542249 systemd[1]: Started Session 25 of User ceph-admin.
Dec  2 05:49:33 np0005542249 ceph-mgr[75372]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Dec  2 05:49:33 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Dec  2 05:49:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053053 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:49:34 np0005542249 ceph-mgr[75372]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  2 05:49:34 np0005542249 systemd-logind[787]: New session 26 of user ceph-admin.
Dec  2 05:49:34 np0005542249 ceph-mon[75081]: Deploying cephadm binary to compute-0
Dec  2 05:49:35 np0005542249 systemd[1]: Started Session 26 of User ceph-admin.
Dec  2 05:49:35 np0005542249 systemd-logind[787]: New session 27 of user ceph-admin.
Dec  2 05:49:35 np0005542249 systemd[1]: Started Session 27 of User ceph-admin.
Dec  2 05:49:35 np0005542249 systemd-logind[787]: New session 28 of user ceph-admin.
Dec  2 05:49:35 np0005542249 systemd[1]: Started Session 28 of User ceph-admin.
Dec  2 05:49:36 np0005542249 systemd-logind[787]: New session 29 of user ceph-admin.
Dec  2 05:49:36 np0005542249 systemd[1]: Started Session 29 of User ceph-admin.
Dec  2 05:49:36 np0005542249 systemd-logind[787]: New session 30 of user ceph-admin.
Dec  2 05:49:36 np0005542249 systemd[1]: Started Session 30 of User ceph-admin.
Dec  2 05:49:36 np0005542249 ceph-mgr[75372]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  2 05:49:37 np0005542249 systemd-logind[787]: New session 31 of user ceph-admin.
Dec  2 05:49:37 np0005542249 systemd[1]: Started Session 31 of User ceph-admin.
Dec  2 05:49:37 np0005542249 systemd-logind[787]: New session 32 of user ceph-admin.
Dec  2 05:49:37 np0005542249 systemd[1]: Started Session 32 of User ceph-admin.
Dec  2 05:49:38 np0005542249 systemd-logind[787]: New session 33 of user ceph-admin.
Dec  2 05:49:38 np0005542249 systemd[1]: Started Session 33 of User ceph-admin.
Dec  2 05:49:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec  2 05:49:38 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:38 np0005542249 ceph-mgr[75372]: [cephadm INFO root] Added host compute-0
Dec  2 05:49:38 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Added host compute-0
Dec  2 05:49:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec  2 05:49:38 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  2 05:49:38 np0005542249 nice_dijkstra[76604]: Added host 'compute-0' with addr '192.168.122.100'
Dec  2 05:49:38 np0005542249 systemd[1]: libpod-9bb92820e3ed351a10f9f9412532b587bfd9e1850f80acfe724fe0242b564ec2.scope: Deactivated successfully.
Dec  2 05:49:38 np0005542249 ceph-mgr[75372]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  2 05:49:38 np0005542249 podman[77252]: 2025-12-02 10:49:38.972460145 +0000 UTC m=+0.051095555 container died 9bb92820e3ed351a10f9f9412532b587bfd9e1850f80acfe724fe0242b564ec2 (image=quay.io/ceph/ceph:v18, name=nice_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  2 05:49:39 np0005542249 systemd[1]: var-lib-containers-storage-overlay-1489e0490b888c707f3a2c8b7b6eae9a5362c6824661557c380d4769eb8568b1-merged.mount: Deactivated successfully.
Dec  2 05:49:39 np0005542249 podman[77252]: 2025-12-02 10:49:39.029631344 +0000 UTC m=+0.108266734 container remove 9bb92820e3ed351a10f9f9412532b587bfd9e1850f80acfe724fe0242b564ec2 (image=quay.io/ceph/ceph:v18, name=nice_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  2 05:49:39 np0005542249 systemd[1]: libpod-conmon-9bb92820e3ed351a10f9f9412532b587bfd9e1850f80acfe724fe0242b564ec2.scope: Deactivated successfully.
Dec  2 05:49:39 np0005542249 podman[77306]: 2025-12-02 10:49:39.107358684 +0000 UTC m=+0.047627062 container create a28a5568689d7f15e2774219b780e9da3dc0a6f31de83c479fcc4f026c31155d (image=quay.io/ceph/ceph:v18, name=kind_poincare, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:49:39 np0005542249 systemd[1]: Started libpod-conmon-a28a5568689d7f15e2774219b780e9da3dc0a6f31de83c479fcc4f026c31155d.scope.
Dec  2 05:49:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:49:39 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:39 np0005542249 podman[77306]: 2025-12-02 10:49:39.0863854 +0000 UTC m=+0.026653798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:39 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113bd972533684dce4728bba511dbe1248568d4b0d68f61976d13acd6ccef693/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:39 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113bd972533684dce4728bba511dbe1248568d4b0d68f61976d13acd6ccef693/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:39 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113bd972533684dce4728bba511dbe1248568d4b0d68f61976d13acd6ccef693/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:39 np0005542249 podman[77306]: 2025-12-02 10:49:39.196605496 +0000 UTC m=+0.136873904 container init a28a5568689d7f15e2774219b780e9da3dc0a6f31de83c479fcc4f026c31155d (image=quay.io/ceph/ceph:v18, name=kind_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  2 05:49:39 np0005542249 podman[77306]: 2025-12-02 10:49:39.208479636 +0000 UTC m=+0.148748064 container start a28a5568689d7f15e2774219b780e9da3dc0a6f31de83c479fcc4f026c31155d (image=quay.io/ceph/ceph:v18, name=kind_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  2 05:49:39 np0005542249 podman[77306]: 2025-12-02 10:49:39.213194802 +0000 UTC m=+0.153463220 container attach a28a5568689d7f15e2774219b780e9da3dc0a6f31de83c479fcc4f026c31155d (image=quay.io/ceph/ceph:v18, name=kind_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 05:49:39 np0005542249 podman[77398]: 2025-12-02 10:49:39.520301965 +0000 UTC m=+0.045143726 container create 643d8ed4f8601839967aa86a22d5f502e4561696c21ae0dbbbd01c853e35c881 (image=quay.io/ceph/ceph:v18, name=elated_jang, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Dec  2 05:49:39 np0005542249 systemd[1]: Started libpod-conmon-643d8ed4f8601839967aa86a22d5f502e4561696c21ae0dbbbd01c853e35c881.scope.
Dec  2 05:49:39 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:39 np0005542249 podman[77398]: 2025-12-02 10:49:39.592268351 +0000 UTC m=+0.117110102 container init 643d8ed4f8601839967aa86a22d5f502e4561696c21ae0dbbbd01c853e35c881 (image=quay.io/ceph/ceph:v18, name=elated_jang, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Dec  2 05:49:39 np0005542249 podman[77398]: 2025-12-02 10:49:39.499066573 +0000 UTC m=+0.023908344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:39 np0005542249 podman[77398]: 2025-12-02 10:49:39.597332137 +0000 UTC m=+0.122173878 container start 643d8ed4f8601839967aa86a22d5f502e4561696c21ae0dbbbd01c853e35c881 (image=quay.io/ceph/ceph:v18, name=elated_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:49:39 np0005542249 podman[77398]: 2025-12-02 10:49:39.600984566 +0000 UTC m=+0.125826337 container attach 643d8ed4f8601839967aa86a22d5f502e4561696c21ae0dbbbd01c853e35c881 (image=quay.io/ceph/ceph:v18, name=elated_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  2 05:49:39 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 05:49:39 np0005542249 ceph-mgr[75372]: [cephadm INFO root] Saving service mon spec with placement count:5
Dec  2 05:49:39 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Dec  2 05:49:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec  2 05:49:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:39 np0005542249 kind_poincare[77366]: Scheduled mon update...
Dec  2 05:49:39 np0005542249 systemd[1]: libpod-a28a5568689d7f15e2774219b780e9da3dc0a6f31de83c479fcc4f026c31155d.scope: Deactivated successfully.
Dec  2 05:49:39 np0005542249 podman[77306]: 2025-12-02 10:49:39.773016684 +0000 UTC m=+0.713285092 container died a28a5568689d7f15e2774219b780e9da3dc0a6f31de83c479fcc4f026c31155d (image=quay.io/ceph/ceph:v18, name=kind_poincare, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  2 05:49:39 np0005542249 systemd[1]: var-lib-containers-storage-overlay-113bd972533684dce4728bba511dbe1248568d4b0d68f61976d13acd6ccef693-merged.mount: Deactivated successfully.
Dec  2 05:49:39 np0005542249 podman[77306]: 2025-12-02 10:49:39.819263289 +0000 UTC m=+0.759531677 container remove a28a5568689d7f15e2774219b780e9da3dc0a6f31de83c479fcc4f026c31155d (image=quay.io/ceph/ceph:v18, name=kind_poincare, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Dec  2 05:49:39 np0005542249 systemd[1]: libpod-conmon-a28a5568689d7f15e2774219b780e9da3dc0a6f31de83c479fcc4f026c31155d.scope: Deactivated successfully.
Dec  2 05:49:39 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:39 np0005542249 ceph-mon[75081]: Added host compute-0
Dec  2 05:49:39 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:39 np0005542249 elated_jang[77433]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Dec  2 05:49:39 np0005542249 podman[77453]: 2025-12-02 10:49:39.86950122 +0000 UTC m=+0.034883299 container create 9294ccde1dcd88659d98ff8cb181347a2daa8055840fcbc4a015471e953be9b4 (image=quay.io/ceph/ceph:v18, name=heuristic_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  2 05:49:39 np0005542249 systemd[1]: libpod-643d8ed4f8601839967aa86a22d5f502e4561696c21ae0dbbbd01c853e35c881.scope: Deactivated successfully.
Dec  2 05:49:39 np0005542249 podman[77398]: 2025-12-02 10:49:39.879965771 +0000 UTC m=+0.404807522 container died 643d8ed4f8601839967aa86a22d5f502e4561696c21ae0dbbbd01c853e35c881 (image=quay.io/ceph/ceph:v18, name=elated_jang, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:49:39 np0005542249 systemd[1]: Started libpod-conmon-9294ccde1dcd88659d98ff8cb181347a2daa8055840fcbc4a015471e953be9b4.scope.
Dec  2 05:49:39 np0005542249 podman[77398]: 2025-12-02 10:49:39.93379624 +0000 UTC m=+0.458637981 container remove 643d8ed4f8601839967aa86a22d5f502e4561696c21ae0dbbbd01c853e35c881 (image=quay.io/ceph/ceph:v18, name=elated_jang, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  2 05:49:39 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:39 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e937ac4171f18cd6deb0ee59e88596720d6a576aadc580b092a88553ad35359c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:39 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e937ac4171f18cd6deb0ee59e88596720d6a576aadc580b092a88553ad35359c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:39 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e937ac4171f18cd6deb0ee59e88596720d6a576aadc580b092a88553ad35359c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:39 np0005542249 systemd[1]: libpod-conmon-643d8ed4f8601839967aa86a22d5f502e4561696c21ae0dbbbd01c853e35c881.scope: Deactivated successfully.
Dec  2 05:49:39 np0005542249 podman[77453]: 2025-12-02 10:49:39.854905477 +0000 UTC m=+0.020287576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:39 np0005542249 podman[77453]: 2025-12-02 10:49:39.958953207 +0000 UTC m=+0.124335306 container init 9294ccde1dcd88659d98ff8cb181347a2daa8055840fcbc4a015471e953be9b4 (image=quay.io/ceph/ceph:v18, name=heuristic_raman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:49:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Dec  2 05:49:39 np0005542249 podman[77453]: 2025-12-02 10:49:39.970841876 +0000 UTC m=+0.136223965 container start 9294ccde1dcd88659d98ff8cb181347a2daa8055840fcbc4a015471e953be9b4 (image=quay.io/ceph/ceph:v18, name=heuristic_raman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:49:39 np0005542249 podman[77453]: 2025-12-02 10:49:39.975972285 +0000 UTC m=+0.141354384 container attach 9294ccde1dcd88659d98ff8cb181347a2daa8055840fcbc4a015471e953be9b4 (image=quay.io/ceph/ceph:v18, name=heuristic_raman, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  2 05:49:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:40 np0005542249 systemd[1]: var-lib-containers-storage-overlay-86ba9c6d0c18fa5eafd65a2eb2aaefb12ef9cee80ce09545598291795a199417-merged.mount: Deactivated successfully.
Dec  2 05:49:40 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 05:49:40 np0005542249 ceph-mgr[75372]: [cephadm INFO root] Saving service mgr spec with placement count:2
Dec  2 05:49:40 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Dec  2 05:49:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec  2 05:49:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:49:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:40 np0005542249 heuristic_raman[77477]: Scheduled mgr update...
Dec  2 05:49:40 np0005542249 systemd[1]: libpod-9294ccde1dcd88659d98ff8cb181347a2daa8055840fcbc4a015471e953be9b4.scope: Deactivated successfully.
Dec  2 05:49:40 np0005542249 podman[77453]: 2025-12-02 10:49:40.696983584 +0000 UTC m=+0.862365693 container died 9294ccde1dcd88659d98ff8cb181347a2daa8055840fcbc4a015471e953be9b4 (image=quay.io/ceph/ceph:v18, name=heuristic_raman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  2 05:49:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:40 np0005542249 ceph-mgr[75372]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  2 05:49:41 np0005542249 ceph-mon[75081]: Saving service mon spec with placement count:5
Dec  2 05:49:41 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:41 np0005542249 ceph-mon[75081]: Saving service mgr spec with placement count:2
Dec  2 05:49:41 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:41 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:41 np0005542249 systemd[1]: var-lib-containers-storage-overlay-e937ac4171f18cd6deb0ee59e88596720d6a576aadc580b092a88553ad35359c-merged.mount: Deactivated successfully.
Dec  2 05:49:41 np0005542249 podman[77453]: 2025-12-02 10:49:41.220643753 +0000 UTC m=+1.386025842 container remove 9294ccde1dcd88659d98ff8cb181347a2daa8055840fcbc4a015471e953be9b4 (image=quay.io/ceph/ceph:v18, name=heuristic_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  2 05:49:41 np0005542249 systemd[1]: libpod-conmon-9294ccde1dcd88659d98ff8cb181347a2daa8055840fcbc4a015471e953be9b4.scope: Deactivated successfully.
Dec  2 05:49:41 np0005542249 podman[77740]: 2025-12-02 10:49:41.317304503 +0000 UTC m=+0.066509301 container create bf54373595c9139ddf049b0c563c1b2387f39e2fec7126735aeb8e9e758f294f (image=quay.io/ceph/ceph:v18, name=magical_merkle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:49:41 np0005542249 systemd[1]: Started libpod-conmon-bf54373595c9139ddf049b0c563c1b2387f39e2fec7126735aeb8e9e758f294f.scope.
Dec  2 05:49:41 np0005542249 podman[77740]: 2025-12-02 10:49:41.287285185 +0000 UTC m=+0.036490013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:41 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:41 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/231b66463cb694cacb8ad68227afcca5a76e255ebb07f472dbb2b55d69246c9e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:41 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/231b66463cb694cacb8ad68227afcca5a76e255ebb07f472dbb2b55d69246c9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:41 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/231b66463cb694cacb8ad68227afcca5a76e255ebb07f472dbb2b55d69246c9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:41 np0005542249 podman[77740]: 2025-12-02 10:49:41.418079534 +0000 UTC m=+0.167284432 container init bf54373595c9139ddf049b0c563c1b2387f39e2fec7126735aeb8e9e758f294f (image=quay.io/ceph/ceph:v18, name=magical_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  2 05:49:41 np0005542249 podman[77740]: 2025-12-02 10:49:41.424349513 +0000 UTC m=+0.173554311 container start bf54373595c9139ddf049b0c563c1b2387f39e2fec7126735aeb8e9e758f294f (image=quay.io/ceph/ceph:v18, name=magical_merkle, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:49:41 np0005542249 podman[77740]: 2025-12-02 10:49:41.428116494 +0000 UTC m=+0.177321332 container attach bf54373595c9139ddf049b0c563c1b2387f39e2fec7126735aeb8e9e758f294f (image=quay.io/ceph/ceph:v18, name=magical_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  2 05:49:41 np0005542249 podman[77834]: 2025-12-02 10:49:41.747817586 +0000 UTC m=+0.072779910 container exec cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  2 05:49:41 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 05:49:41 np0005542249 ceph-mgr[75372]: [cephadm INFO root] Saving service crash spec with placement *
Dec  2 05:49:41 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Dec  2 05:49:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Dec  2 05:49:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:41 np0005542249 magical_merkle[77759]: Scheduled crash update...
Dec  2 05:49:41 np0005542249 systemd[1]: libpod-bf54373595c9139ddf049b0c563c1b2387f39e2fec7126735aeb8e9e758f294f.scope: Deactivated successfully.
Dec  2 05:49:41 np0005542249 podman[77740]: 2025-12-02 10:49:41.972951983 +0000 UTC m=+0.722156771 container died bf54373595c9139ddf049b0c563c1b2387f39e2fec7126735aeb8e9e758f294f (image=quay.io/ceph/ceph:v18, name=magical_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:49:41 np0005542249 systemd[1]: var-lib-containers-storage-overlay-231b66463cb694cacb8ad68227afcca5a76e255ebb07f472dbb2b55d69246c9e-merged.mount: Deactivated successfully.
Dec  2 05:49:42 np0005542249 podman[77740]: 2025-12-02 10:49:42.009920697 +0000 UTC m=+0.759125485 container remove bf54373595c9139ddf049b0c563c1b2387f39e2fec7126735aeb8e9e758f294f (image=quay.io/ceph/ceph:v18, name=magical_merkle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  2 05:49:42 np0005542249 systemd[1]: libpod-conmon-bf54373595c9139ddf049b0c563c1b2387f39e2fec7126735aeb8e9e758f294f.scope: Deactivated successfully.
Dec  2 05:49:42 np0005542249 podman[77834]: 2025-12-02 10:49:42.029802463 +0000 UTC m=+0.354764707 container exec_died cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  2 05:49:42 np0005542249 podman[77890]: 2025-12-02 10:49:42.048047994 +0000 UTC m=+0.020766730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:42 np0005542249 podman[77890]: 2025-12-02 10:49:42.359678217 +0000 UTC m=+0.332396953 container create 123c944bfb802c6fd42efd04e8cc9cddb4c4dba18565fdbd4bfcd3a3de80081b (image=quay.io/ceph/ceph:v18, name=eager_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  2 05:49:42 np0005542249 systemd[1]: Started libpod-conmon-123c944bfb802c6fd42efd04e8cc9cddb4c4dba18565fdbd4bfcd3a3de80081b.scope.
Dec  2 05:49:42 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:42 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c69b0b5d92e1c1d619496b98bdf85c51a6c7b61c3c7ad3683e6d3b054ca411ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:42 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c69b0b5d92e1c1d619496b98bdf85c51a6c7b61c3c7ad3683e6d3b054ca411ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:42 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c69b0b5d92e1c1d619496b98bdf85c51a6c7b61c3c7ad3683e6d3b054ca411ef/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:42 np0005542249 podman[77890]: 2025-12-02 10:49:42.473119669 +0000 UTC m=+0.445838445 container init 123c944bfb802c6fd42efd04e8cc9cddb4c4dba18565fdbd4bfcd3a3de80081b (image=quay.io/ceph/ceph:v18, name=eager_lumiere, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:49:42 np0005542249 podman[77890]: 2025-12-02 10:49:42.482870572 +0000 UTC m=+0.455589308 container start 123c944bfb802c6fd42efd04e8cc9cddb4c4dba18565fdbd4bfcd3a3de80081b (image=quay.io/ceph/ceph:v18, name=eager_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 05:49:42 np0005542249 podman[77890]: 2025-12-02 10:49:42.489420108 +0000 UTC m=+0.462138844 container attach 123c944bfb802c6fd42efd04e8cc9cddb4c4dba18565fdbd4bfcd3a3de80081b (image=quay.io/ceph/ceph:v18, name=eager_lumiere, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:49:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:49:42 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:42 np0005542249 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 78068 (sysctl)
Dec  2 05:49:42 np0005542249 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec  2 05:49:42 np0005542249 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec  2 05:49:42 np0005542249 ceph-mon[75081]: Saving service crash spec with placement *
Dec  2 05:49:42 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:42 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:42 np0005542249 ceph-mgr[75372]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  2 05:49:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Dec  2 05:49:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/964521568' entity='client.admin' 
Dec  2 05:49:43 np0005542249 systemd[1]: libpod-123c944bfb802c6fd42efd04e8cc9cddb4c4dba18565fdbd4bfcd3a3de80081b.scope: Deactivated successfully.
Dec  2 05:49:43 np0005542249 podman[77890]: 2025-12-02 10:49:43.055448817 +0000 UTC m=+1.028167563 container died 123c944bfb802c6fd42efd04e8cc9cddb4c4dba18565fdbd4bfcd3a3de80081b (image=quay.io/ceph/ceph:v18, name=eager_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  2 05:49:43 np0005542249 systemd[1]: var-lib-containers-storage-overlay-c69b0b5d92e1c1d619496b98bdf85c51a6c7b61c3c7ad3683e6d3b054ca411ef-merged.mount: Deactivated successfully.
Dec  2 05:49:43 np0005542249 podman[77890]: 2025-12-02 10:49:43.102239386 +0000 UTC m=+1.074958092 container remove 123c944bfb802c6fd42efd04e8cc9cddb4c4dba18565fdbd4bfcd3a3de80081b (image=quay.io/ceph/ceph:v18, name=eager_lumiere, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  2 05:49:43 np0005542249 systemd[1]: libpod-conmon-123c944bfb802c6fd42efd04e8cc9cddb4c4dba18565fdbd4bfcd3a3de80081b.scope: Deactivated successfully.
Dec  2 05:49:43 np0005542249 podman[78092]: 2025-12-02 10:49:43.16445865 +0000 UTC m=+0.040997174 container create 2cadba7391744424d55f8adec940839f2adc095b3c9db6725dc18627c4ef7b55 (image=quay.io/ceph/ceph:v18, name=amazing_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  2 05:49:43 np0005542249 systemd[1]: Started libpod-conmon-2cadba7391744424d55f8adec940839f2adc095b3c9db6725dc18627c4ef7b55.scope.
Dec  2 05:49:43 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:43 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e48fca78e28c654a250906125be08a7b737efe0a57af32ca7f0d0b9bd6548b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:43 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e48fca78e28c654a250906125be08a7b737efe0a57af32ca7f0d0b9bd6548b8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:43 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e48fca78e28c654a250906125be08a7b737efe0a57af32ca7f0d0b9bd6548b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:43 np0005542249 podman[78092]: 2025-12-02 10:49:43.238304307 +0000 UTC m=+0.114842841 container init 2cadba7391744424d55f8adec940839f2adc095b3c9db6725dc18627c4ef7b55 (image=quay.io/ceph/ceph:v18, name=amazing_aryabhata, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:49:43 np0005542249 podman[78092]: 2025-12-02 10:49:43.145444348 +0000 UTC m=+0.021982872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:43 np0005542249 podman[78092]: 2025-12-02 10:49:43.244948045 +0000 UTC m=+0.121486569 container start 2cadba7391744424d55f8adec940839f2adc095b3c9db6725dc18627c4ef7b55 (image=quay.io/ceph/ceph:v18, name=amazing_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  2 05:49:43 np0005542249 podman[78092]: 2025-12-02 10:49:43.249628642 +0000 UTC m=+0.126167226 container attach 2cadba7391744424d55f8adec940839f2adc095b3c9db6725dc18627c4ef7b55 (image=quay.io/ceph/ceph:v18, name=amazing_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  2 05:49:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:49:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:43 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 05:49:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Dec  2 05:49:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:43 np0005542249 systemd[1]: libpod-2cadba7391744424d55f8adec940839f2adc095b3c9db6725dc18627c4ef7b55.scope: Deactivated successfully.
Dec  2 05:49:43 np0005542249 podman[78092]: 2025-12-02 10:49:43.818213219 +0000 UTC m=+0.694751753 container died 2cadba7391744424d55f8adec940839f2adc095b3c9db6725dc18627c4ef7b55 (image=quay.io/ceph/ceph:v18, name=amazing_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:49:43 np0005542249 systemd[1]: var-lib-containers-storage-overlay-4e48fca78e28c654a250906125be08a7b737efe0a57af32ca7f0d0b9bd6548b8-merged.mount: Deactivated successfully.
Dec  2 05:49:43 np0005542249 podman[78092]: 2025-12-02 10:49:43.865249355 +0000 UTC m=+0.741787889 container remove 2cadba7391744424d55f8adec940839f2adc095b3c9db6725dc18627c4ef7b55 (image=quay.io/ceph/ceph:v18, name=amazing_aryabhata, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  2 05:49:43 np0005542249 systemd[1]: libpod-conmon-2cadba7391744424d55f8adec940839f2adc095b3c9db6725dc18627c4ef7b55.scope: Deactivated successfully.
Dec  2 05:49:43 np0005542249 podman[78334]: 2025-12-02 10:49:43.956366386 +0000 UTC m=+0.061646159 container create 0f54327b27a630d82bf0d4606cc32e428eec9269728139152da966eaedb52c0e (image=quay.io/ceph/ceph:v18, name=confident_pike, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:49:44 np0005542249 systemd[1]: Started libpod-conmon-0f54327b27a630d82bf0d4606cc32e428eec9269728139152da966eaedb52c0e.scope.
Dec  2 05:49:44 np0005542249 podman[78334]: 2025-12-02 10:49:43.933742377 +0000 UTC m=+0.039022190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:44 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:44 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3c81f00b265638acc671228fdca2e88fe5360351158a5ee2a676cba1fd232ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:44 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3c81f00b265638acc671228fdca2e88fe5360351158a5ee2a676cba1fd232ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:44 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3c81f00b265638acc671228fdca2e88fe5360351158a5ee2a676cba1fd232ac/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:44 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/964521568' entity='client.admin' 
Dec  2 05:49:44 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:44 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:44 np0005542249 podman[78334]: 2025-12-02 10:49:44.052494392 +0000 UTC m=+0.157774225 container init 0f54327b27a630d82bf0d4606cc32e428eec9269728139152da966eaedb52c0e (image=quay.io/ceph/ceph:v18, name=confident_pike, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:49:44 np0005542249 podman[78334]: 2025-12-02 10:49:44.059872111 +0000 UTC m=+0.165151884 container start 0f54327b27a630d82bf0d4606cc32e428eec9269728139152da966eaedb52c0e (image=quay.io/ceph/ceph:v18, name=confident_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  2 05:49:44 np0005542249 podman[78334]: 2025-12-02 10:49:44.063083247 +0000 UTC m=+0.168363050 container attach 0f54327b27a630d82bf0d4606cc32e428eec9269728139152da966eaedb52c0e (image=quay.io/ceph/ceph:v18, name=confident_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  2 05:49:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:49:44 np0005542249 podman[78436]: 2025-12-02 10:49:44.378200515 +0000 UTC m=+0.072778049 container create 8a698ba22bd35e25f8294d74ecc572e146f53c320cd35d345afa8d385ae5f238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:49:44 np0005542249 systemd[1]: Started libpod-conmon-8a698ba22bd35e25f8294d74ecc572e146f53c320cd35d345afa8d385ae5f238.scope.
Dec  2 05:49:44 np0005542249 podman[78436]: 2025-12-02 10:49:44.351543008 +0000 UTC m=+0.046120582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:49:44 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:44 np0005542249 podman[78436]: 2025-12-02 10:49:44.478306518 +0000 UTC m=+0.172884092 container init 8a698ba22bd35e25f8294d74ecc572e146f53c320cd35d345afa8d385ae5f238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_buck, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:49:44 np0005542249 podman[78436]: 2025-12-02 10:49:44.488493873 +0000 UTC m=+0.183071357 container start 8a698ba22bd35e25f8294d74ecc572e146f53c320cd35d345afa8d385ae5f238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:49:44 np0005542249 podman[78436]: 2025-12-02 10:49:44.492476679 +0000 UTC m=+0.187054193 container attach 8a698ba22bd35e25f8294d74ecc572e146f53c320cd35d345afa8d385ae5f238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_buck, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  2 05:49:44 np0005542249 sad_buck[78470]: 167 167
Dec  2 05:49:44 np0005542249 systemd[1]: libpod-8a698ba22bd35e25f8294d74ecc572e146f53c320cd35d345afa8d385ae5f238.scope: Deactivated successfully.
Dec  2 05:49:44 np0005542249 podman[78436]: 2025-12-02 10:49:44.495733898 +0000 UTC m=+0.190311422 container died 8a698ba22bd35e25f8294d74ecc572e146f53c320cd35d345afa8d385ae5f238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_buck, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  2 05:49:44 np0005542249 systemd[1]: var-lib-containers-storage-overlay-4a90ba2c6cb3f127e062794b1e6cce43d318ed44d318416c182fe1224fff7c02-merged.mount: Deactivated successfully.
Dec  2 05:49:44 np0005542249 podman[78436]: 2025-12-02 10:49:44.553584324 +0000 UTC m=+0.248161818 container remove 8a698ba22bd35e25f8294d74ecc572e146f53c320cd35d345afa8d385ae5f238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_buck, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  2 05:49:44 np0005542249 systemd[1]: libpod-conmon-8a698ba22bd35e25f8294d74ecc572e146f53c320cd35d345afa8d385ae5f238.scope: Deactivated successfully.
Dec  2 05:49:44 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 05:49:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec  2 05:49:44 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:44 np0005542249 ceph-mgr[75372]: [cephadm INFO root] Added label _admin to host compute-0
Dec  2 05:49:44 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Dec  2 05:49:44 np0005542249 confident_pike[78390]: Added label _admin to host compute-0
Dec  2 05:49:44 np0005542249 systemd[1]: libpod-0f54327b27a630d82bf0d4606cc32e428eec9269728139152da966eaedb52c0e.scope: Deactivated successfully.
Dec  2 05:49:44 np0005542249 podman[78334]: 2025-12-02 10:49:44.625420336 +0000 UTC m=+0.730700139 container died 0f54327b27a630d82bf0d4606cc32e428eec9269728139152da966eaedb52c0e (image=quay.io/ceph/ceph:v18, name=confident_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:49:44 np0005542249 systemd[1]: var-lib-containers-storage-overlay-f3c81f00b265638acc671228fdca2e88fe5360351158a5ee2a676cba1fd232ac-merged.mount: Deactivated successfully.
Dec  2 05:49:44 np0005542249 podman[78334]: 2025-12-02 10:49:44.680019385 +0000 UTC m=+0.785299148 container remove 0f54327b27a630d82bf0d4606cc32e428eec9269728139152da966eaedb52c0e (image=quay.io/ceph/ceph:v18, name=confident_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  2 05:49:44 np0005542249 systemd[1]: libpod-conmon-0f54327b27a630d82bf0d4606cc32e428eec9269728139152da966eaedb52c0e.scope: Deactivated successfully.
Dec  2 05:49:44 np0005542249 podman[78507]: 2025-12-02 10:49:44.775881125 +0000 UTC m=+0.066213762 container create 07c64c79a75fa153a34f7cb86e972b1f9d1c2124fe42c5faaa93af66b0d476bf (image=quay.io/ceph/ceph:v18, name=lucid_cerf, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:49:44 np0005542249 systemd[1]: Started libpod-conmon-07c64c79a75fa153a34f7cb86e972b1f9d1c2124fe42c5faaa93af66b0d476bf.scope.
Dec  2 05:49:44 np0005542249 podman[78507]: 2025-12-02 10:49:44.747350928 +0000 UTC m=+0.037683615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:44 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:44 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a63510e7cb363b9221fa763e04d1ceb011637976d6eb8893d6c7f62ed50e3a5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:44 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a63510e7cb363b9221fa763e04d1ceb011637976d6eb8893d6c7f62ed50e3a5c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:44 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a63510e7cb363b9221fa763e04d1ceb011637976d6eb8893d6c7f62ed50e3a5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:44 np0005542249 podman[78507]: 2025-12-02 10:49:44.874960781 +0000 UTC m=+0.165293428 container init 07c64c79a75fa153a34f7cb86e972b1f9d1c2124fe42c5faaa93af66b0d476bf (image=quay.io/ceph/ceph:v18, name=lucid_cerf, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  2 05:49:44 np0005542249 podman[78507]: 2025-12-02 10:49:44.886624734 +0000 UTC m=+0.176957361 container start 07c64c79a75fa153a34f7cb86e972b1f9d1c2124fe42c5faaa93af66b0d476bf (image=quay.io/ceph/ceph:v18, name=lucid_cerf, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Dec  2 05:49:44 np0005542249 podman[78507]: 2025-12-02 10:49:44.891110175 +0000 UTC m=+0.181442862 container attach 07c64c79a75fa153a34f7cb86e972b1f9d1c2124fe42c5faaa93af66b0d476bf (image=quay.io/ceph/ceph:v18, name=lucid_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:49:44 np0005542249 ceph-mgr[75372]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  2 05:49:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Dec  2 05:49:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/433685290' entity='client.admin' 
Dec  2 05:49:45 np0005542249 systemd[1]: libpod-07c64c79a75fa153a34f7cb86e972b1f9d1c2124fe42c5faaa93af66b0d476bf.scope: Deactivated successfully.
Dec  2 05:49:45 np0005542249 podman[78550]: 2025-12-02 10:49:45.49494263 +0000 UTC m=+0.037845989 container died 07c64c79a75fa153a34f7cb86e972b1f9d1c2124fe42c5faaa93af66b0d476bf (image=quay.io/ceph/ceph:v18, name=lucid_cerf, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:49:45 np0005542249 systemd[1]: var-lib-containers-storage-overlay-a63510e7cb363b9221fa763e04d1ceb011637976d6eb8893d6c7f62ed50e3a5c-merged.mount: Deactivated successfully.
Dec  2 05:49:45 np0005542249 podman[78550]: 2025-12-02 10:49:45.548877362 +0000 UTC m=+0.091780671 container remove 07c64c79a75fa153a34f7cb86e972b1f9d1c2124fe42c5faaa93af66b0d476bf (image=quay.io/ceph/ceph:v18, name=lucid_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Dec  2 05:49:45 np0005542249 systemd[1]: libpod-conmon-07c64c79a75fa153a34f7cb86e972b1f9d1c2124fe42c5faaa93af66b0d476bf.scope: Deactivated successfully.
Dec  2 05:49:45 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:45 np0005542249 ceph-mon[75081]: Added label _admin to host compute-0
Dec  2 05:49:45 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/433685290' entity='client.admin' 
Dec  2 05:49:45 np0005542249 podman[78564]: 2025-12-02 10:49:45.654148924 +0000 UTC m=+0.068019400 container create d0005040216243627f4501bae593ca2a0f05f15ea14443f6d67ecd52d8f27177 (image=quay.io/ceph/ceph:v18, name=hungry_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:49:45 np0005542249 systemd[1]: Started libpod-conmon-d0005040216243627f4501bae593ca2a0f05f15ea14443f6d67ecd52d8f27177.scope.
Dec  2 05:49:45 np0005542249 podman[78564]: 2025-12-02 10:49:45.626657765 +0000 UTC m=+0.040528251 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:45 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:45 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f155dd2d391ad650b5d9a70e49d94980b7d00cd8d1304b58b554d24a7e7eebc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:45 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f155dd2d391ad650b5d9a70e49d94980b7d00cd8d1304b58b554d24a7e7eebc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:45 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f155dd2d391ad650b5d9a70e49d94980b7d00cd8d1304b58b554d24a7e7eebc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:45 np0005542249 podman[78564]: 2025-12-02 10:49:45.777065051 +0000 UTC m=+0.190935527 container init d0005040216243627f4501bae593ca2a0f05f15ea14443f6d67ecd52d8f27177 (image=quay.io/ceph/ceph:v18, name=hungry_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  2 05:49:45 np0005542249 podman[78564]: 2025-12-02 10:49:45.78188626 +0000 UTC m=+0.195756746 container start d0005040216243627f4501bae593ca2a0f05f15ea14443f6d67ecd52d8f27177 (image=quay.io/ceph/ceph:v18, name=hungry_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:49:45 np0005542249 podman[78564]: 2025-12-02 10:49:45.78669 +0000 UTC m=+0.200560446 container attach d0005040216243627f4501bae593ca2a0f05f15ea14443f6d67ecd52d8f27177 (image=quay.io/ceph/ceph:v18, name=hungry_sinoussi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:49:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Dec  2 05:49:46 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/713415408' entity='client.admin' 
Dec  2 05:49:46 np0005542249 hungry_sinoussi[78580]: set mgr/dashboard/cluster/status
Dec  2 05:49:46 np0005542249 systemd[1]: libpod-d0005040216243627f4501bae593ca2a0f05f15ea14443f6d67ecd52d8f27177.scope: Deactivated successfully.
Dec  2 05:49:46 np0005542249 podman[78564]: 2025-12-02 10:49:46.466273464 +0000 UTC m=+0.880143940 container died d0005040216243627f4501bae593ca2a0f05f15ea14443f6d67ecd52d8f27177 (image=quay.io/ceph/ceph:v18, name=hungry_sinoussi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:49:46 np0005542249 systemd[1]: var-lib-containers-storage-overlay-1f155dd2d391ad650b5d9a70e49d94980b7d00cd8d1304b58b554d24a7e7eebc-merged.mount: Deactivated successfully.
Dec  2 05:49:46 np0005542249 podman[78564]: 2025-12-02 10:49:46.523215086 +0000 UTC m=+0.937085562 container remove d0005040216243627f4501bae593ca2a0f05f15ea14443f6d67ecd52d8f27177 (image=quay.io/ceph/ceph:v18, name=hungry_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  2 05:49:46 np0005542249 systemd[1]: libpod-conmon-d0005040216243627f4501bae593ca2a0f05f15ea14443f6d67ecd52d8f27177.scope: Deactivated successfully.
Dec  2 05:49:46 np0005542249 podman[78625]: 2025-12-02 10:49:46.811722038 +0000 UTC m=+0.068109133 container create e52cf2e79e5f4242353dec0e17b3955cf9cb7d8626465537ee01389e7f6ebfc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_leavitt, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:49:46 np0005542249 systemd[1]: Started libpod-conmon-e52cf2e79e5f4242353dec0e17b3955cf9cb7d8626465537ee01389e7f6ebfc7.scope.
Dec  2 05:49:46 np0005542249 podman[78625]: 2025-12-02 10:49:46.783793607 +0000 UTC m=+0.040180712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:49:46 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:46 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8edef4afc1ad095619ec4714a6a9aa67d322e8d2eb928b43e45b0c3aca095c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:46 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8edef4afc1ad095619ec4714a6a9aa67d322e8d2eb928b43e45b0c3aca095c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:46 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8edef4afc1ad095619ec4714a6a9aa67d322e8d2eb928b43e45b0c3aca095c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:46 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8edef4afc1ad095619ec4714a6a9aa67d322e8d2eb928b43e45b0c3aca095c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:46 np0005542249 podman[78625]: 2025-12-02 10:49:46.926691862 +0000 UTC m=+0.183078937 container init e52cf2e79e5f4242353dec0e17b3955cf9cb7d8626465537ee01389e7f6ebfc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:49:46 np0005542249 podman[78625]: 2025-12-02 10:49:46.938641794 +0000 UTC m=+0.195028889 container start e52cf2e79e5f4242353dec0e17b3955cf9cb7d8626465537ee01389e7f6ebfc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_leavitt, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  2 05:49:46 np0005542249 podman[78625]: 2025-12-02 10:49:46.946041092 +0000 UTC m=+0.202428157 container attach e52cf2e79e5f4242353dec0e17b3955cf9cb7d8626465537ee01389e7f6ebfc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_leavitt, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  2 05:49:46 np0005542249 ceph-mgr[75372]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Dec  2 05:49:46 np0005542249 ceph-mon[75081]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec  2 05:49:46 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  2 05:49:47 np0005542249 python3[78669]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:49:47 np0005542249 podman[78672]: 2025-12-02 10:49:47.124817393 +0000 UTC m=+0.091649668 container create 3f19f1b217b8e0d8c3ec559f701e08def029005d38bb53846876ccef3601ea4f (image=quay.io/ceph/ceph:v18, name=laughing_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Dec  2 05:49:47 np0005542249 systemd[1]: Started libpod-conmon-3f19f1b217b8e0d8c3ec559f701e08def029005d38bb53846876ccef3601ea4f.scope.
Dec  2 05:49:47 np0005542249 podman[78672]: 2025-12-02 10:49:47.068402864 +0000 UTC m=+0.035235149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:47 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:47 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27e1c5ea0f07cb5d735b21642df239e522a37ca95e3ad477ccf70b3102f96339/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:47 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27e1c5ea0f07cb5d735b21642df239e522a37ca95e3ad477ccf70b3102f96339/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:47 np0005542249 podman[78672]: 2025-12-02 10:49:47.22285323 +0000 UTC m=+0.189685515 container init 3f19f1b217b8e0d8c3ec559f701e08def029005d38bb53846876ccef3601ea4f (image=quay.io/ceph/ceph:v18, name=laughing_tu, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:49:47 np0005542249 podman[78672]: 2025-12-02 10:49:47.232706815 +0000 UTC m=+0.199539080 container start 3f19f1b217b8e0d8c3ec559f701e08def029005d38bb53846876ccef3601ea4f (image=quay.io/ceph/ceph:v18, name=laughing_tu, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:49:47 np0005542249 podman[78672]: 2025-12-02 10:49:47.267917332 +0000 UTC m=+0.234749597 container attach 3f19f1b217b8e0d8c3ec559f701e08def029005d38bb53846876ccef3601ea4f (image=quay.io/ceph/ceph:v18, name=laughing_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:49:47 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/713415408' entity='client.admin' 
Dec  2 05:49:47 np0005542249 ceph-mon[75081]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec  2 05:49:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Dec  2 05:49:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3042304631' entity='client.admin' 
Dec  2 05:49:47 np0005542249 systemd[1]: libpod-3f19f1b217b8e0d8c3ec559f701e08def029005d38bb53846876ccef3601ea4f.scope: Deactivated successfully.
Dec  2 05:49:47 np0005542249 podman[78672]: 2025-12-02 10:49:47.790721348 +0000 UTC m=+0.757553613 container died 3f19f1b217b8e0d8c3ec559f701e08def029005d38bb53846876ccef3601ea4f (image=quay.io/ceph/ceph:v18, name=laughing_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec  2 05:49:47 np0005542249 systemd[1]: var-lib-containers-storage-overlay-27e1c5ea0f07cb5d735b21642df239e522a37ca95e3ad477ccf70b3102f96339-merged.mount: Deactivated successfully.
Dec  2 05:49:47 np0005542249 podman[78672]: 2025-12-02 10:49:47.843613991 +0000 UTC m=+0.810446266 container remove 3f19f1b217b8e0d8c3ec559f701e08def029005d38bb53846876ccef3601ea4f (image=quay.io/ceph/ceph:v18, name=laughing_tu, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:49:47 np0005542249 systemd[1]: libpod-conmon-3f19f1b217b8e0d8c3ec559f701e08def029005d38bb53846876ccef3601ea4f.scope: Deactivated successfully.
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]: [
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:    {
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:        "available": false,
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:        "ceph_device": false,
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:        "lsm_data": {},
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:        "lvs": [],
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:        "path": "/dev/sr0",
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:        "rejected_reasons": [
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:            "Has a FileSystem",
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:            "Insufficient space (<5GB)"
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:        ],
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:        "sys_api": {
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:            "actuators": null,
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:            "device_nodes": "sr0",
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:            "devname": "sr0",
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:            "human_readable_size": "482.00 KB",
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:            "id_bus": "ata",
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:            "model": "QEMU DVD-ROM",
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:            "nr_requests": "2",
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:            "parent": "/dev/sr0",
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:            "partitions": {},
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:            "path": "/dev/sr0",
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:            "removable": "1",
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:            "rev": "2.5+",
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:            "ro": "0",
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:            "rotational": "1",
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:            "sas_address": "",
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:            "sas_device_handle": "",
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:            "scheduler_mode": "mq-deadline",
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:            "sectors": 0,
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:            "sectorsize": "2048",
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:            "size": 493568.0,
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:            "support_discard": "2048",
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:            "type": "disk",
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:            "vendor": "QEMU"
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:        }
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]:    }
Dec  2 05:49:48 np0005542249 gifted_leavitt[78665]: ]
Dec  2 05:49:48 np0005542249 systemd[1]: libpod-e52cf2e79e5f4242353dec0e17b3955cf9cb7d8626465537ee01389e7f6ebfc7.scope: Deactivated successfully.
Dec  2 05:49:48 np0005542249 podman[78625]: 2025-12-02 10:49:48.447951421 +0000 UTC m=+1.704338476 container died e52cf2e79e5f4242353dec0e17b3955cf9cb7d8626465537ee01389e7f6ebfc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:49:48 np0005542249 systemd[1]: libpod-e52cf2e79e5f4242353dec0e17b3955cf9cb7d8626465537ee01389e7f6ebfc7.scope: Consumed 1.576s CPU time.
Dec  2 05:49:48 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/3042304631' entity='client.admin' 
Dec  2 05:49:48 np0005542249 systemd[1]: var-lib-containers-storage-overlay-be8edef4afc1ad095619ec4714a6a9aa67d322e8d2eb928b43e45b0c3aca095c-merged.mount: Deactivated successfully.
Dec  2 05:49:48 np0005542249 podman[78625]: 2025-12-02 10:49:48.495282544 +0000 UTC m=+1.751669599 container remove e52cf2e79e5f4242353dec0e17b3955cf9cb7d8626465537ee01389e7f6ebfc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_leavitt, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  2 05:49:48 np0005542249 systemd[1]: libpod-conmon-e52cf2e79e5f4242353dec0e17b3955cf9cb7d8626465537ee01389e7f6ebfc7.scope: Deactivated successfully.
Dec  2 05:49:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:49:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:49:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:49:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:49:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec  2 05:49:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  2 05:49:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:49:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:49:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 05:49:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:49:48 np0005542249 ceph-mgr[75372]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  2 05:49:48 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  2 05:49:48 np0005542249 ansible-async_wrapper.py[80821]: Invoked with j691517161277 30 /home/zuul/.ansible/tmp/ansible-tmp-1764672588.2272723-36553-133529644835267/AnsiballZ_command.py _
Dec  2 05:49:48 np0005542249 ansible-async_wrapper.py[80883]: Starting module and watcher
Dec  2 05:49:48 np0005542249 ansible-async_wrapper.py[80883]: Start watching 80888 (30)
Dec  2 05:49:48 np0005542249 ansible-async_wrapper.py[80888]: Start module (80888)
Dec  2 05:49:48 np0005542249 ansible-async_wrapper.py[80821]: Return async_wrapper task started.
Dec  2 05:49:48 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  2 05:49:49 np0005542249 python3[80893]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:49:49 np0005542249 podman[80946]: 2025-12-02 10:49:49.120514436 +0000 UTC m=+0.053687445 container create d010b5050b36cf5978e57d0202add05184b40f5ad4d104d335acc9230652b25c (image=quay.io/ceph/ceph:v18, name=stupefied_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:49:49 np0005542249 systemd[1]: Started libpod-conmon-d010b5050b36cf5978e57d0202add05184b40f5ad4d104d335acc9230652b25c.scope.
Dec  2 05:49:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:49:49 np0005542249 podman[80946]: 2025-12-02 10:49:49.09910639 +0000 UTC m=+0.032279439 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:49 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:49 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f70364973f4741c6ee7201a1e02cf6c61a29640468ae8ea11c34a34a2eb3890/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:49 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f70364973f4741c6ee7201a1e02cf6c61a29640468ae8ea11c34a34a2eb3890/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:49 np0005542249 podman[80946]: 2025-12-02 10:49:49.216231791 +0000 UTC m=+0.149404870 container init d010b5050b36cf5978e57d0202add05184b40f5ad4d104d335acc9230652b25c (image=quay.io/ceph/ceph:v18, name=stupefied_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  2 05:49:49 np0005542249 podman[80946]: 2025-12-02 10:49:49.22588273 +0000 UTC m=+0.159055779 container start d010b5050b36cf5978e57d0202add05184b40f5ad4d104d335acc9230652b25c (image=quay.io/ceph/ceph:v18, name=stupefied_bartik, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:49:49 np0005542249 podman[80946]: 2025-12-02 10:49:49.231265005 +0000 UTC m=+0.164438054 container attach d010b5050b36cf5978e57d0202add05184b40f5ad4d104d335acc9230652b25c (image=quay.io/ceph/ceph:v18, name=stupefied_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Dec  2 05:49:49 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:49 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:49 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:49 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:49 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  2 05:49:49 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:49:49 np0005542249 ceph-mon[75081]: Updating compute-0:/etc/ceph/ceph.conf
Dec  2 05:49:49 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  2 05:49:49 np0005542249 stupefied_bartik[81008]: 
Dec  2 05:49:49 np0005542249 stupefied_bartik[81008]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  2 05:49:49 np0005542249 systemd[1]: libpod-d010b5050b36cf5978e57d0202add05184b40f5ad4d104d335acc9230652b25c.scope: Deactivated successfully.
Dec  2 05:49:49 np0005542249 podman[80946]: 2025-12-02 10:49:49.801693332 +0000 UTC m=+0.734866351 container died d010b5050b36cf5978e57d0202add05184b40f5ad4d104d335acc9230652b25c (image=quay.io/ceph/ceph:v18, name=stupefied_bartik, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:49:49 np0005542249 ceph-mgr[75372]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/95bc4eaa-1a14-59bf-acf2-4b3da055547d/config/ceph.conf
Dec  2 05:49:49 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/95bc4eaa-1a14-59bf-acf2-4b3da055547d/config/ceph.conf
Dec  2 05:49:49 np0005542249 systemd[1]: var-lib-containers-storage-overlay-0f70364973f4741c6ee7201a1e02cf6c61a29640468ae8ea11c34a34a2eb3890-merged.mount: Deactivated successfully.
Dec  2 05:49:49 np0005542249 podman[80946]: 2025-12-02 10:49:49.849951811 +0000 UTC m=+0.783124820 container remove d010b5050b36cf5978e57d0202add05184b40f5ad4d104d335acc9230652b25c (image=quay.io/ceph/ceph:v18, name=stupefied_bartik, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  2 05:49:49 np0005542249 systemd[1]: libpod-conmon-d010b5050b36cf5978e57d0202add05184b40f5ad4d104d335acc9230652b25c.scope: Deactivated successfully.
Dec  2 05:49:49 np0005542249 ansible-async_wrapper.py[80888]: Module complete (80888)
Dec  2 05:49:50 np0005542249 python3[81447]: ansible-ansible.legacy.async_status Invoked with jid=j691517161277.80821 mode=status _async_dir=/root/.ansible_async
Dec  2 05:49:50 np0005542249 python3[81626]: ansible-ansible.legacy.async_status Invoked with jid=j691517161277.80821 mode=cleanup _async_dir=/root/.ansible_async
Dec  2 05:49:50 np0005542249 ceph-mgr[75372]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  2 05:49:50 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  2 05:49:50 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  2 05:49:51 np0005542249 python3[81818]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  2 05:49:51 np0005542249 python3[82036]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:49:51 np0005542249 ceph-mon[75081]: Updating compute-0:/var/lib/ceph/95bc4eaa-1a14-59bf-acf2-4b3da055547d/config/ceph.conf
Dec  2 05:49:51 np0005542249 podman[82081]: 2025-12-02 10:49:51.600830518 +0000 UTC m=+0.044206590 container create 3fb9b1180ebec8bd9db4e4bbf34bebb9f868fd6cafca97bf55febaa4a5018cc1 (image=quay.io/ceph/ceph:v18, name=busy_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:49:51 np0005542249 systemd[1]: Started libpod-conmon-3fb9b1180ebec8bd9db4e4bbf34bebb9f868fd6cafca97bf55febaa4a5018cc1.scope.
Dec  2 05:49:51 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:51 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8772f0feb447f203757559040b8a4591a7616364cc8ce8308448944c9831b30/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:51 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8772f0feb447f203757559040b8a4591a7616364cc8ce8308448944c9831b30/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:51 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8772f0feb447f203757559040b8a4591a7616364cc8ce8308448944c9831b30/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:51 np0005542249 podman[82081]: 2025-12-02 10:49:51.6797099 +0000 UTC m=+0.123086002 container init 3fb9b1180ebec8bd9db4e4bbf34bebb9f868fd6cafca97bf55febaa4a5018cc1 (image=quay.io/ceph/ceph:v18, name=busy_wescoff, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:49:51 np0005542249 podman[82081]: 2025-12-02 10:49:51.584166119 +0000 UTC m=+0.027542231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:51 np0005542249 podman[82081]: 2025-12-02 10:49:51.6893524 +0000 UTC m=+0.132728482 container start 3fb9b1180ebec8bd9db4e4bbf34bebb9f868fd6cafca97bf55febaa4a5018cc1 (image=quay.io/ceph/ceph:v18, name=busy_wescoff, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:49:51 np0005542249 podman[82081]: 2025-12-02 10:49:51.692495324 +0000 UTC m=+0.135871436 container attach 3fb9b1180ebec8bd9db4e4bbf34bebb9f868fd6cafca97bf55febaa4a5018cc1 (image=quay.io/ceph/ceph:v18, name=busy_wescoff, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:49:51 np0005542249 ceph-mgr[75372]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/95bc4eaa-1a14-59bf-acf2-4b3da055547d/config/ceph.client.admin.keyring
Dec  2 05:49:51 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/95bc4eaa-1a14-59bf-acf2-4b3da055547d/config/ceph.client.admin.keyring
Dec  2 05:49:52 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  2 05:49:52 np0005542249 busy_wescoff[82138]: 
Dec  2 05:49:52 np0005542249 busy_wescoff[82138]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  2 05:49:52 np0005542249 systemd[1]: libpod-3fb9b1180ebec8bd9db4e4bbf34bebb9f868fd6cafca97bf55febaa4a5018cc1.scope: Deactivated successfully.
Dec  2 05:49:52 np0005542249 podman[82081]: 2025-12-02 10:49:52.265017498 +0000 UTC m=+0.708393570 container died 3fb9b1180ebec8bd9db4e4bbf34bebb9f868fd6cafca97bf55febaa4a5018cc1 (image=quay.io/ceph/ceph:v18, name=busy_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  2 05:49:52 np0005542249 systemd[1]: var-lib-containers-storage-overlay-b8772f0feb447f203757559040b8a4591a7616364cc8ce8308448944c9831b30-merged.mount: Deactivated successfully.
Dec  2 05:49:52 np0005542249 podman[82081]: 2025-12-02 10:49:52.315913127 +0000 UTC m=+0.759289209 container remove 3fb9b1180ebec8bd9db4e4bbf34bebb9f868fd6cafca97bf55febaa4a5018cc1 (image=quay.io/ceph/ceph:v18, name=busy_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:49:52 np0005542249 systemd[1]: libpod-conmon-3fb9b1180ebec8bd9db4e4bbf34bebb9f868fd6cafca97bf55febaa4a5018cc1.scope: Deactivated successfully.
Dec  2 05:49:52 np0005542249 ceph-mon[75081]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  2 05:49:52 np0005542249 python3[82618]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:49:52 np0005542249 podman[82692]: 2025-12-02 10:49:52.824328436 +0000 UTC m=+0.042117014 container create a37bd663386fc2ce0c251d4afe199405953b68826667598b8d9af2e41665204f (image=quay.io/ceph/ceph:v18, name=goofy_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  2 05:49:52 np0005542249 systemd[1]: Started libpod-conmon-a37bd663386fc2ce0c251d4afe199405953b68826667598b8d9af2e41665204f.scope.
Dec  2 05:49:52 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a51a239a9b6275297c1e84a6076143d547e5a8ca51d36c3cb828f88a8276e5b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a51a239a9b6275297c1e84a6076143d547e5a8ca51d36c3cb828f88a8276e5b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a51a239a9b6275297c1e84a6076143d547e5a8ca51d36c3cb828f88a8276e5b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:52 np0005542249 podman[82692]: 2025-12-02 10:49:52.895444169 +0000 UTC m=+0.113232797 container init a37bd663386fc2ce0c251d4afe199405953b68826667598b8d9af2e41665204f (image=quay.io/ceph/ceph:v18, name=goofy_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Dec  2 05:49:52 np0005542249 podman[82692]: 2025-12-02 10:49:52.901950435 +0000 UTC m=+0.119739033 container start a37bd663386fc2ce0c251d4afe199405953b68826667598b8d9af2e41665204f (image=quay.io/ceph/ceph:v18, name=goofy_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:49:52 np0005542249 podman[82692]: 2025-12-02 10:49:52.905260483 +0000 UTC m=+0.123049081 container attach a37bd663386fc2ce0c251d4afe199405953b68826667598b8d9af2e41665204f (image=quay.io/ceph/ceph:v18, name=goofy_zhukovsky, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  2 05:49:52 np0005542249 podman[82692]: 2025-12-02 10:49:52.809692452 +0000 UTC m=+0.027481050 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:49:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:49:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 05:49:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:52 np0005542249 ceph-mgr[75372]: [progress INFO root] update: starting ev b589106f-3349-4715-95b4-95c301250995 (Updating crash deployment (+1 -> 1))
Dec  2 05:49:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Dec  2 05:49:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  2 05:49:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  2 05:49:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:49:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:49:52 np0005542249 ceph-mgr[75372]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Dec  2 05:49:52 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Dec  2 05:49:52 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  2 05:49:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Dec  2 05:49:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1494692853' entity='client.admin' 
Dec  2 05:49:53 np0005542249 systemd[1]: libpod-a37bd663386fc2ce0c251d4afe199405953b68826667598b8d9af2e41665204f.scope: Deactivated successfully.
Dec  2 05:49:53 np0005542249 podman[82692]: 2025-12-02 10:49:53.449958778 +0000 UTC m=+0.667747376 container died a37bd663386fc2ce0c251d4afe199405953b68826667598b8d9af2e41665204f (image=quay.io/ceph/ceph:v18, name=goofy_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  2 05:49:53 np0005542249 systemd[1]: var-lib-containers-storage-overlay-5a51a239a9b6275297c1e84a6076143d547e5a8ca51d36c3cb828f88a8276e5b-merged.mount: Deactivated successfully.
Dec  2 05:49:53 np0005542249 podman[82927]: 2025-12-02 10:49:53.496363667 +0000 UTC m=+0.065382851 container create ba3c3bb52cfbec862fdcce6c96e98a29864a1a4e393bab83f7f1ef319e81d7ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 05:49:53 np0005542249 podman[82692]: 2025-12-02 10:49:53.510509777 +0000 UTC m=+0.728298375 container remove a37bd663386fc2ce0c251d4afe199405953b68826667598b8d9af2e41665204f (image=quay.io/ceph/ceph:v18, name=goofy_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:49:53 np0005542249 systemd[1]: libpod-conmon-a37bd663386fc2ce0c251d4afe199405953b68826667598b8d9af2e41665204f.scope: Deactivated successfully.
Dec  2 05:49:53 np0005542249 systemd[1]: Started libpod-conmon-ba3c3bb52cfbec862fdcce6c96e98a29864a1a4e393bab83f7f1ef319e81d7ea.scope.
Dec  2 05:49:53 np0005542249 podman[82927]: 2025-12-02 10:49:53.453241426 +0000 UTC m=+0.022260650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:49:53 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:53 np0005542249 podman[82927]: 2025-12-02 10:49:53.572204367 +0000 UTC m=+0.141223591 container init ba3c3bb52cfbec862fdcce6c96e98a29864a1a4e393bab83f7f1ef319e81d7ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  2 05:49:53 np0005542249 podman[82927]: 2025-12-02 10:49:53.576916304 +0000 UTC m=+0.145935488 container start ba3c3bb52cfbec862fdcce6c96e98a29864a1a4e393bab83f7f1ef319e81d7ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dhawan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:49:53 np0005542249 dazzling_dhawan[82959]: 167 167
Dec  2 05:49:53 np0005542249 systemd[1]: libpod-ba3c3bb52cfbec862fdcce6c96e98a29864a1a4e393bab83f7f1ef319e81d7ea.scope: Deactivated successfully.
Dec  2 05:49:53 np0005542249 podman[82927]: 2025-12-02 10:49:53.579749451 +0000 UTC m=+0.148768645 container attach ba3c3bb52cfbec862fdcce6c96e98a29864a1a4e393bab83f7f1ef319e81d7ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dhawan, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  2 05:49:53 np0005542249 podman[82927]: 2025-12-02 10:49:53.580886151 +0000 UTC m=+0.149905335 container died ba3c3bb52cfbec862fdcce6c96e98a29864a1a4e393bab83f7f1ef319e81d7ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:49:53 np0005542249 ceph-mon[75081]: Updating compute-0:/var/lib/ceph/95bc4eaa-1a14-59bf-acf2-4b3da055547d/config/ceph.client.admin.keyring
Dec  2 05:49:53 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:53 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:53 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:53 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  2 05:49:53 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  2 05:49:53 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/1494692853' entity='client.admin' 
Dec  2 05:49:53 np0005542249 systemd[1]: var-lib-containers-storage-overlay-f9bd2d1c4fc2408d0022b11a613e42b06227b878c05bbce676bc84a30f7074a9-merged.mount: Deactivated successfully.
Dec  2 05:49:53 np0005542249 podman[82927]: 2025-12-02 10:49:53.61393822 +0000 UTC m=+0.182957404 container remove ba3c3bb52cfbec862fdcce6c96e98a29864a1a4e393bab83f7f1ef319e81d7ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dhawan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  2 05:49:53 np0005542249 systemd[1]: libpod-conmon-ba3c3bb52cfbec862fdcce6c96e98a29864a1a4e393bab83f7f1ef319e81d7ea.scope: Deactivated successfully.
Dec  2 05:49:53 np0005542249 systemd[1]: Reloading.
Dec  2 05:49:53 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:49:53 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:49:53 np0005542249 ansible-async_wrapper.py[80883]: Done in kid B.
Dec  2 05:49:53 np0005542249 systemd[1]: Reloading.
Dec  2 05:49:54 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:49:54 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:49:54 np0005542249 python3[83039]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:49:54 np0005542249 podman[83077]: 2025-12-02 10:49:54.10418585 +0000 UTC m=+0.041955240 container create c226e979dcdd0d73450acb8c2ae0e3d60d62ed81ce6a9e34705d96cc27f3b7b3 (image=quay.io/ceph/ceph:v18, name=gifted_shannon, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  2 05:49:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:49:54 np0005542249 podman[83077]: 2025-12-02 10:49:54.084937133 +0000 UTC m=+0.022706543 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:54 np0005542249 systemd[1]: Started libpod-conmon-c226e979dcdd0d73450acb8c2ae0e3d60d62ed81ce6a9e34705d96cc27f3b7b3.scope.
Dec  2 05:49:54 np0005542249 systemd[1]: Starting Ceph crash.compute-0 for 95bc4eaa-1a14-59bf-acf2-4b3da055547d...
Dec  2 05:49:54 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:54 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4292ebd12663ce18b5479757a0597d32a02c107d0e5a38ac1a740d3393e00bfe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:54 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4292ebd12663ce18b5479757a0597d32a02c107d0e5a38ac1a740d3393e00bfe/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:54 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4292ebd12663ce18b5479757a0597d32a02c107d0e5a38ac1a740d3393e00bfe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:54 np0005542249 podman[83077]: 2025-12-02 10:49:54.265417597 +0000 UTC m=+0.203186977 container init c226e979dcdd0d73450acb8c2ae0e3d60d62ed81ce6a9e34705d96cc27f3b7b3 (image=quay.io/ceph/ceph:v18, name=gifted_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:49:54 np0005542249 podman[83077]: 2025-12-02 10:49:54.271695027 +0000 UTC m=+0.209464387 container start c226e979dcdd0d73450acb8c2ae0e3d60d62ed81ce6a9e34705d96cc27f3b7b3 (image=quay.io/ceph/ceph:v18, name=gifted_shannon, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:49:54 np0005542249 podman[83077]: 2025-12-02 10:49:54.274690317 +0000 UTC m=+0.212459697 container attach c226e979dcdd0d73450acb8c2ae0e3d60d62ed81ce6a9e34705d96cc27f3b7b3 (image=quay.io/ceph/ceph:v18, name=gifted_shannon, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  2 05:49:54 np0005542249 podman[83145]: 2025-12-02 10:49:54.461912185 +0000 UTC m=+0.039751891 container create 3db2848945635a23ef05a5bb5103b9327936169af4da62c76a6e841d53c77f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-crash-compute-0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:49:54 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/061e98d451dac25599ee5792e210f3d07b02fa3c960c209c5b4596ee9b5b250d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:54 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/061e98d451dac25599ee5792e210f3d07b02fa3c960c209c5b4596ee9b5b250d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:54 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/061e98d451dac25599ee5792e210f3d07b02fa3c960c209c5b4596ee9b5b250d/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:54 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/061e98d451dac25599ee5792e210f3d07b02fa3c960c209c5b4596ee9b5b250d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:54 np0005542249 podman[83145]: 2025-12-02 10:49:54.518779664 +0000 UTC m=+0.096619390 container init 3db2848945635a23ef05a5bb5103b9327936169af4da62c76a6e841d53c77f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-crash-compute-0, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:49:54 np0005542249 podman[83145]: 2025-12-02 10:49:54.523310647 +0000 UTC m=+0.101150353 container start 3db2848945635a23ef05a5bb5103b9327936169af4da62c76a6e841d53c77f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-crash-compute-0, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  2 05:49:54 np0005542249 bash[83145]: 3db2848945635a23ef05a5bb5103b9327936169af4da62c76a6e841d53c77f4b
Dec  2 05:49:54 np0005542249 podman[83145]: 2025-12-02 10:49:54.444457695 +0000 UTC m=+0.022297411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:49:54 np0005542249 systemd[1]: Started Ceph crash.compute-0 for 95bc4eaa-1a14-59bf-acf2-4b3da055547d.
Dec  2 05:49:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:49:54 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:49:54 np0005542249 ceph-mon[75081]: Deploying daemon crash.compute-0 on compute-0
Dec  2 05:49:54 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:54 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Dec  2 05:49:54 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:54 np0005542249 ceph-mgr[75372]: [progress INFO root] complete: finished ev b589106f-3349-4715-95b4-95c301250995 (Updating crash deployment (+1 -> 1))
Dec  2 05:49:54 np0005542249 ceph-mgr[75372]: [progress INFO root] Completed event b589106f-3349-4715-95b4-95c301250995 (Updating crash deployment (+1 -> 1)) in 2 seconds
Dec  2 05:49:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Dec  2 05:49:54 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:54 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 83d657cd-8c4a-4346-84ee-e324158a3765 does not exist
Dec  2 05:49:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec  2 05:49:54 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:54 np0005542249 ceph-mgr[75372]: [progress INFO root] update: starting ev 023ac71e-4abd-4898-af54-b25292c44d68 (Updating mgr deployment (+1 -> 2))
Dec  2 05:49:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.jktmqd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Dec  2 05:49:54 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jktmqd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  2 05:49:54 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jktmqd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  2 05:49:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  2 05:49:54 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  2 05:49:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:49:54 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:49:54 np0005542249 ceph-mgr[75372]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.jktmqd on compute-0
Dec  2 05:49:54 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.jktmqd on compute-0
Dec  2 05:49:54 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-crash-compute-0[83160]: INFO:ceph-crash:pinging cluster to exercise our key
Dec  2 05:49:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Dec  2 05:49:54 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/980840862' entity='client.admin' 
Dec  2 05:49:54 np0005542249 systemd[1]: libpod-c226e979dcdd0d73450acb8c2ae0e3d60d62ed81ce6a9e34705d96cc27f3b7b3.scope: Deactivated successfully.
Dec  2 05:49:54 np0005542249 podman[83077]: 2025-12-02 10:49:54.865629076 +0000 UTC m=+0.803398456 container died c226e979dcdd0d73450acb8c2ae0e3d60d62ed81ce6a9e34705d96cc27f3b7b3 (image=quay.io/ceph/ceph:v18, name=gifted_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:49:54 np0005542249 systemd[1]: var-lib-containers-storage-overlay-4292ebd12663ce18b5479757a0597d32a02c107d0e5a38ac1a740d3393e00bfe-merged.mount: Deactivated successfully.
Dec  2 05:49:54 np0005542249 podman[83077]: 2025-12-02 10:49:54.910869413 +0000 UTC m=+0.848638773 container remove c226e979dcdd0d73450acb8c2ae0e3d60d62ed81ce6a9e34705d96cc27f3b7b3 (image=quay.io/ceph/ceph:v18, name=gifted_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  2 05:49:54 np0005542249 systemd[1]: libpod-conmon-c226e979dcdd0d73450acb8c2ae0e3d60d62ed81ce6a9e34705d96cc27f3b7b3.scope: Deactivated successfully.
Dec  2 05:49:54 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-crash-compute-0[83160]: 2025-12-02T10:49:54.957+0000 7f4975b83640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec  2 05:49:54 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-crash-compute-0[83160]: 2025-12-02T10:49:54.957+0000 7f4975b83640 -1 AuthRegistry(0x7f4970066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec  2 05:49:54 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-crash-compute-0[83160]: 2025-12-02T10:49:54.958+0000 7f4975b83640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec  2 05:49:54 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-crash-compute-0[83160]: 2025-12-02T10:49:54.958+0000 7f4975b83640 -1 AuthRegistry(0x7f4975b82000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec  2 05:49:54 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-crash-compute-0[83160]: 2025-12-02T10:49:54.959+0000 7f496f7fe640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec  2 05:49:54 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-crash-compute-0[83160]: 2025-12-02T10:49:54.959+0000 7f4975b83640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec  2 05:49:54 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-crash-compute-0[83160]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec  2 05:49:54 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-crash-compute-0[83160]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec  2 05:49:54 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  2 05:49:55 np0005542249 podman[83374]: 2025-12-02 10:49:55.212207311 +0000 UTC m=+0.053676115 container create 865d9f3a367f097b57877ee8e70577901ba04d714e1720f0b71d72f473ac1fb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  2 05:49:55 np0005542249 systemd[1]: Started libpod-conmon-865d9f3a367f097b57877ee8e70577901ba04d714e1720f0b71d72f473ac1fb5.scope.
Dec  2 05:49:55 np0005542249 python3[83364]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:49:55 np0005542249 podman[83374]: 2025-12-02 10:49:55.184175507 +0000 UTC m=+0.025644361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:49:55 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:55 np0005542249 podman[83374]: 2025-12-02 10:49:55.317645248 +0000 UTC m=+0.159114092 container init 865d9f3a367f097b57877ee8e70577901ba04d714e1720f0b71d72f473ac1fb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:49:55 np0005542249 podman[83374]: 2025-12-02 10:49:55.324384229 +0000 UTC m=+0.165853023 container start 865d9f3a367f097b57877ee8e70577901ba04d714e1720f0b71d72f473ac1fb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  2 05:49:55 np0005542249 exciting_jackson[83390]: 167 167
Dec  2 05:49:55 np0005542249 podman[83374]: 2025-12-02 10:49:55.329303481 +0000 UTC m=+0.170772285 container attach 865d9f3a367f097b57877ee8e70577901ba04d714e1720f0b71d72f473ac1fb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  2 05:49:55 np0005542249 systemd[1]: libpod-865d9f3a367f097b57877ee8e70577901ba04d714e1720f0b71d72f473ac1fb5.scope: Deactivated successfully.
Dec  2 05:49:55 np0005542249 podman[83374]: 2025-12-02 10:49:55.330973906 +0000 UTC m=+0.172442710 container died 865d9f3a367f097b57877ee8e70577901ba04d714e1720f0b71d72f473ac1fb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  2 05:49:55 np0005542249 podman[83392]: 2025-12-02 10:49:55.351710064 +0000 UTC m=+0.066728906 container create c6a4768f20c3f4457b1b0b404d492bf888bad4e7c80335cbca6dc2ba7701e26f (image=quay.io/ceph/ceph:v18, name=naughty_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:49:55 np0005542249 systemd[1]: var-lib-containers-storage-overlay-ba2749da296900b9cfac46c88a3153f3ba5745e947d31f3b19325268dd84fce7-merged.mount: Deactivated successfully.
Dec  2 05:49:55 np0005542249 podman[83374]: 2025-12-02 10:49:55.376148732 +0000 UTC m=+0.217617516 container remove 865d9f3a367f097b57877ee8e70577901ba04d714e1720f0b71d72f473ac1fb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  2 05:49:55 np0005542249 systemd[1]: Started libpod-conmon-c6a4768f20c3f4457b1b0b404d492bf888bad4e7c80335cbca6dc2ba7701e26f.scope.
Dec  2 05:49:55 np0005542249 systemd[1]: libpod-conmon-865d9f3a367f097b57877ee8e70577901ba04d714e1720f0b71d72f473ac1fb5.scope: Deactivated successfully.
Dec  2 05:49:55 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:55 np0005542249 podman[83392]: 2025-12-02 10:49:55.328310275 +0000 UTC m=+0.043329147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:55 np0005542249 systemd[1]: Reloading.
Dec  2 05:49:55 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb4b831bbf11239c2c85303b46cf5daaba7ef81a744362b970433a376386b33e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:55 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb4b831bbf11239c2c85303b46cf5daaba7ef81a744362b970433a376386b33e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:55 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb4b831bbf11239c2c85303b46cf5daaba7ef81a744362b970433a376386b33e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:55 np0005542249 podman[83392]: 2025-12-02 10:49:55.436510856 +0000 UTC m=+0.151529708 container init c6a4768f20c3f4457b1b0b404d492bf888bad4e7c80335cbca6dc2ba7701e26f (image=quay.io/ceph/ceph:v18, name=naughty_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  2 05:49:55 np0005542249 podman[83392]: 2025-12-02 10:49:55.446778902 +0000 UTC m=+0.161797784 container start c6a4768f20c3f4457b1b0b404d492bf888bad4e7c80335cbca6dc2ba7701e26f (image=quay.io/ceph/ceph:v18, name=naughty_hopper, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:49:55 np0005542249 podman[83392]: 2025-12-02 10:49:55.451793827 +0000 UTC m=+0.166812679 container attach c6a4768f20c3f4457b1b0b404d492bf888bad4e7c80335cbca6dc2ba7701e26f (image=quay.io/ceph/ceph:v18, name=naughty_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  2 05:49:55 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:49:55 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:49:55 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:55 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:55 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:55 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:55 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jktmqd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  2 05:49:55 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jktmqd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  2 05:49:55 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/980840862' entity='client.admin' 
Dec  2 05:49:55 np0005542249 systemd[1]: Reloading.
Dec  2 05:49:55 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:49:55 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:49:55 np0005542249 systemd[1]: Starting Ceph mgr.compute-0.jktmqd for 95bc4eaa-1a14-59bf-acf2-4b3da055547d...
Dec  2 05:49:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Dec  2 05:49:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3383117343' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec  2 05:49:56 np0005542249 podman[83575]: 2025-12-02 10:49:56.215072943 +0000 UTC m=+0.045305550 container create 48ce4bd2e5fbf7d75f9085937b0e1d093c6c21598138a015b8ee31e63cf0c1fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-jktmqd, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:49:56 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b501da99963dfcbc2eb8b16d19c448425beb739375e5f7a8353c8716c07c17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:56 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b501da99963dfcbc2eb8b16d19c448425beb739375e5f7a8353c8716c07c17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:56 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b501da99963dfcbc2eb8b16d19c448425beb739375e5f7a8353c8716c07c17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:56 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b501da99963dfcbc2eb8b16d19c448425beb739375e5f7a8353c8716c07c17/merged/var/lib/ceph/mgr/ceph-compute-0.jktmqd supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:56 np0005542249 podman[83575]: 2025-12-02 10:49:56.276317541 +0000 UTC m=+0.106550178 container init 48ce4bd2e5fbf7d75f9085937b0e1d093c6c21598138a015b8ee31e63cf0c1fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-jktmqd, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:49:56 np0005542249 podman[83575]: 2025-12-02 10:49:56.287576084 +0000 UTC m=+0.117808701 container start 48ce4bd2e5fbf7d75f9085937b0e1d093c6c21598138a015b8ee31e63cf0c1fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-jktmqd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  2 05:49:56 np0005542249 ceph-mgr[75372]: [progress INFO root] Writing back 1 completed events
Dec  2 05:49:56 np0005542249 podman[83575]: 2025-12-02 10:49:56.195154827 +0000 UTC m=+0.025387484 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:49:56 np0005542249 bash[83575]: 48ce4bd2e5fbf7d75f9085937b0e1d093c6c21598138a015b8ee31e63cf0c1fe
Dec  2 05:49:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  2 05:49:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:56 np0005542249 systemd[1]: Started Ceph mgr.compute-0.jktmqd for 95bc4eaa-1a14-59bf-acf2-4b3da055547d.
Dec  2 05:49:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:49:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:49:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:49:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:49:56 np0005542249 ceph-mgr[83595]: set uid:gid to 167:167 (ceph:ceph)
Dec  2 05:49:56 np0005542249 ceph-mgr[83595]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Dec  2 05:49:56 np0005542249 ceph-mgr[83595]: pidfile_write: ignore empty --pid-file
Dec  2 05:49:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:49:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:49:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:49:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:49:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec  2 05:49:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:56 np0005542249 ceph-mgr[75372]: [progress INFO root] complete: finished ev 023ac71e-4abd-4898-af54-b25292c44d68 (Updating mgr deployment (+1 -> 2))
Dec  2 05:49:56 np0005542249 ceph-mgr[75372]: [progress INFO root] Completed event 023ac71e-4abd-4898-af54-b25292c44d68 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Dec  2 05:49:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec  2 05:49:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:56 np0005542249 ceph-mgr[83595]: mgr[py] Loading python module 'alerts'
Dec  2 05:49:56 np0005542249 ceph-mon[75081]: Deploying daemon mgr.compute-0.jktmqd on compute-0
Dec  2 05:49:56 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/3383117343' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec  2 05:49:56 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:56 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:56 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:56 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:56 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:56 np0005542249 ceph-mgr[83595]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  2 05:49:56 np0005542249 ceph-mgr[83595]: mgr[py] Loading python module 'balancer'
Dec  2 05:49:56 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-jktmqd[83591]: 2025-12-02T10:49:56.746+0000 7f41c0358140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  2 05:49:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Dec  2 05:49:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  2 05:49:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3383117343' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec  2 05:49:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Dec  2 05:49:56 np0005542249 naughty_hopper[83423]: set require_min_compat_client to mimic
Dec  2 05:49:56 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Dec  2 05:49:56 np0005542249 systemd[1]: libpod-c6a4768f20c3f4457b1b0b404d492bf888bad4e7c80335cbca6dc2ba7701e26f.scope: Deactivated successfully.
Dec  2 05:49:56 np0005542249 podman[83392]: 2025-12-02 10:49:56.882431368 +0000 UTC m=+1.597450220 container died c6a4768f20c3f4457b1b0b404d492bf888bad4e7c80335cbca6dc2ba7701e26f (image=quay.io/ceph/ceph:v18, name=naughty_hopper, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:49:56 np0005542249 systemd[1]: var-lib-containers-storage-overlay-bb4b831bbf11239c2c85303b46cf5daaba7ef81a744362b970433a376386b33e-merged.mount: Deactivated successfully.
Dec  2 05:49:56 np0005542249 podman[83392]: 2025-12-02 10:49:56.943438009 +0000 UTC m=+1.658456891 container remove c6a4768f20c3f4457b1b0b404d492bf888bad4e7c80335cbca6dc2ba7701e26f (image=quay.io/ceph/ceph:v18, name=naughty_hopper, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  2 05:49:56 np0005542249 systemd[1]: libpod-conmon-c6a4768f20c3f4457b1b0b404d492bf888bad4e7c80335cbca6dc2ba7701e26f.scope: Deactivated successfully.
Dec  2 05:49:56 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  2 05:49:57 np0005542249 ceph-mgr[83595]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  2 05:49:57 np0005542249 ceph-mgr[83595]: mgr[py] Loading python module 'cephadm'
Dec  2 05:49:57 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-jktmqd[83591]: 2025-12-02T10:49:57.016+0000 7f41c0358140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  2 05:49:57 np0005542249 podman[83854]: 2025-12-02 10:49:57.375051321 +0000 UTC m=+0.079820008 container exec cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  2 05:49:57 np0005542249 podman[83854]: 2025-12-02 10:49:57.496616112 +0000 UTC m=+0.201384799 container exec_died cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  2 05:49:57 np0005542249 python3[83900]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:49:57 np0005542249 podman[83928]: 2025-12-02 10:49:57.693786227 +0000 UTC m=+0.064947968 container create 7a7625b5afdcd02eac36b10feac4f62b629ca19c77b7f5c1b04b45ec50c03b07 (image=quay.io/ceph/ceph:v18, name=affectionate_robinson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:49:57 np0005542249 systemd[1]: Started libpod-conmon-7a7625b5afdcd02eac36b10feac4f62b629ca19c77b7f5c1b04b45ec50c03b07.scope.
Dec  2 05:49:57 np0005542249 podman[83928]: 2025-12-02 10:49:57.669160954 +0000 UTC m=+0.040322735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:57 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:57 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b8e6b7a493842f4d19bad952c8af1a05a086d2723b4e37f8a2c4f8fca7b9a2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:57 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b8e6b7a493842f4d19bad952c8af1a05a086d2723b4e37f8a2c4f8fca7b9a2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:57 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b8e6b7a493842f4d19bad952c8af1a05a086d2723b4e37f8a2c4f8fca7b9a2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:57 np0005542249 podman[83928]: 2025-12-02 10:49:57.802205264 +0000 UTC m=+0.173367075 container init 7a7625b5afdcd02eac36b10feac4f62b629ca19c77b7f5c1b04b45ec50c03b07 (image=quay.io/ceph/ceph:v18, name=affectionate_robinson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:49:57 np0005542249 podman[83928]: 2025-12-02 10:49:57.810858517 +0000 UTC m=+0.182020288 container start 7a7625b5afdcd02eac36b10feac4f62b629ca19c77b7f5c1b04b45ec50c03b07 (image=quay.io/ceph/ceph:v18, name=affectionate_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  2 05:49:57 np0005542249 podman[83928]: 2025-12-02 10:49:57.831059211 +0000 UTC m=+0.202221032 container attach 7a7625b5afdcd02eac36b10feac4f62b629ca19c77b7f5c1b04b45ec50c03b07 (image=quay.io/ceph/ceph:v18, name=affectionate_robinson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:49:57 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/3383117343' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec  2 05:49:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:49:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:49:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:49:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:49:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 05:49:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:49:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 05:49:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:57 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 18b3570d-c6c9-4364-a9bd-65e62a6272df does not exist
Dec  2 05:49:57 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 658df39e-83dc-4b2e-95f6-47b8692fc31a does not exist
Dec  2 05:49:57 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 1ea0e20a-6b03-43c9-a9e3-0f67e756130a does not exist
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:58 np0005542249 ceph-mgr[75372]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Dec  2 05:49:58 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:49:58 np0005542249 ceph-mgr[75372]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec  2 05:49:58 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec  2 05:49:58 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 05:49:58 np0005542249 podman[84251]: 2025-12-02 10:49:58.682289883 +0000 UTC m=+0.054779825 container create 0b528e72569fe78bdd61c62540d5ea0b570beea4a35757e46158cc5df8df0655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  2 05:49:58 np0005542249 systemd[1]: Started libpod-conmon-0b528e72569fe78bdd61c62540d5ea0b570beea4a35757e46158cc5df8df0655.scope.
Dec  2 05:49:58 np0005542249 podman[84251]: 2025-12-02 10:49:58.658115763 +0000 UTC m=+0.030605735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:49:58 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:58 np0005542249 podman[84251]: 2025-12-02 10:49:58.775260764 +0000 UTC m=+0.147750726 container init 0b528e72569fe78bdd61c62540d5ea0b570beea4a35757e46158cc5df8df0655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  2 05:49:58 np0005542249 podman[84251]: 2025-12-02 10:49:58.786661931 +0000 UTC m=+0.159151893 container start 0b528e72569fe78bdd61c62540d5ea0b570beea4a35757e46158cc5df8df0655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:49:58 np0005542249 podman[84251]: 2025-12-02 10:49:58.790491793 +0000 UTC m=+0.162981745 container attach 0b528e72569fe78bdd61c62540d5ea0b570beea4a35757e46158cc5df8df0655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  2 05:49:58 np0005542249 vigorous_bartik[84297]: 167 167
Dec  2 05:49:58 np0005542249 systemd[1]: libpod-0b528e72569fe78bdd61c62540d5ea0b570beea4a35757e46158cc5df8df0655.scope: Deactivated successfully.
Dec  2 05:49:58 np0005542249 podman[84251]: 2025-12-02 10:49:58.793889575 +0000 UTC m=+0.166379527 container died 0b528e72569fe78bdd61c62540d5ea0b570beea4a35757e46158cc5df8df0655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:49:58 np0005542249 systemd[1]: var-lib-containers-storage-overlay-2f4a150e4d1bf9348fa3c2aa0c1ef5352f62675831316048334a91a6faab7066-merged.mount: Deactivated successfully.
Dec  2 05:49:58 np0005542249 podman[84251]: 2025-12-02 10:49:58.851209518 +0000 UTC m=+0.223699460 container remove 0b528e72569fe78bdd61c62540d5ea0b570beea4a35757e46158cc5df8df0655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  2 05:49:58 np0005542249 systemd[1]: libpod-conmon-0b528e72569fe78bdd61c62540d5ea0b570beea4a35757e46158cc5df8df0655.scope: Deactivated successfully.
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: Reconfiguring mon.compute-0 (unknown last config time)...
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: Reconfiguring daemon mon.compute-0 on compute-0
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:58 np0005542249 ceph-mgr[75372]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.ntxcvs (unknown last config time)...
Dec  2 05:49:58 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.ntxcvs (unknown last config time)...
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.ntxcvs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ntxcvs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:49:58 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:49:58 np0005542249 ceph-mgr[75372]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.ntxcvs on compute-0
Dec  2 05:49:58 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.ntxcvs on compute-0
Dec  2 05:49:58 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:59 np0005542249 ceph-mgr[83595]: mgr[py] Loading python module 'crash'
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:59 np0005542249 ceph-mgr[75372]: [cephadm INFO root] Added host compute-0
Dec  2 05:49:59 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Added host compute-0
Dec  2 05:49:59 np0005542249 ceph-mgr[75372]: [cephadm INFO root] Saving service mon spec with placement compute-0
Dec  2 05:49:59 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:59 np0005542249 ceph-mgr[75372]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Dec  2 05:49:59 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:59 np0005542249 ceph-mgr[75372]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Dec  2 05:49:59 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Dec  2 05:49:59 np0005542249 ceph-mgr[75372]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Dec  2 05:49:59 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:59 np0005542249 affectionate_robinson[83967]: Added host 'compute-0' with addr '192.168.122.100'
Dec  2 05:49:59 np0005542249 affectionate_robinson[83967]: Scheduled mon update...
Dec  2 05:49:59 np0005542249 affectionate_robinson[83967]: Scheduled mgr update...
Dec  2 05:49:59 np0005542249 affectionate_robinson[83967]: Scheduled osd.default_drive_group update...
Dec  2 05:49:59 np0005542249 systemd[1]: libpod-7a7625b5afdcd02eac36b10feac4f62b629ca19c77b7f5c1b04b45ec50c03b07.scope: Deactivated successfully.
Dec  2 05:49:59 np0005542249 podman[83928]: 2025-12-02 10:49:59.134360266 +0000 UTC m=+1.505522007 container died 7a7625b5afdcd02eac36b10feac4f62b629ca19c77b7f5c1b04b45ec50c03b07 (image=quay.io/ceph/ceph:v18, name=affectionate_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:49:59 np0005542249 systemd[1]: var-lib-containers-storage-overlay-11b8e6b7a493842f4d19bad952c8af1a05a086d2723b4e37f8a2c4f8fca7b9a2-merged.mount: Deactivated successfully.
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:49:59 np0005542249 podman[83928]: 2025-12-02 10:49:59.188636366 +0000 UTC m=+1.559798107 container remove 7a7625b5afdcd02eac36b10feac4f62b629ca19c77b7f5c1b04b45ec50c03b07 (image=quay.io/ceph/ceph:v18, name=affectionate_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:49:59 np0005542249 systemd[1]: libpod-conmon-7a7625b5afdcd02eac36b10feac4f62b629ca19c77b7f5c1b04b45ec50c03b07.scope: Deactivated successfully.
Dec  2 05:49:59 np0005542249 ceph-mgr[83595]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  2 05:49:59 np0005542249 ceph-mgr[83595]: mgr[py] Loading python module 'dashboard'
Dec  2 05:49:59 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-jktmqd[83591]: 2025-12-02T10:49:59.351+0000 7f41c0358140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  2 05:49:59 np0005542249 podman[84493]: 2025-12-02 10:49:59.561346494 +0000 UTC m=+0.059993326 container create 349fd0d61114afc84d141733e9645bb115b9d871fbe1d1b86f50d47d4858716c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:49:59 np0005542249 python3[84490]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:49:59 np0005542249 systemd[1]: Started libpod-conmon-349fd0d61114afc84d141733e9645bb115b9d871fbe1d1b86f50d47d4858716c.scope.
Dec  2 05:49:59 np0005542249 podman[84493]: 2025-12-02 10:49:59.532398554 +0000 UTC m=+0.031045436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:49:59 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:59 np0005542249 podman[84493]: 2025-12-02 10:49:59.68162248 +0000 UTC m=+0.180269372 container init 349fd0d61114afc84d141733e9645bb115b9d871fbe1d1b86f50d47d4858716c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ritchie, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:49:59 np0005542249 podman[84493]: 2025-12-02 10:49:59.692751349 +0000 UTC m=+0.191398181 container start 349fd0d61114afc84d141733e9645bb115b9d871fbe1d1b86f50d47d4858716c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ritchie, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  2 05:49:59 np0005542249 podman[84493]: 2025-12-02 10:49:59.697309411 +0000 UTC m=+0.195956303 container attach 349fd0d61114afc84d141733e9645bb115b9d871fbe1d1b86f50d47d4858716c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  2 05:49:59 np0005542249 elastic_ritchie[84513]: 167 167
Dec  2 05:49:59 np0005542249 systemd[1]: libpod-349fd0d61114afc84d141733e9645bb115b9d871fbe1d1b86f50d47d4858716c.scope: Deactivated successfully.
Dec  2 05:49:59 np0005542249 podman[84511]: 2025-12-02 10:49:59.706759106 +0000 UTC m=+0.086585641 container create 39b8433ff3894e47e4053e74da748077564e4a54013749ad9a372d6f5d49483e (image=quay.io/ceph/ceph:v18, name=epic_haslett, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:49:59 np0005542249 podman[84493]: 2025-12-02 10:49:59.709657644 +0000 UTC m=+0.208304476 container died 349fd0d61114afc84d141733e9645bb115b9d871fbe1d1b86f50d47d4858716c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec  2 05:49:59 np0005542249 podman[84511]: 2025-12-02 10:49:59.662650769 +0000 UTC m=+0.042477384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:49:59 np0005542249 systemd[1]: Started libpod-conmon-39b8433ff3894e47e4053e74da748077564e4a54013749ad9a372d6f5d49483e.scope.
Dec  2 05:49:59 np0005542249 systemd[1]: var-lib-containers-storage-overlay-f6dd6d8174a84a727a108079f43dc090ffcb446010f36efba362324bf66be5c3-merged.mount: Deactivated successfully.
Dec  2 05:49:59 np0005542249 podman[84493]: 2025-12-02 10:49:59.765417704 +0000 UTC m=+0.264064526 container remove 349fd0d61114afc84d141733e9645bb115b9d871fbe1d1b86f50d47d4858716c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ritchie, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:49:59 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:49:59 np0005542249 systemd[1]: libpod-conmon-349fd0d61114afc84d141733e9645bb115b9d871fbe1d1b86f50d47d4858716c.scope: Deactivated successfully.
Dec  2 05:49:59 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee6a4d10008cc7e80cf8ee71a27630b330371a330d04e8820691ee85b71f0c29/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:59 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee6a4d10008cc7e80cf8ee71a27630b330371a330d04e8820691ee85b71f0c29/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:59 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee6a4d10008cc7e80cf8ee71a27630b330371a330d04e8820691ee85b71f0c29/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 05:49:59 np0005542249 podman[84511]: 2025-12-02 10:49:59.810504947 +0000 UTC m=+0.190331502 container init 39b8433ff3894e47e4053e74da748077564e4a54013749ad9a372d6f5d49483e (image=quay.io/ceph/ceph:v18, name=epic_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  2 05:49:59 np0005542249 podman[84511]: 2025-12-02 10:49:59.817132676 +0000 UTC m=+0.196959211 container start 39b8433ff3894e47e4053e74da748077564e4a54013749ad9a372d6f5d49483e (image=quay.io/ceph/ceph:v18, name=epic_haslett, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  2 05:49:59 np0005542249 podman[84511]: 2025-12-02 10:49:59.820249549 +0000 UTC m=+0.200076094 container attach 39b8433ff3894e47e4053e74da748077564e4a54013749ad9a372d6f5d49483e (image=quay.io/ceph/ceph:v18, name=epic_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: Reconfiguring mgr.compute-0.ntxcvs (unknown last config time)...
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ntxcvs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: Reconfiguring daemon mgr.compute-0.ntxcvs on compute-0
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: Added host compute-0
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: Saving service mon spec with placement compute-0
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: Saving service mgr spec with placement compute-0
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: Marking host: compute-0 for OSDSpec preview refresh.
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: Saving service osd.default_drive_group spec with placement compute-0
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:49:59 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec  2 05:50:00 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/43980049' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  2 05:50:00 np0005542249 epic_haslett[84544]: 
Dec  2 05:50:00 np0005542249 epic_haslett[84544]: {"fsid":"95bc4eaa-1a14-59bf-acf2-4b3da055547d","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":81,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-02T10:48:35.983755+0000","services":{}},"progress_events":{}}
Dec  2 05:50:00 np0005542249 systemd[1]: libpod-39b8433ff3894e47e4053e74da748077564e4a54013749ad9a372d6f5d49483e.scope: Deactivated successfully.
Dec  2 05:50:00 np0005542249 podman[84511]: 2025-12-02 10:50:00.457504979 +0000 UTC m=+0.837331524 container died 39b8433ff3894e47e4053e74da748077564e4a54013749ad9a372d6f5d49483e (image=quay.io/ceph/ceph:v18, name=epic_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:50:00 np0005542249 systemd[1]: var-lib-containers-storage-overlay-ee6a4d10008cc7e80cf8ee71a27630b330371a330d04e8820691ee85b71f0c29-merged.mount: Deactivated successfully.
Dec  2 05:50:00 np0005542249 podman[84511]: 2025-12-02 10:50:00.514586874 +0000 UTC m=+0.894413419 container remove 39b8433ff3894e47e4053e74da748077564e4a54013749ad9a372d6f5d49483e (image=quay.io/ceph/ceph:v18, name=epic_haslett, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:50:00 np0005542249 systemd[1]: libpod-conmon-39b8433ff3894e47e4053e74da748077564e4a54013749ad9a372d6f5d49483e.scope: Deactivated successfully.
Dec  2 05:50:00 np0005542249 ceph-mgr[83595]: mgr[py] Loading python module 'devicehealth'
Dec  2 05:50:00 np0005542249 podman[84756]: 2025-12-02 10:50:00.783687858 +0000 UTC m=+0.085022257 container exec cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  2 05:50:00 np0005542249 podman[84756]: 2025-12-02 10:50:00.915392332 +0000 UTC m=+0.216726661 container exec_died cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:50:00 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  2 05:50:01 np0005542249 ceph-mgr[83595]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  2 05:50:01 np0005542249 ceph-mgr[83595]: mgr[py] Loading python module 'diskprediction_local'
Dec  2 05:50:01 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-jktmqd[83591]: 2025-12-02T10:50:00.998+0000 7f41c0358140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  2 05:50:01 np0005542249 ceph-mgr[75372]: [progress INFO root] Writing back 2 completed events
Dec  2 05:50:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  2 05:50:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:50:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:50:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:50:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:50:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:50:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:50:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 05:50:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:50:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 05:50:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:01 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 22d90961-647a-498d-a427-b023cfdf3bdf does not exist
Dec  2 05:50:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec  2 05:50:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:01 np0005542249 ceph-mgr[75372]: [progress INFO root] update: starting ev 0d3fd39f-2f87-459b-aec2-84df14470700 (Updating mgr deployment (-1 -> 1))
Dec  2 05:50:01 np0005542249 ceph-mgr[75372]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.jktmqd from compute-0 -- ports [8765]
Dec  2 05:50:01 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.jktmqd from compute-0 -- ports [8765]
Dec  2 05:50:01 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-jktmqd[83591]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  2 05:50:01 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-jktmqd[83591]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  2 05:50:01 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-jktmqd[83591]:  from numpy import show_config as show_numpy_config
Dec  2 05:50:01 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-jktmqd[83591]: 2025-12-02T10:50:01.544+0000 7f41c0358140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  2 05:50:01 np0005542249 ceph-mgr[83595]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  2 05:50:01 np0005542249 ceph-mgr[83595]: mgr[py] Loading python module 'influx'
Dec  2 05:50:01 np0005542249 ceph-mgr[83595]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  2 05:50:01 np0005542249 ceph-mgr[83595]: mgr[py] Loading python module 'insights'
Dec  2 05:50:01 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-jktmqd[83591]: 2025-12-02T10:50:01.783+0000 7f41c0358140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  2 05:50:02 np0005542249 ceph-mgr[83595]: mgr[py] Loading python module 'iostat'
Dec  2 05:50:02 np0005542249 systemd[1]: Stopping Ceph mgr.compute-0.jktmqd for 95bc4eaa-1a14-59bf-acf2-4b3da055547d...
Dec  2 05:50:02 np0005542249 ceph-mgr[83595]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  2 05:50:02 np0005542249 ceph-mgr[83595]: mgr[py] Loading python module 'k8sevents'
Dec  2 05:50:02 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-jktmqd[83591]: 2025-12-02T10:50:02.260+0000 7f41c0358140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  2 05:50:02 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:02 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:02 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:02 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:02 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:02 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:50:02 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:02 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:02 np0005542249 ceph-mon[75081]: Removing daemon mgr.compute-0.jktmqd from compute-0 -- ports [8765]
Dec  2 05:50:02 np0005542249 podman[85007]: 2025-12-02 10:50:02.354726764 +0000 UTC m=+0.105344797 container stop 48ce4bd2e5fbf7d75f9085937b0e1d093c6c21598138a015b8ee31e63cf0c1fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-jktmqd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  2 05:50:02 np0005542249 podman[85007]: 2025-12-02 10:50:02.380875056 +0000 UTC m=+0.131493079 container died 48ce4bd2e5fbf7d75f9085937b0e1d093c6c21598138a015b8ee31e63cf0c1fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-jktmqd, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  2 05:50:02 np0005542249 systemd[1]: var-lib-containers-storage-overlay-92b501da99963dfcbc2eb8b16d19c448425beb739375e5f7a8353c8716c07c17-merged.mount: Deactivated successfully.
Dec  2 05:50:02 np0005542249 podman[85007]: 2025-12-02 10:50:02.442163136 +0000 UTC m=+0.192781139 container remove 48ce4bd2e5fbf7d75f9085937b0e1d093c6c21598138a015b8ee31e63cf0c1fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-jktmqd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  2 05:50:02 np0005542249 bash[85007]: ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-jktmqd
Dec  2 05:50:02 np0005542249 systemd[1]: ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d@mgr.compute-0.jktmqd.service: Main process exited, code=exited, status=143/n/a
Dec  2 05:50:02 np0005542249 systemd[1]: ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d@mgr.compute-0.jktmqd.service: Failed with result 'exit-code'.
Dec  2 05:50:02 np0005542249 systemd[1]: Stopped Ceph mgr.compute-0.jktmqd for 95bc4eaa-1a14-59bf-acf2-4b3da055547d.
Dec  2 05:50:02 np0005542249 systemd[1]: ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d@mgr.compute-0.jktmqd.service: Consumed 7.087s CPU time.
Dec  2 05:50:02 np0005542249 systemd[1]: Reloading.
Dec  2 05:50:02 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:50:02 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:50:02 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  2 05:50:02 np0005542249 ceph-mgr[75372]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.jktmqd
Dec  2 05:50:02 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.jktmqd
Dec  2 05:50:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.jktmqd"} v 0) v1
Dec  2 05:50:02 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.jktmqd"}]: dispatch
Dec  2 05:50:03 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.jktmqd"}]': finished
Dec  2 05:50:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec  2 05:50:03 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:03 np0005542249 ceph-mgr[75372]: [progress INFO root] complete: finished ev 0d3fd39f-2f87-459b-aec2-84df14470700 (Updating mgr deployment (-1 -> 1))
Dec  2 05:50:03 np0005542249 ceph-mgr[75372]: [progress INFO root] Completed event 0d3fd39f-2f87-459b-aec2-84df14470700 (Updating mgr deployment (-1 -> 1)) in 2 seconds
Dec  2 05:50:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec  2 05:50:03 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:03 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 7228dc16-cffe-4eac-81c3-b2807a3a0cea does not exist
Dec  2 05:50:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 05:50:03 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 05:50:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 05:50:03 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 05:50:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:50:03 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:50:03 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.jktmqd"}]: dispatch
Dec  2 05:50:03 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.jktmqd"}]': finished
Dec  2 05:50:03 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:03 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:03 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 05:50:03 np0005542249 podman[85244]: 2025-12-02 10:50:03.628442777 +0000 UTC m=+0.066795814 container create 7955ed6ea50cfaf65b6d2c419f1c83824327d822a53ee026837d72c4ee381d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:50:03 np0005542249 systemd[1]: Started libpod-conmon-7955ed6ea50cfaf65b6d2c419f1c83824327d822a53ee026837d72c4ee381d39.scope.
Dec  2 05:50:03 np0005542249 podman[85244]: 2025-12-02 10:50:03.602224493 +0000 UTC m=+0.040577580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:03 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:03 np0005542249 podman[85244]: 2025-12-02 10:50:03.726848552 +0000 UTC m=+0.165201629 container init 7955ed6ea50cfaf65b6d2c419f1c83824327d822a53ee026837d72c4ee381d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  2 05:50:03 np0005542249 podman[85244]: 2025-12-02 10:50:03.738566905 +0000 UTC m=+0.176919902 container start 7955ed6ea50cfaf65b6d2c419f1c83824327d822a53ee026837d72c4ee381d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  2 05:50:03 np0005542249 podman[85244]: 2025-12-02 10:50:03.743034368 +0000 UTC m=+0.181387375 container attach 7955ed6ea50cfaf65b6d2c419f1c83824327d822a53ee026837d72c4ee381d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  2 05:50:03 np0005542249 happy_lamport[85261]: 167 167
Dec  2 05:50:03 np0005542249 systemd[1]: libpod-7955ed6ea50cfaf65b6d2c419f1c83824327d822a53ee026837d72c4ee381d39.scope: Deactivated successfully.
Dec  2 05:50:03 np0005542249 podman[85244]: 2025-12-02 10:50:03.748430867 +0000 UTC m=+0.186783894 container died 7955ed6ea50cfaf65b6d2c419f1c83824327d822a53ee026837d72c4ee381d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  2 05:50:03 np0005542249 systemd[1]: var-lib-containers-storage-overlay-a40c70ad660d276dc6b2062226ab7caa99baf7359bacd025d21918930ec6da3a-merged.mount: Deactivated successfully.
Dec  2 05:50:03 np0005542249 podman[85244]: 2025-12-02 10:50:03.808430352 +0000 UTC m=+0.246783389 container remove 7955ed6ea50cfaf65b6d2c419f1c83824327d822a53ee026837d72c4ee381d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  2 05:50:03 np0005542249 systemd[1]: libpod-conmon-7955ed6ea50cfaf65b6d2c419f1c83824327d822a53ee026837d72c4ee381d39.scope: Deactivated successfully.
Dec  2 05:50:04 np0005542249 podman[85285]: 2025-12-02 10:50:04.040461315 +0000 UTC m=+0.062782824 container create a48353226d387aed68d00e6c3beb3ae85583faa226ebfa452030f87f4df3029f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  2 05:50:04 np0005542249 systemd[1]: Started libpod-conmon-a48353226d387aed68d00e6c3beb3ae85583faa226ebfa452030f87f4df3029f.scope.
Dec  2 05:50:04 np0005542249 podman[85285]: 2025-12-02 10:50:04.013663866 +0000 UTC m=+0.035985425 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:04 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:04 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3488e2bc9eb50ff3a25626ccdad2fce049e3d012c7e71d8db8a267205d63eb89/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:04 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3488e2bc9eb50ff3a25626ccdad2fce049e3d012c7e71d8db8a267205d63eb89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:04 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3488e2bc9eb50ff3a25626ccdad2fce049e3d012c7e71d8db8a267205d63eb89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:04 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3488e2bc9eb50ff3a25626ccdad2fce049e3d012c7e71d8db8a267205d63eb89/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:04 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3488e2bc9eb50ff3a25626ccdad2fce049e3d012c7e71d8db8a267205d63eb89/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:04 np0005542249 podman[85285]: 2025-12-02 10:50:04.135484047 +0000 UTC m=+0.157805636 container init a48353226d387aed68d00e6c3beb3ae85583faa226ebfa452030f87f4df3029f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  2 05:50:04 np0005542249 podman[85285]: 2025-12-02 10:50:04.151414876 +0000 UTC m=+0.173736405 container start a48353226d387aed68d00e6c3beb3ae85583faa226ebfa452030f87f4df3029f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kilby, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:50:04 np0005542249 podman[85285]: 2025-12-02 10:50:04.157211396 +0000 UTC m=+0.179532915 container attach a48353226d387aed68d00e6c3beb3ae85583faa226ebfa452030f87f4df3029f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  2 05:50:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:50:04 np0005542249 ceph-mon[75081]: Removing key for mgr.compute-0.jktmqd
Dec  2 05:50:04 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  2 05:50:05 np0005542249 determined_kilby[85302]: --> passed data devices: 0 physical, 3 LVM
Dec  2 05:50:05 np0005542249 determined_kilby[85302]: --> relative data size: 1.0
Dec  2 05:50:05 np0005542249 determined_kilby[85302]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  2 05:50:05 np0005542249 determined_kilby[85302]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 7e72cc75-6117-4faf-a687-17040ed0df80
Dec  2 05:50:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "7e72cc75-6117-4faf-a687-17040ed0df80"} v 0) v1
Dec  2 05:50:05 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1431052584' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7e72cc75-6117-4faf-a687-17040ed0df80"}]: dispatch
Dec  2 05:50:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Dec  2 05:50:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  2 05:50:05 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1431052584' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7e72cc75-6117-4faf-a687-17040ed0df80"}]': finished
Dec  2 05:50:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Dec  2 05:50:05 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Dec  2 05:50:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  2 05:50:05 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  2 05:50:05 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  2 05:50:05 np0005542249 determined_kilby[85302]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  2 05:50:05 np0005542249 determined_kilby[85302]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Dec  2 05:50:05 np0005542249 lvm[85364]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  2 05:50:05 np0005542249 lvm[85364]: VG ceph_vg0 finished
Dec  2 05:50:05 np0005542249 determined_kilby[85302]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec  2 05:50:05 np0005542249 determined_kilby[85302]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  2 05:50:05 np0005542249 determined_kilby[85302]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  2 05:50:05 np0005542249 determined_kilby[85302]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Dec  2 05:50:06 np0005542249 ceph-mgr[75372]: [progress INFO root] Writing back 3 completed events
Dec  2 05:50:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  2 05:50:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:06 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/1431052584' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7e72cc75-6117-4faf-a687-17040ed0df80"}]: dispatch
Dec  2 05:50:06 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/1431052584' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7e72cc75-6117-4faf-a687-17040ed0df80"}]': finished
Dec  2 05:50:06 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Dec  2 05:50:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/175430245' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec  2 05:50:06 np0005542249 determined_kilby[85302]: stderr: got monmap epoch 1
Dec  2 05:50:06 np0005542249 determined_kilby[85302]: --> Creating keyring file for osd.0
Dec  2 05:50:06 np0005542249 determined_kilby[85302]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Dec  2 05:50:06 np0005542249 determined_kilby[85302]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Dec  2 05:50:06 np0005542249 determined_kilby[85302]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 7e72cc75-6117-4faf-a687-17040ed0df80 --setuser ceph --setgroup ceph
Dec  2 05:50:06 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  2 05:50:07 np0005542249 ceph-mon[75081]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec  2 05:50:07 np0005542249 ceph-mon[75081]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  2 05:50:08 np0005542249 ceph-mon[75081]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec  2 05:50:08 np0005542249 ceph-mon[75081]: Cluster is now healthy
Dec  2 05:50:08 np0005542249 determined_kilby[85302]: stderr: 2025-12-02T10:50:06.576+0000 7f6b19fe7740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  2 05:50:08 np0005542249 determined_kilby[85302]: stderr: 2025-12-02T10:50:06.576+0000 7f6b19fe7740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  2 05:50:08 np0005542249 determined_kilby[85302]: stderr: 2025-12-02T10:50:06.576+0000 7f6b19fe7740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  2 05:50:08 np0005542249 determined_kilby[85302]: stderr: 2025-12-02T10:50:06.577+0000 7f6b19fe7740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Dec  2 05:50:08 np0005542249 determined_kilby[85302]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec  2 05:50:08 np0005542249 determined_kilby[85302]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  2 05:50:08 np0005542249 determined_kilby[85302]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec  2 05:50:08 np0005542249 determined_kilby[85302]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  2 05:50:08 np0005542249 determined_kilby[85302]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec  2 05:50:08 np0005542249 determined_kilby[85302]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  2 05:50:08 np0005542249 determined_kilby[85302]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  2 05:50:08 np0005542249 determined_kilby[85302]: --> ceph-volume lvm activate successful for osd ID: 0
Dec  2 05:50:08 np0005542249 determined_kilby[85302]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec  2 05:50:08 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  2 05:50:08 np0005542249 determined_kilby[85302]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  2 05:50:09 np0005542249 determined_kilby[85302]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new cb22d311-a01e-4327-afb4-565a5b394930
Dec  2 05:50:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:50:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "cb22d311-a01e-4327-afb4-565a5b394930"} v 0) v1
Dec  2 05:50:09 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/12362099' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "cb22d311-a01e-4327-afb4-565a5b394930"}]: dispatch
Dec  2 05:50:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Dec  2 05:50:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  2 05:50:09 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/12362099' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "cb22d311-a01e-4327-afb4-565a5b394930"}]': finished
Dec  2 05:50:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Dec  2 05:50:09 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Dec  2 05:50:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  2 05:50:09 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  2 05:50:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  2 05:50:09 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  2 05:50:09 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  2 05:50:09 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  2 05:50:09 np0005542249 determined_kilby[85302]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  2 05:50:09 np0005542249 determined_kilby[85302]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec  2 05:50:09 np0005542249 determined_kilby[85302]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Dec  2 05:50:09 np0005542249 determined_kilby[85302]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec  2 05:50:09 np0005542249 lvm[86296]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  2 05:50:09 np0005542249 lvm[86296]: VG ceph_vg1 finished
Dec  2 05:50:09 np0005542249 determined_kilby[85302]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec  2 05:50:09 np0005542249 determined_kilby[85302]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec  2 05:50:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Dec  2 05:50:09 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3576559710' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec  2 05:50:09 np0005542249 determined_kilby[85302]: stderr: got monmap epoch 1
Dec  2 05:50:10 np0005542249 determined_kilby[85302]: --> Creating keyring file for osd.1
Dec  2 05:50:10 np0005542249 determined_kilby[85302]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec  2 05:50:10 np0005542249 determined_kilby[85302]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec  2 05:50:10 np0005542249 determined_kilby[85302]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid cb22d311-a01e-4327-afb4-565a5b394930 --setuser ceph --setgroup ceph
Dec  2 05:50:10 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/12362099' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "cb22d311-a01e-4327-afb4-565a5b394930"}]: dispatch
Dec  2 05:50:10 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/12362099' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "cb22d311-a01e-4327-afb4-565a5b394930"}]': finished
Dec  2 05:50:10 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  2 05:50:12 np0005542249 determined_kilby[85302]: stderr: 2025-12-02T10:50:10.092+0000 7f08cbfc7740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  2 05:50:12 np0005542249 determined_kilby[85302]: stderr: 2025-12-02T10:50:10.092+0000 7f08cbfc7740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  2 05:50:12 np0005542249 determined_kilby[85302]: stderr: 2025-12-02T10:50:10.092+0000 7f08cbfc7740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  2 05:50:12 np0005542249 determined_kilby[85302]: stderr: 2025-12-02T10:50:10.092+0000 7f08cbfc7740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec  2 05:50:12 np0005542249 determined_kilby[85302]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Dec  2 05:50:12 np0005542249 determined_kilby[85302]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  2 05:50:12 np0005542249 determined_kilby[85302]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec  2 05:50:12 np0005542249 determined_kilby[85302]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec  2 05:50:12 np0005542249 determined_kilby[85302]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec  2 05:50:12 np0005542249 determined_kilby[85302]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec  2 05:50:12 np0005542249 determined_kilby[85302]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  2 05:50:12 np0005542249 determined_kilby[85302]: --> ceph-volume lvm activate successful for osd ID: 1
Dec  2 05:50:12 np0005542249 determined_kilby[85302]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Dec  2 05:50:12 np0005542249 determined_kilby[85302]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  2 05:50:12 np0005542249 determined_kilby[85302]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 844c55bd-4f5a-4ef7-af48-77f5584b8079
Dec  2 05:50:12 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  2 05:50:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079"} v 0) v1
Dec  2 05:50:13 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2572919169' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079"}]: dispatch
Dec  2 05:50:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Dec  2 05:50:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  2 05:50:13 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2572919169' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079"}]': finished
Dec  2 05:50:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Dec  2 05:50:13 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Dec  2 05:50:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  2 05:50:13 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  2 05:50:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  2 05:50:13 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  2 05:50:13 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  2 05:50:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  2 05:50:13 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  2 05:50:13 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  2 05:50:13 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  2 05:50:13 np0005542249 lvm[87229]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  2 05:50:13 np0005542249 lvm[87229]: VG ceph_vg2 finished
Dec  2 05:50:13 np0005542249 determined_kilby[85302]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  2 05:50:13 np0005542249 determined_kilby[85302]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Dec  2 05:50:13 np0005542249 determined_kilby[85302]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Dec  2 05:50:13 np0005542249 determined_kilby[85302]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec  2 05:50:13 np0005542249 determined_kilby[85302]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec  2 05:50:13 np0005542249 determined_kilby[85302]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Dec  2 05:50:13 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/2572919169' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079"}]: dispatch
Dec  2 05:50:13 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/2572919169' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079"}]': finished
Dec  2 05:50:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Dec  2 05:50:13 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/372945398' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec  2 05:50:13 np0005542249 determined_kilby[85302]: stderr: got monmap epoch 1
Dec  2 05:50:13 np0005542249 determined_kilby[85302]: --> Creating keyring file for osd.2
Dec  2 05:50:13 np0005542249 determined_kilby[85302]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Dec  2 05:50:13 np0005542249 determined_kilby[85302]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Dec  2 05:50:13 np0005542249 determined_kilby[85302]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 844c55bd-4f5a-4ef7-af48-77f5584b8079 --setuser ceph --setgroup ceph
Dec  2 05:50:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:50:14 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  2 05:50:16 np0005542249 determined_kilby[85302]: stderr: 2025-12-02T10:50:13.832+0000 7f60b937d740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  2 05:50:16 np0005542249 determined_kilby[85302]: stderr: 2025-12-02T10:50:13.832+0000 7f60b937d740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  2 05:50:16 np0005542249 determined_kilby[85302]: stderr: 2025-12-02T10:50:13.832+0000 7f60b937d740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  2 05:50:16 np0005542249 determined_kilby[85302]: stderr: 2025-12-02T10:50:13.833+0000 7f60b937d740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Dec  2 05:50:16 np0005542249 determined_kilby[85302]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Dec  2 05:50:16 np0005542249 determined_kilby[85302]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  2 05:50:16 np0005542249 determined_kilby[85302]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec  2 05:50:16 np0005542249 determined_kilby[85302]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec  2 05:50:16 np0005542249 determined_kilby[85302]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec  2 05:50:16 np0005542249 determined_kilby[85302]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec  2 05:50:16 np0005542249 determined_kilby[85302]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  2 05:50:16 np0005542249 determined_kilby[85302]: --> ceph-volume lvm activate successful for osd ID: 2
Dec  2 05:50:16 np0005542249 determined_kilby[85302]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Dec  2 05:50:16 np0005542249 systemd[1]: libpod-a48353226d387aed68d00e6c3beb3ae85583faa226ebfa452030f87f4df3029f.scope: Deactivated successfully.
Dec  2 05:50:16 np0005542249 systemd[1]: libpod-a48353226d387aed68d00e6c3beb3ae85583faa226ebfa452030f87f4df3029f.scope: Consumed 6.289s CPU time.
Dec  2 05:50:16 np0005542249 podman[88137]: 2025-12-02 10:50:16.284852994 +0000 UTC m=+0.031164801 container died a48353226d387aed68d00e6c3beb3ae85583faa226ebfa452030f87f4df3029f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:50:16 np0005542249 systemd[1]: var-lib-containers-storage-overlay-3488e2bc9eb50ff3a25626ccdad2fce049e3d012c7e71d8db8a267205d63eb89-merged.mount: Deactivated successfully.
Dec  2 05:50:16 np0005542249 podman[88137]: 2025-12-02 10:50:16.34196527 +0000 UTC m=+0.088277047 container remove a48353226d387aed68d00e6c3beb3ae85583faa226ebfa452030f87f4df3029f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kilby, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:50:16 np0005542249 systemd[1]: libpod-conmon-a48353226d387aed68d00e6c3beb3ae85583faa226ebfa452030f87f4df3029f.scope: Deactivated successfully.
Dec  2 05:50:16 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  2 05:50:17 np0005542249 podman[88293]: 2025-12-02 10:50:17.139867585 +0000 UTC m=+0.067319899 container create 3bb9a7db764202f360a41683fb43acb4c32abd051d3fa08fc50825805a480404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermat, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  2 05:50:17 np0005542249 systemd[1]: Started libpod-conmon-3bb9a7db764202f360a41683fb43acb4c32abd051d3fa08fc50825805a480404.scope.
Dec  2 05:50:17 np0005542249 podman[88293]: 2025-12-02 10:50:17.113319172 +0000 UTC m=+0.040771586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:17 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:17 np0005542249 podman[88293]: 2025-12-02 10:50:17.234376642 +0000 UTC m=+0.161829056 container init 3bb9a7db764202f360a41683fb43acb4c32abd051d3fa08fc50825805a480404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermat, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:50:17 np0005542249 podman[88293]: 2025-12-02 10:50:17.243689919 +0000 UTC m=+0.171142273 container start 3bb9a7db764202f360a41683fb43acb4c32abd051d3fa08fc50825805a480404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermat, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  2 05:50:17 np0005542249 podman[88293]: 2025-12-02 10:50:17.247620738 +0000 UTC m=+0.175073082 container attach 3bb9a7db764202f360a41683fb43acb4c32abd051d3fa08fc50825805a480404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:50:17 np0005542249 tender_fermat[88310]: 167 167
Dec  2 05:50:17 np0005542249 systemd[1]: libpod-3bb9a7db764202f360a41683fb43acb4c32abd051d3fa08fc50825805a480404.scope: Deactivated successfully.
Dec  2 05:50:17 np0005542249 podman[88293]: 2025-12-02 10:50:17.252993735 +0000 UTC m=+0.180446089 container died 3bb9a7db764202f360a41683fb43acb4c32abd051d3fa08fc50825805a480404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:50:17 np0005542249 systemd[1]: var-lib-containers-storage-overlay-4add580ed4fb28eee4237e7b01662b877e5723b7808831c93fcf24d44b2d3a9e-merged.mount: Deactivated successfully.
Dec  2 05:50:17 np0005542249 podman[88293]: 2025-12-02 10:50:17.302395819 +0000 UTC m=+0.229848173 container remove 3bb9a7db764202f360a41683fb43acb4c32abd051d3fa08fc50825805a480404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermat, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:50:17 np0005542249 systemd[1]: libpod-conmon-3bb9a7db764202f360a41683fb43acb4c32abd051d3fa08fc50825805a480404.scope: Deactivated successfully.
Dec  2 05:50:17 np0005542249 podman[88333]: 2025-12-02 10:50:17.495540188 +0000 UTC m=+0.044723945 container create a731df8f5d7ac159f8932a6da03b0ae0281c25a7f8817debb5928b38c5f2b67f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_leakey, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:50:17 np0005542249 systemd[1]: Started libpod-conmon-a731df8f5d7ac159f8932a6da03b0ae0281c25a7f8817debb5928b38c5f2b67f.scope.
Dec  2 05:50:17 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:17 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6b8fc63069ebecc8a5f3c373943422f02259e9021f0ebb578e1804effd6894/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:17 np0005542249 podman[88333]: 2025-12-02 10:50:17.477196822 +0000 UTC m=+0.026380609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:17 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6b8fc63069ebecc8a5f3c373943422f02259e9021f0ebb578e1804effd6894/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:17 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6b8fc63069ebecc8a5f3c373943422f02259e9021f0ebb578e1804effd6894/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:17 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6b8fc63069ebecc8a5f3c373943422f02259e9021f0ebb578e1804effd6894/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:17 np0005542249 podman[88333]: 2025-12-02 10:50:17.59170802 +0000 UTC m=+0.140891857 container init a731df8f5d7ac159f8932a6da03b0ae0281c25a7f8817debb5928b38c5f2b67f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:50:17 np0005542249 podman[88333]: 2025-12-02 10:50:17.607432795 +0000 UTC m=+0.156616582 container start a731df8f5d7ac159f8932a6da03b0ae0281c25a7f8817debb5928b38c5f2b67f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_leakey, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:50:17 np0005542249 podman[88333]: 2025-12-02 10:50:17.612073563 +0000 UTC m=+0.161257350 container attach a731df8f5d7ac159f8932a6da03b0ae0281c25a7f8817debb5928b38c5f2b67f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:50:18 np0005542249 strange_leakey[88349]: {
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:    "0": [
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:        {
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "devices": [
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "/dev/loop3"
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            ],
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "lv_name": "ceph_lv0",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "lv_size": "21470642176",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "name": "ceph_lv0",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "tags": {
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.cluster_name": "ceph",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.crush_device_class": "",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.encrypted": "0",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.osd_id": "0",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.type": "block",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.vdo": "0"
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            },
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "type": "block",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "vg_name": "ceph_vg0"
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:        }
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:    ],
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:    "1": [
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:        {
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "devices": [
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "/dev/loop4"
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            ],
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "lv_name": "ceph_lv1",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "lv_size": "21470642176",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "name": "ceph_lv1",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "tags": {
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.cluster_name": "ceph",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.crush_device_class": "",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.encrypted": "0",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.osd_id": "1",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.type": "block",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.vdo": "0"
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            },
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "type": "block",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "vg_name": "ceph_vg1"
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:        }
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:    ],
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:    "2": [
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:        {
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "devices": [
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "/dev/loop5"
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            ],
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "lv_name": "ceph_lv2",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "lv_size": "21470642176",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "name": "ceph_lv2",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "tags": {
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.cluster_name": "ceph",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.crush_device_class": "",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.encrypted": "0",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.osd_id": "2",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.type": "block",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:                "ceph.vdo": "0"
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            },
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "type": "block",
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:            "vg_name": "ceph_vg2"
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:        }
Dec  2 05:50:18 np0005542249 strange_leakey[88349]:    ]
Dec  2 05:50:18 np0005542249 strange_leakey[88349]: }
Dec  2 05:50:18 np0005542249 systemd[1]: libpod-a731df8f5d7ac159f8932a6da03b0ae0281c25a7f8817debb5928b38c5f2b67f.scope: Deactivated successfully.
Dec  2 05:50:18 np0005542249 podman[88333]: 2025-12-02 10:50:18.402760879 +0000 UTC m=+0.951944666 container died a731df8f5d7ac159f8932a6da03b0ae0281c25a7f8817debb5928b38c5f2b67f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_leakey, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  2 05:50:18 np0005542249 systemd[1]: var-lib-containers-storage-overlay-ec6b8fc63069ebecc8a5f3c373943422f02259e9021f0ebb578e1804effd6894-merged.mount: Deactivated successfully.
Dec  2 05:50:18 np0005542249 podman[88333]: 2025-12-02 10:50:18.473306895 +0000 UTC m=+1.022490652 container remove a731df8f5d7ac159f8932a6da03b0ae0281c25a7f8817debb5928b38c5f2b67f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  2 05:50:18 np0005542249 systemd[1]: libpod-conmon-a731df8f5d7ac159f8932a6da03b0ae0281c25a7f8817debb5928b38c5f2b67f.scope: Deactivated successfully.
Dec  2 05:50:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Dec  2 05:50:18 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  2 05:50:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:50:18 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:50:18 np0005542249 ceph-mgr[75372]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Dec  2 05:50:18 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Dec  2 05:50:18 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  2 05:50:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:50:19 np0005542249 podman[88509]: 2025-12-02 10:50:19.209900207 +0000 UTC m=+0.065458827 container create e52283be44696632d9a12952a1f3dc8f8ad4ea1f2f4ce1b8c1ea49ee179604fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:50:19 np0005542249 systemd[1]: Started libpod-conmon-e52283be44696632d9a12952a1f3dc8f8ad4ea1f2f4ce1b8c1ea49ee179604fd.scope.
Dec  2 05:50:19 np0005542249 podman[88509]: 2025-12-02 10:50:19.181109624 +0000 UTC m=+0.036668264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:19 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:19 np0005542249 podman[88509]: 2025-12-02 10:50:19.311200432 +0000 UTC m=+0.166759062 container init e52283be44696632d9a12952a1f3dc8f8ad4ea1f2f4ce1b8c1ea49ee179604fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 05:50:19 np0005542249 podman[88509]: 2025-12-02 10:50:19.321667691 +0000 UTC m=+0.177226311 container start e52283be44696632d9a12952a1f3dc8f8ad4ea1f2f4ce1b8c1ea49ee179604fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bartik, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  2 05:50:19 np0005542249 podman[88509]: 2025-12-02 10:50:19.326258898 +0000 UTC m=+0.181817678 container attach e52283be44696632d9a12952a1f3dc8f8ad4ea1f2f4ce1b8c1ea49ee179604fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Dec  2 05:50:19 np0005542249 elated_bartik[88525]: 167 167
Dec  2 05:50:19 np0005542249 systemd[1]: libpod-e52283be44696632d9a12952a1f3dc8f8ad4ea1f2f4ce1b8c1ea49ee179604fd.scope: Deactivated successfully.
Dec  2 05:50:19 np0005542249 podman[88509]: 2025-12-02 10:50:19.328533741 +0000 UTC m=+0.184092361 container died e52283be44696632d9a12952a1f3dc8f8ad4ea1f2f4ce1b8c1ea49ee179604fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bartik, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  2 05:50:19 np0005542249 systemd[1]: var-lib-containers-storage-overlay-840996ef8e80430dbf412674a31743996e2de7f4e1a9fee44cb89f9d2f9e6993-merged.mount: Deactivated successfully.
Dec  2 05:50:19 np0005542249 podman[88509]: 2025-12-02 10:50:19.380628818 +0000 UTC m=+0.236187438 container remove e52283be44696632d9a12952a1f3dc8f8ad4ea1f2f4ce1b8c1ea49ee179604fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 05:50:19 np0005542249 systemd[1]: libpod-conmon-e52283be44696632d9a12952a1f3dc8f8ad4ea1f2f4ce1b8c1ea49ee179604fd.scope: Deactivated successfully.
Dec  2 05:50:19 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  2 05:50:19 np0005542249 ceph-mon[75081]: Deploying daemon osd.0 on compute-0
Dec  2 05:50:19 np0005542249 podman[88557]: 2025-12-02 10:50:19.737222777 +0000 UTC m=+0.057137028 container create 7a8f876b153dffa7f3debd90c8426e0efca066cd0fd222845034288dd3a23794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  2 05:50:19 np0005542249 systemd[1]: Started libpod-conmon-7a8f876b153dffa7f3debd90c8426e0efca066cd0fd222845034288dd3a23794.scope.
Dec  2 05:50:19 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:19 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/915a4e34c0cd9d7d25ac337eaa63fd7c064450c6d7cb5527a2534f7b96d93f10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:19 np0005542249 podman[88557]: 2025-12-02 10:50:19.717840562 +0000 UTC m=+0.037754793 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:19 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/915a4e34c0cd9d7d25ac337eaa63fd7c064450c6d7cb5527a2534f7b96d93f10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:19 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/915a4e34c0cd9d7d25ac337eaa63fd7c064450c6d7cb5527a2534f7b96d93f10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:19 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/915a4e34c0cd9d7d25ac337eaa63fd7c064450c6d7cb5527a2534f7b96d93f10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:19 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/915a4e34c0cd9d7d25ac337eaa63fd7c064450c6d7cb5527a2534f7b96d93f10/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:19 np0005542249 podman[88557]: 2025-12-02 10:50:19.828058633 +0000 UTC m=+0.147972914 container init 7a8f876b153dffa7f3debd90c8426e0efca066cd0fd222845034288dd3a23794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  2 05:50:19 np0005542249 podman[88557]: 2025-12-02 10:50:19.84061745 +0000 UTC m=+0.160531671 container start 7a8f876b153dffa7f3debd90c8426e0efca066cd0fd222845034288dd3a23794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:50:19 np0005542249 podman[88557]: 2025-12-02 10:50:19.844181938 +0000 UTC m=+0.164096199 container attach 7a8f876b153dffa7f3debd90c8426e0efca066cd0fd222845034288dd3a23794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  2 05:50:20 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0-activate-test[88574]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec  2 05:50:20 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0-activate-test[88574]:                            [--no-systemd] [--no-tmpfs]
Dec  2 05:50:20 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0-activate-test[88574]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec  2 05:50:20 np0005542249 systemd[1]: libpod-7a8f876b153dffa7f3debd90c8426e0efca066cd0fd222845034288dd3a23794.scope: Deactivated successfully.
Dec  2 05:50:20 np0005542249 podman[88557]: 2025-12-02 10:50:20.450867626 +0000 UTC m=+0.770781887 container died 7a8f876b153dffa7f3debd90c8426e0efca066cd0fd222845034288dd3a23794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  2 05:50:20 np0005542249 systemd[1]: var-lib-containers-storage-overlay-915a4e34c0cd9d7d25ac337eaa63fd7c064450c6d7cb5527a2534f7b96d93f10-merged.mount: Deactivated successfully.
Dec  2 05:50:20 np0005542249 podman[88557]: 2025-12-02 10:50:20.521709341 +0000 UTC m=+0.841623572 container remove 7a8f876b153dffa7f3debd90c8426e0efca066cd0fd222845034288dd3a23794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 05:50:20 np0005542249 systemd[1]: libpod-conmon-7a8f876b153dffa7f3debd90c8426e0efca066cd0fd222845034288dd3a23794.scope: Deactivated successfully.
Dec  2 05:50:20 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  2 05:50:20 np0005542249 systemd[1]: Reloading.
Dec  2 05:50:21 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:50:21 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:50:21 np0005542249 systemd[1]: Reloading.
Dec  2 05:50:21 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:50:21 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:50:21 np0005542249 systemd[1]: Starting Ceph osd.0 for 95bc4eaa-1a14-59bf-acf2-4b3da055547d...
Dec  2 05:50:21 np0005542249 podman[88736]: 2025-12-02 10:50:21.860235362 +0000 UTC m=+0.051791650 container create 3c2b50c4f4dc3f44381b40e8c6270e8f86826477f7683babe73917e99171df72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0-activate, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:50:21 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:21 np0005542249 podman[88736]: 2025-12-02 10:50:21.837088753 +0000 UTC m=+0.028645051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:21 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/930b73fc91cdcafef3c7d5203e78af4be89cc6c19c02ffdd1602fd1809c914b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:21 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/930b73fc91cdcafef3c7d5203e78af4be89cc6c19c02ffdd1602fd1809c914b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:21 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/930b73fc91cdcafef3c7d5203e78af4be89cc6c19c02ffdd1602fd1809c914b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:21 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/930b73fc91cdcafef3c7d5203e78af4be89cc6c19c02ffdd1602fd1809c914b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:21 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/930b73fc91cdcafef3c7d5203e78af4be89cc6c19c02ffdd1602fd1809c914b4/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:21 np0005542249 podman[88736]: 2025-12-02 10:50:21.949801423 +0000 UTC m=+0.141357751 container init 3c2b50c4f4dc3f44381b40e8c6270e8f86826477f7683babe73917e99171df72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:50:21 np0005542249 podman[88736]: 2025-12-02 10:50:21.965433094 +0000 UTC m=+0.156989362 container start 3c2b50c4f4dc3f44381b40e8c6270e8f86826477f7683babe73917e99171df72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0-activate, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:50:21 np0005542249 podman[88736]: 2025-12-02 10:50:21.969591509 +0000 UTC m=+0.161147797 container attach 3c2b50c4f4dc3f44381b40e8c6270e8f86826477f7683babe73917e99171df72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:50:22 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  2 05:50:23 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0-activate[88751]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  2 05:50:23 np0005542249 bash[88736]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  2 05:50:23 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0-activate[88751]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec  2 05:50:23 np0005542249 bash[88736]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec  2 05:50:23 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0-activate[88751]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec  2 05:50:23 np0005542249 bash[88736]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec  2 05:50:23 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0-activate[88751]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  2 05:50:23 np0005542249 bash[88736]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  2 05:50:23 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0-activate[88751]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  2 05:50:23 np0005542249 bash[88736]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  2 05:50:23 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0-activate[88751]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  2 05:50:23 np0005542249 bash[88736]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  2 05:50:23 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0-activate[88751]: --> ceph-volume raw activate successful for osd ID: 0
Dec  2 05:50:23 np0005542249 bash[88736]: --> ceph-volume raw activate successful for osd ID: 0
Dec  2 05:50:23 np0005542249 systemd[1]: libpod-3c2b50c4f4dc3f44381b40e8c6270e8f86826477f7683babe73917e99171df72.scope: Deactivated successfully.
Dec  2 05:50:23 np0005542249 systemd[1]: libpod-3c2b50c4f4dc3f44381b40e8c6270e8f86826477f7683babe73917e99171df72.scope: Consumed 1.301s CPU time.
Dec  2 05:50:23 np0005542249 podman[88736]: 2025-12-02 10:50:23.245118091 +0000 UTC m=+1.436674389 container died 3c2b50c4f4dc3f44381b40e8c6270e8f86826477f7683babe73917e99171df72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0-activate, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:50:23 np0005542249 systemd[1]: var-lib-containers-storage-overlay-930b73fc91cdcafef3c7d5203e78af4be89cc6c19c02ffdd1602fd1809c914b4-merged.mount: Deactivated successfully.
Dec  2 05:50:23 np0005542249 podman[88736]: 2025-12-02 10:50:23.352681539 +0000 UTC m=+1.544237807 container remove 3c2b50c4f4dc3f44381b40e8c6270e8f86826477f7683babe73917e99171df72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0-activate, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:50:23 np0005542249 podman[88942]: 2025-12-02 10:50:23.610000669 +0000 UTC m=+0.071553395 container create 13b62c875bf29e1fd647b09f26aeb88392fd8e7fd6f27f5d56d59c8433a0b222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  2 05:50:23 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee4d307ff59af50425527f7f4b0b792737ab73ee7d0267a7648552c7a3177b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:23 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee4d307ff59af50425527f7f4b0b792737ab73ee7d0267a7648552c7a3177b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:23 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee4d307ff59af50425527f7f4b0b792737ab73ee7d0267a7648552c7a3177b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:23 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee4d307ff59af50425527f7f4b0b792737ab73ee7d0267a7648552c7a3177b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:23 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee4d307ff59af50425527f7f4b0b792737ab73ee7d0267a7648552c7a3177b9/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:23 np0005542249 podman[88942]: 2025-12-02 10:50:23.57960119 +0000 UTC m=+0.041153956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:23 np0005542249 podman[88942]: 2025-12-02 10:50:23.680456163 +0000 UTC m=+0.142008939 container init 13b62c875bf29e1fd647b09f26aeb88392fd8e7fd6f27f5d56d59c8433a0b222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:50:23 np0005542249 podman[88942]: 2025-12-02 10:50:23.694070419 +0000 UTC m=+0.155623175 container start 13b62c875bf29e1fd647b09f26aeb88392fd8e7fd6f27f5d56d59c8433a0b222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:50:23 np0005542249 bash[88942]: 13b62c875bf29e1fd647b09f26aeb88392fd8e7fd6f27f5d56d59c8433a0b222
Dec  2 05:50:23 np0005542249 systemd[1]: Started Ceph osd.0 for 95bc4eaa-1a14-59bf-acf2-4b3da055547d.
Dec  2 05:50:23 np0005542249 ceph-osd[88961]: set uid:gid to 167:167 (ceph:ceph)
Dec  2 05:50:23 np0005542249 ceph-osd[88961]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Dec  2 05:50:23 np0005542249 ceph-osd[88961]: pidfile_write: ignore empty --pid-file
Dec  2 05:50:23 np0005542249 ceph-osd[88961]: bdev(0x5562833d9800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  2 05:50:23 np0005542249 ceph-osd[88961]: bdev(0x5562833d9800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  2 05:50:23 np0005542249 ceph-osd[88961]: bdev(0x5562833d9800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  2 05:50:23 np0005542249 ceph-osd[88961]: bdev(0x5562833d9800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  2 05:50:23 np0005542249 ceph-osd[88961]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  2 05:50:23 np0005542249 ceph-osd[88961]: bdev(0x55628421b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  2 05:50:23 np0005542249 ceph-osd[88961]: bdev(0x55628421b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  2 05:50:23 np0005542249 ceph-osd[88961]: bdev(0x55628421b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  2 05:50:23 np0005542249 ceph-osd[88961]: bdev(0x55628421b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  2 05:50:23 np0005542249 ceph-osd[88961]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec  2 05:50:23 np0005542249 ceph-osd[88961]: bdev(0x55628421b800 /var/lib/ceph/osd/ceph-0/block) close
Dec  2 05:50:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:50:23 np0005542249 ceph-osd[88961]: bdev(0x5562833d9800 /var/lib/ceph/osd/ceph-0/block) close
Dec  2 05:50:23 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:50:23 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Dec  2 05:50:23 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  2 05:50:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:50:23 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:50:23 np0005542249 ceph-mgr[75372]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Dec  2 05:50:23 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: load: jerasure load: lrc 
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bdev(0x55628429cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bdev(0x55628429cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bdev(0x55628429cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bdev(0x55628429cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bdev(0x55628429cc00 /var/lib/ceph/osd/ceph-0/block) close
Dec  2 05:50:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bdev(0x55628429cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bdev(0x55628429cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bdev(0x55628429cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bdev(0x55628429cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bdev(0x55628429cc00 /var/lib/ceph/osd/ceph-0/block) close
Dec  2 05:50:24 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:24 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:24 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bdev(0x55628429cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bdev(0x55628429cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bdev(0x55628429cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bdev(0x55628429cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bdev(0x55628429d400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bdev(0x55628429d400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bdev(0x55628429d400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bdev(0x55628429d400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bluefs mount
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bluefs mount shared_bdev_used = 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: RocksDB version: 7.9.2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Git sha 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: DB SUMMARY
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: DB Session ID:  L2UU7OFQP6CX350FQ2B3
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: CURRENT file:  CURRENT
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: IDENTITY file:  IDENTITY
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                         Options.error_if_exists: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.create_if_missing: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                         Options.paranoid_checks: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                                     Options.env: 0x55628426dc70
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                                Options.info_log: 0x5562834608a0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.max_file_opening_threads: 16
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                              Options.statistics: (nil)
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                               Options.use_fsync: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.max_log_file_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                         Options.allow_fallocate: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.use_direct_reads: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.create_missing_column_families: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                              Options.db_log_dir: 
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                                 Options.wal_dir: db.wal
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.advise_random_on_open: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.write_buffer_manager: 0x556284376460
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                            Options.rate_limiter: (nil)
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.unordered_write: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                               Options.row_cache: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                              Options.wal_filter: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.allow_ingest_behind: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.two_write_queues: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.manual_wal_flush: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.wal_compression: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.atomic_flush: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.log_readahead_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.allow_data_in_errors: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.db_host_id: __hostname__
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.max_background_jobs: 4
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.max_background_compactions: -1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.max_subcompactions: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.max_open_files: -1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.bytes_per_sync: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.max_background_flushes: -1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Compression algorithms supported:
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: #011kZSTD supported: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: #011kXpressCompression supported: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: #011kBZip2Compression supported: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: #011kLZ4Compression supported: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: #011kZlibCompression supported: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: #011kSnappyCompression supported: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562834602c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55628344d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:24 np0005542249 podman[89125]: 2025-12-02 10:50:24.64699196 +0000 UTC m=+0.071589156 container create 29fab40fe85b3e7d1ee7f8704714690cdb4cab6cc91d811af58e9cb3c5bcf407 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_curran, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562834602c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55628344d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562834602c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55628344d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562834602c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55628344d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562834602c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55628344d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562834602c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55628344d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562834602c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55628344d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556283460240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55628344d090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556283460240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55628344d090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556283460240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55628344d090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 78e37c74-bcda-4742-9d9d-9db520eb79eb
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672624646928, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672624647304, "job": 1, "event": "recovery_finished"}
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: freelist init
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: freelist _read_cfg
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bluefs umount
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bdev(0x55628429d400 /var/lib/ceph/osd/ceph-0/block) close
Dec  2 05:50:24 np0005542249 systemd[1]: Started libpod-conmon-29fab40fe85b3e7d1ee7f8704714690cdb4cab6cc91d811af58e9cb3c5bcf407.scope.
Dec  2 05:50:24 np0005542249 podman[89125]: 2025-12-02 10:50:24.607723666 +0000 UTC m=+0.032320912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:24 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:24 np0005542249 podman[89125]: 2025-12-02 10:50:24.752529222 +0000 UTC m=+0.177126388 container init 29fab40fe85b3e7d1ee7f8704714690cdb4cab6cc91d811af58e9cb3c5bcf407 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  2 05:50:24 np0005542249 podman[89125]: 2025-12-02 10:50:24.766217649 +0000 UTC m=+0.190814845 container start 29fab40fe85b3e7d1ee7f8704714690cdb4cab6cc91d811af58e9cb3c5bcf407 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:50:24 np0005542249 podman[89125]: 2025-12-02 10:50:24.770834607 +0000 UTC m=+0.195431753 container attach 29fab40fe85b3e7d1ee7f8704714690cdb4cab6cc91d811af58e9cb3c5bcf407 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_curran, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:50:24 np0005542249 cranky_curran[89335]: 167 167
Dec  2 05:50:24 np0005542249 systemd[1]: libpod-29fab40fe85b3e7d1ee7f8704714690cdb4cab6cc91d811af58e9cb3c5bcf407.scope: Deactivated successfully.
Dec  2 05:50:24 np0005542249 podman[89125]: 2025-12-02 10:50:24.77491954 +0000 UTC m=+0.199516736 container died 29fab40fe85b3e7d1ee7f8704714690cdb4cab6cc91d811af58e9cb3c5bcf407 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_curran, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:50:24 np0005542249 systemd[1]: var-lib-containers-storage-overlay-e97b1fe4971ba40bcb2d58caca4ffa8dd0b1819d09af206d3092521142d5b29f-merged.mount: Deactivated successfully.
Dec  2 05:50:24 np0005542249 podman[89125]: 2025-12-02 10:50:24.824489087 +0000 UTC m=+0.249086253 container remove 29fab40fe85b3e7d1ee7f8704714690cdb4cab6cc91d811af58e9cb3c5bcf407 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_curran, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  2 05:50:24 np0005542249 systemd[1]: libpod-conmon-29fab40fe85b3e7d1ee7f8704714690cdb4cab6cc91d811af58e9cb3c5bcf407.scope: Deactivated successfully.
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bdev(0x55628429d400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bdev(0x55628429d400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bdev(0x55628429d400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bdev(0x55628429d400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bluefs mount
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bluefs mount shared_bdev_used = 4718592
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: RocksDB version: 7.9.2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Git sha 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: DB SUMMARY
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: DB Session ID:  L2UU7OFQP6CX350FQ2B2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: CURRENT file:  CURRENT
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: IDENTITY file:  IDENTITY
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                         Options.error_if_exists: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.create_if_missing: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                         Options.paranoid_checks: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                                     Options.env: 0x55628441eb60
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                                Options.info_log: 0x556284269a20
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.max_file_opening_threads: 16
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                              Options.statistics: (nil)
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                               Options.use_fsync: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.max_log_file_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                         Options.allow_fallocate: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.use_direct_reads: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.create_missing_column_families: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                              Options.db_log_dir: 
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                                 Options.wal_dir: db.wal
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.advise_random_on_open: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.write_buffer_manager: 0x5562843766e0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                            Options.rate_limiter: (nil)
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.unordered_write: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                               Options.row_cache: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                              Options.wal_filter: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.allow_ingest_behind: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.two_write_queues: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.manual_wal_flush: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.wal_compression: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.atomic_flush: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.log_readahead_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.allow_data_in_errors: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.db_host_id: __hostname__
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.max_background_jobs: 4
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.max_background_compactions: -1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.max_subcompactions: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.max_open_files: -1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.bytes_per_sync: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.max_background_flushes: -1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Compression algorithms supported:
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: #011kZSTD supported: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: #011kXpressCompression supported: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: #011kBZip2Compression supported: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: #011kLZ4Compression supported: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: #011kZlibCompression supported: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: #011kSnappyCompression supported: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556283460a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55628344d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556283460a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55628344d1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556283460a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55628344d1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556283460a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55628344d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556283460a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55628344d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556283460a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55628344d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556283460a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55628344d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556283460380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55628344d090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556283460380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55628344d090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556283460380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55628344d090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 78e37c74-bcda-4742-9d9d-9db520eb79eb
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672624902571, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672624907214, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672624, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "78e37c74-bcda-4742-9d9d-9db520eb79eb", "db_session_id": "L2UU7OFQP6CX350FQ2B2", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672624909725, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672624, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "78e37c74-bcda-4742-9d9d-9db520eb79eb", "db_session_id": "L2UU7OFQP6CX350FQ2B2", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672624912337, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672624, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "78e37c74-bcda-4742-9d9d-9db520eb79eb", "db_session_id": "L2UU7OFQP6CX350FQ2B2", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672624913644, "job": 1, "event": "recovery_finished"}
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5562835ba000
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: DB pointer 0x55628435fa00
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55628344d1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55628344d1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55628344d1f0#2 capacity: 460.80 MB usag
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: _get_class not permitted to load lua
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: _get_class not permitted to load sdk
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: _get_class not permitted to load test_remote_reads
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: osd.0 0 load_pgs
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: osd.0 0 load_pgs opened 0 pgs
Dec  2 05:50:24 np0005542249 ceph-osd[88961]: osd.0 0 log_to_monitors true
Dec  2 05:50:24 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0[88957]: 2025-12-02T10:50:24.938+0000 7efdabc56740 -1 osd.0 0 log_to_monitors true
Dec  2 05:50:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Dec  2 05:50:24 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3230894680,v1:192.168.122.100:6803/3230894680]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec  2 05:50:24 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  2 05:50:25 np0005542249 podman[89581]: 2025-12-02 10:50:25.168336585 +0000 UTC m=+0.061025796 container create 421e64e52cdbc1be3d4ac2cb442f924cef9affa160da433ddec6ccc31ca83ad3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1-activate-test, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  2 05:50:25 np0005542249 systemd[1]: Started libpod-conmon-421e64e52cdbc1be3d4ac2cb442f924cef9affa160da433ddec6ccc31ca83ad3.scope.
Dec  2 05:50:25 np0005542249 podman[89581]: 2025-12-02 10:50:25.13846933 +0000 UTC m=+0.031158601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:25 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:25 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/667019082d317cd44260b9e83cd4c150d940c5e0bae17a6b093ba6d871333fee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:25 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/667019082d317cd44260b9e83cd4c150d940c5e0bae17a6b093ba6d871333fee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:25 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/667019082d317cd44260b9e83cd4c150d940c5e0bae17a6b093ba6d871333fee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:25 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/667019082d317cd44260b9e83cd4c150d940c5e0bae17a6b093ba6d871333fee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:25 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/667019082d317cd44260b9e83cd4c150d940c5e0bae17a6b093ba6d871333fee/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:25 np0005542249 podman[89581]: 2025-12-02 10:50:25.304366498 +0000 UTC m=+0.197055749 container init 421e64e52cdbc1be3d4ac2cb442f924cef9affa160da433ddec6ccc31ca83ad3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1-activate-test, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:50:25 np0005542249 podman[89581]: 2025-12-02 10:50:25.322873368 +0000 UTC m=+0.215562539 container start 421e64e52cdbc1be3d4ac2cb442f924cef9affa160da433ddec6ccc31ca83ad3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1-activate-test, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  2 05:50:25 np0005542249 podman[89581]: 2025-12-02 10:50:25.326747885 +0000 UTC m=+0.219437076 container attach 421e64e52cdbc1be3d4ac2cb442f924cef9affa160da433ddec6ccc31ca83ad3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  2 05:50:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Dec  2 05:50:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  2 05:50:25 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3230894680,v1:192.168.122.100:6803/3230894680]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec  2 05:50:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Dec  2 05:50:25 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Dec  2 05:50:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Dec  2 05:50:25 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3230894680,v1:192.168.122.100:6803/3230894680]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  2 05:50:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Dec  2 05:50:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  2 05:50:25 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  2 05:50:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  2 05:50:25 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  2 05:50:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  2 05:50:25 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  2 05:50:25 np0005542249 ceph-mon[75081]: Deploying daemon osd.1 on compute-0
Dec  2 05:50:25 np0005542249 ceph-mon[75081]: from='osd.0 [v2:192.168.122.100:6802/3230894680,v1:192.168.122.100:6803/3230894680]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec  2 05:50:25 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  2 05:50:25 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  2 05:50:25 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  2 05:50:25 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec  2 05:50:25 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec  2 05:50:25 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1-activate-test[89597]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec  2 05:50:25 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1-activate-test[89597]:                            [--no-systemd] [--no-tmpfs]
Dec  2 05:50:25 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1-activate-test[89597]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec  2 05:50:25 np0005542249 systemd[1]: libpod-421e64e52cdbc1be3d4ac2cb442f924cef9affa160da433ddec6ccc31ca83ad3.scope: Deactivated successfully.
Dec  2 05:50:25 np0005542249 podman[89581]: 2025-12-02 10:50:25.987206408 +0000 UTC m=+0.879895609 container died 421e64e52cdbc1be3d4ac2cb442f924cef9affa160da433ddec6ccc31ca83ad3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  2 05:50:26 np0005542249 systemd[1]: var-lib-containers-storage-overlay-667019082d317cd44260b9e83cd4c150d940c5e0bae17a6b093ba6d871333fee-merged.mount: Deactivated successfully.
Dec  2 05:50:26 np0005542249 podman[89581]: 2025-12-02 10:50:26.052489628 +0000 UTC m=+0.945178829 container remove 421e64e52cdbc1be3d4ac2cb442f924cef9affa160da433ddec6ccc31ca83ad3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1-activate-test, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:50:26 np0005542249 systemd[1]: libpod-conmon-421e64e52cdbc1be3d4ac2cb442f924cef9affa160da433ddec6ccc31ca83ad3.scope: Deactivated successfully.
Dec  2 05:50:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_10:50:26
Dec  2 05:50:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 05:50:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 05:50:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] No pools available
Dec  2 05:50:26 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 05:50:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 05:50:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:50:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:50:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 05:50:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:50:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:50:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:50:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:50:26 np0005542249 systemd[1]: Reloading.
Dec  2 05:50:26 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:50:26 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:50:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Dec  2 05:50:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  2 05:50:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3230894680,v1:192.168.122.100:6803/3230894680]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  2 05:50:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Dec  2 05:50:26 np0005542249 ceph-osd[88961]: osd.0 0 done with init, starting boot process
Dec  2 05:50:26 np0005542249 ceph-osd[88961]: osd.0 0 start_boot
Dec  2 05:50:26 np0005542249 ceph-osd[88961]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec  2 05:50:26 np0005542249 ceph-osd[88961]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec  2 05:50:26 np0005542249 ceph-osd[88961]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec  2 05:50:26 np0005542249 ceph-osd[88961]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec  2 05:50:26 np0005542249 ceph-osd[88961]: osd.0 0  bench count 12288000 bsize 4 KiB
Dec  2 05:50:26 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Dec  2 05:50:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  2 05:50:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  2 05:50:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  2 05:50:26 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  2 05:50:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  2 05:50:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  2 05:50:26 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  2 05:50:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  2 05:50:26 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  2 05:50:26 np0005542249 ceph-mgr[75372]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3230894680; not ready for session (expect reconnect)
Dec  2 05:50:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  2 05:50:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  2 05:50:26 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  2 05:50:26 np0005542249 ceph-mon[75081]: from='osd.0 [v2:192.168.122.100:6802/3230894680,v1:192.168.122.100:6803/3230894680]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec  2 05:50:26 np0005542249 ceph-mon[75081]: from='osd.0 [v2:192.168.122.100:6802/3230894680,v1:192.168.122.100:6803/3230894680]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  2 05:50:26 np0005542249 ceph-mon[75081]: from='osd.0 [v2:192.168.122.100:6802/3230894680,v1:192.168.122.100:6803/3230894680]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  2 05:50:26 np0005542249 systemd[1]: Reloading.
Dec  2 05:50:26 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:50:26 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:50:26 np0005542249 systemd[1]: Starting Ceph osd.1 for 95bc4eaa-1a14-59bf-acf2-4b3da055547d...
Dec  2 05:50:26 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  2 05:50:27 np0005542249 podman[89759]: 2025-12-02 10:50:27.295101683 +0000 UTC m=+0.074441345 container create f77dfcd148ac823e6dfad429cbc9c4e8539e9641e989f16fce4ce35842595fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1-activate, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:50:27 np0005542249 podman[89759]: 2025-12-02 10:50:27.254116892 +0000 UTC m=+0.033456554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:27 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:27 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bec47b6a283469ed062320e6f7484bea40b458eed92861c0c999cbc97065e073/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:27 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bec47b6a283469ed062320e6f7484bea40b458eed92861c0c999cbc97065e073/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:27 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bec47b6a283469ed062320e6f7484bea40b458eed92861c0c999cbc97065e073/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:27 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bec47b6a283469ed062320e6f7484bea40b458eed92861c0c999cbc97065e073/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:27 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bec47b6a283469ed062320e6f7484bea40b458eed92861c0c999cbc97065e073/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:27 np0005542249 podman[89759]: 2025-12-02 10:50:27.411202327 +0000 UTC m=+0.190541969 container init f77dfcd148ac823e6dfad429cbc9c4e8539e9641e989f16fce4ce35842595fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  2 05:50:27 np0005542249 podman[89759]: 2025-12-02 10:50:27.43378758 +0000 UTC m=+0.213127242 container start f77dfcd148ac823e6dfad429cbc9c4e8539e9641e989f16fce4ce35842595fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:50:27 np0005542249 podman[89759]: 2025-12-02 10:50:27.443521418 +0000 UTC m=+0.222861070 container attach f77dfcd148ac823e6dfad429cbc9c4e8539e9641e989f16fce4ce35842595fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  2 05:50:27 np0005542249 ceph-mgr[75372]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3230894680; not ready for session (expect reconnect)
Dec  2 05:50:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  2 05:50:27 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  2 05:50:27 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  2 05:50:28 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1-activate[89774]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  2 05:50:28 np0005542249 bash[89759]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  2 05:50:28 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1-activate[89774]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Dec  2 05:50:28 np0005542249 bash[89759]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Dec  2 05:50:28 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1-activate[89774]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Dec  2 05:50:28 np0005542249 bash[89759]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Dec  2 05:50:28 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1-activate[89774]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec  2 05:50:28 np0005542249 bash[89759]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec  2 05:50:28 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1-activate[89774]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec  2 05:50:28 np0005542249 bash[89759]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec  2 05:50:28 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1-activate[89774]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  2 05:50:28 np0005542249 bash[89759]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  2 05:50:28 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1-activate[89774]: --> ceph-volume raw activate successful for osd ID: 1
Dec  2 05:50:28 np0005542249 bash[89759]: --> ceph-volume raw activate successful for osd ID: 1
Dec  2 05:50:28 np0005542249 ceph-mgr[75372]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3230894680; not ready for session (expect reconnect)
Dec  2 05:50:28 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  2 05:50:28 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  2 05:50:28 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  2 05:50:28 np0005542249 systemd[1]: libpod-f77dfcd148ac823e6dfad429cbc9c4e8539e9641e989f16fce4ce35842595fb3.scope: Deactivated successfully.
Dec  2 05:50:28 np0005542249 podman[89759]: 2025-12-02 10:50:28.527879266 +0000 UTC m=+1.307218918 container died f77dfcd148ac823e6dfad429cbc9c4e8539e9641e989f16fce4ce35842595fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:50:28 np0005542249 systemd[1]: libpod-f77dfcd148ac823e6dfad429cbc9c4e8539e9641e989f16fce4ce35842595fb3.scope: Consumed 1.110s CPU time.
Dec  2 05:50:28 np0005542249 systemd[1]: var-lib-containers-storage-overlay-bec47b6a283469ed062320e6f7484bea40b458eed92861c0c999cbc97065e073-merged.mount: Deactivated successfully.
Dec  2 05:50:28 np0005542249 podman[89759]: 2025-12-02 10:50:28.658719857 +0000 UTC m=+1.438059479 container remove f77dfcd148ac823e6dfad429cbc9c4e8539e9641e989f16fce4ce35842595fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1-activate, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:50:28 np0005542249 podman[89947]: 2025-12-02 10:50:28.909032992 +0000 UTC m=+0.059315277 container create 2ab4bdfb1336673e273752f56f87fc01cb41214970130231b1c970afc70361db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  2 05:50:28 np0005542249 podman[89947]: 2025-12-02 10:50:28.878640334 +0000 UTC m=+0.028922669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:28 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc466066dc353ec43e3bad50d277240780b03b6e2dad9a5065a292db669592d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:28 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc466066dc353ec43e3bad50d277240780b03b6e2dad9a5065a292db669592d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:28 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc466066dc353ec43e3bad50d277240780b03b6e2dad9a5065a292db669592d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:28 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc466066dc353ec43e3bad50d277240780b03b6e2dad9a5065a292db669592d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:28 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc466066dc353ec43e3bad50d277240780b03b6e2dad9a5065a292db669592d/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:28 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  2 05:50:29 np0005542249 podman[89947]: 2025-12-02 10:50:29.019531161 +0000 UTC m=+0.169813516 container init 2ab4bdfb1336673e273752f56f87fc01cb41214970130231b1c970afc70361db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:50:29 np0005542249 podman[89947]: 2025-12-02 10:50:29.027133411 +0000 UTC m=+0.177415736 container start 2ab4bdfb1336673e273752f56f87fc01cb41214970130231b1c970afc70361db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:50:29 np0005542249 bash[89947]: 2ab4bdfb1336673e273752f56f87fc01cb41214970130231b1c970afc70361db
Dec  2 05:50:29 np0005542249 systemd[1]: Started Ceph osd.1 for 95bc4eaa-1a14-59bf-acf2-4b3da055547d.
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: set uid:gid to 167:167 (ceph:ceph)
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: pidfile_write: ignore empty --pid-file
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dc923d800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dc923d800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dc923d800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dc923d800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dca07f800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dca07f800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dca07f800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dca07f800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dca07f800 /var/lib/ceph/osd/ceph-1/block) close
Dec  2 05:50:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dc923d800 /var/lib/ceph/osd/ceph-1/block) close
Dec  2 05:50:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:50:29 np0005542249 ceph-mgr[75372]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3230894680; not ready for session (expect reconnect)
Dec  2 05:50:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  2 05:50:29 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  2 05:50:29 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  2 05:50:29 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:50:29 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Dec  2 05:50:29 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec  2 05:50:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:50:29 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:50:29 np0005542249 ceph-mgr[75372]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Dec  2 05:50:29 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: load: jerasure load: lrc 
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dca100c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dca100c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dca100c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dca100c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dca100c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  2 05:50:29 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:29 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:29 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dca100c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dca100c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dca100c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dca100c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dca100c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dca100c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dca100c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dca100c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dca100c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dca101400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dca101400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dca101400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dca101400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bluefs mount
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bluefs mount shared_bdev_used = 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: RocksDB version: 7.9.2
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Git sha 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: DB SUMMARY
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: DB Session ID:  VQ1CA55XQ014GQAUCVKC
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: CURRENT file:  CURRENT
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: IDENTITY file:  IDENTITY
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                         Options.error_if_exists: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                       Options.create_if_missing: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                         Options.paranoid_checks: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                                     Options.env: 0x556dca0d1c70
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                                Options.info_log: 0x556dc92c48a0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.max_file_opening_threads: 16
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                              Options.statistics: (nil)
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                               Options.use_fsync: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                       Options.max_log_file_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                         Options.allow_fallocate: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                        Options.use_direct_reads: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.create_missing_column_families: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                              Options.db_log_dir: 
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                                 Options.wal_dir: db.wal
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.advise_random_on_open: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                    Options.write_buffer_manager: 0x556dca1da460
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                            Options.rate_limiter: (nil)
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.unordered_write: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                               Options.row_cache: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                              Options.wal_filter: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.allow_ingest_behind: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.two_write_queues: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.manual_wal_flush: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.wal_compression: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.atomic_flush: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                 Options.log_readahead_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.allow_data_in_errors: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.db_host_id: __hostname__
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.max_background_jobs: 4
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.max_background_compactions: -1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.max_subcompactions: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                          Options.max_open_files: -1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                          Options.bytes_per_sync: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.max_background_flushes: -1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Compression algorithms supported:
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: #011kZSTD supported: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: #011kXpressCompression supported: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: #011kBZip2Compression supported: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: #011kLZ4Compression supported: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: #011kZlibCompression supported: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: #011kSnappyCompression supported: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556dc92c42c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556dc92b11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556dc92c42c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556dc92b11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556dc92c42c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556dc92b11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556dc92c42c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556dc92b11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556dc92c42c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556dc92b11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556dc92c42c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556dc92b11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556dc92c42c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556dc92b11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556dc92c4240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556dc92b1090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556dc92c4240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556dc92b1090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556dc92c4240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556dc92b1090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e7b68842-9973-4119-adee-ed36ff94b180
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672629950608, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672629950846, "job": 1, "event": "recovery_finished"}
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: freelist init
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: freelist _read_cfg
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bluefs umount
Dec  2 05:50:29 np0005542249 ceph-osd[89966]: bdev(0x556dca101400 /var/lib/ceph/osd/ceph-1/block) close
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: bdev(0x556dca101400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: bdev(0x556dca101400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: bdev(0x556dca101400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: bdev(0x556dca101400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: bluefs mount
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: bluefs mount shared_bdev_used = 4718592
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: RocksDB version: 7.9.2
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Git sha 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: DB SUMMARY
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: DB Session ID:  VQ1CA55XQ014GQAUCVKD
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: CURRENT file:  CURRENT
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: IDENTITY file:  IDENTITY
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                         Options.error_if_exists: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                       Options.create_if_missing: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                         Options.paranoid_checks: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                                     Options.env: 0x556dca282460
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                                Options.info_log: 0x556dc9297ca0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.max_file_opening_threads: 16
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                              Options.statistics: (nil)
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                               Options.use_fsync: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                       Options.max_log_file_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                         Options.allow_fallocate: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                        Options.use_direct_reads: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.create_missing_column_families: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                              Options.db_log_dir: 
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                                 Options.wal_dir: db.wal
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.advise_random_on_open: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                    Options.write_buffer_manager: 0x556dca1da6e0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                            Options.rate_limiter: (nil)
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.unordered_write: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                               Options.row_cache: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                              Options.wal_filter: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.allow_ingest_behind: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.two_write_queues: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.manual_wal_flush: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.wal_compression: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.atomic_flush: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                 Options.log_readahead_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.allow_data_in_errors: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.db_host_id: __hostname__
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.max_background_jobs: 4
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.max_background_compactions: -1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.max_subcompactions: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                          Options.max_open_files: -1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                          Options.bytes_per_sync: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.max_background_flushes: -1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Compression algorithms supported:
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: 	kZSTD supported: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: 	kXpressCompression supported: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: 	kBZip2Compression supported: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: 	kLZ4Compression supported: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: 	kZlibCompression supported: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: 	kSnappyCompression supported: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556dca0cd500)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556dc92b11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556dca0cd500)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556dc92b11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556dca0cd500)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556dc92b11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556dca0cd500)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556dc92b11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556dca0cd500)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556dc92b11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556dca0cd500)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556dc92b11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556dca0cd500)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556dc92b11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556dca0cd580)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556dc92b1090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556dca0cd580)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556dc92b1090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556dca0cd580)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556dc92b1090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e7b68842-9973-4119-adee-ed36ff94b180
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672630239603, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672630251682, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672630, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7b68842-9973-4119-adee-ed36ff94b180", "db_session_id": "VQ1CA55XQ014GQAUCVKD", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672630258241, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672630, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7b68842-9973-4119-adee-ed36ff94b180", "db_session_id": "VQ1CA55XQ014GQAUCVKD", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672630269252, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672630, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7b68842-9973-4119-adee-ed36ff94b180", "db_session_id": "VQ1CA55XQ014GQAUCVKD", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672630278250, "job": 1, "event": "recovery_finished"}
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec  2 05:50:30 np0005542249 podman[90503]: 2025-12-02 10:50:30.322233194 +0000 UTC m=+0.056007156 container create 623d81118ca4b1af2836327a4e7ecca3f210b52151365a0eb7dcc9d9d664770b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x556dc941fc00
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: DB pointer 0x556dca1c3a00
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556dc92b11f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556dc92b11f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556dc92b11f0#2 capacity: 460.80 MB usag
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: _get_class not permitted to load lua
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: _get_class not permitted to load sdk
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: _get_class not permitted to load test_remote_reads
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: osd.1 0 load_pgs
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: osd.1 0 load_pgs opened 0 pgs
Dec  2 05:50:30 np0005542249 ceph-osd[89966]: osd.1 0 log_to_monitors true
Dec  2 05:50:30 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1[89962]: 2025-12-02T10:50:30.341+0000 7f389686f740 -1 osd.1 0 log_to_monitors true
Dec  2 05:50:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Dec  2 05:50:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/923166297,v1:192.168.122.100:6807/923166297]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec  2 05:50:30 np0005542249 systemd[1]: Started libpod-conmon-623d81118ca4b1af2836327a4e7ecca3f210b52151365a0eb7dcc9d9d664770b.scope.
Dec  2 05:50:30 np0005542249 podman[90503]: 2025-12-02 10:50:30.293406858 +0000 UTC m=+0.027180850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:30 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:30 np0005542249 podman[90503]: 2025-12-02 10:50:30.427808006 +0000 UTC m=+0.161582018 container init 623d81118ca4b1af2836327a4e7ecca3f210b52151365a0eb7dcc9d9d664770b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 05:50:30 np0005542249 podman[90503]: 2025-12-02 10:50:30.438552283 +0000 UTC m=+0.172326255 container start 623d81118ca4b1af2836327a4e7ecca3f210b52151365a0eb7dcc9d9d664770b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_noether, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:50:30 np0005542249 sad_noether[90551]: 167 167
Dec  2 05:50:30 np0005542249 systemd[1]: libpod-623d81118ca4b1af2836327a4e7ecca3f210b52151365a0eb7dcc9d9d664770b.scope: Deactivated successfully.
Dec  2 05:50:30 np0005542249 conmon[90551]: conmon 623d81118ca4b1af2836 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-623d81118ca4b1af2836327a4e7ecca3f210b52151365a0eb7dcc9d9d664770b.scope/container/memory.events
Dec  2 05:50:30 np0005542249 podman[90503]: 2025-12-02 10:50:30.449404232 +0000 UTC m=+0.183178214 container attach 623d81118ca4b1af2836327a4e7ecca3f210b52151365a0eb7dcc9d9d664770b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  2 05:50:30 np0005542249 podman[90503]: 2025-12-02 10:50:30.450471421 +0000 UTC m=+0.184245403 container died 623d81118ca4b1af2836327a4e7ecca3f210b52151365a0eb7dcc9d9d664770b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_noether, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Dec  2 05:50:30 np0005542249 systemd[1]: var-lib-containers-storage-overlay-b54b0caeb19fc396b836202b6303abdcb0d0cceca199b4cc419290520c746a5d-merged.mount: Deactivated successfully.
Dec  2 05:50:30 np0005542249 ceph-mgr[75372]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3230894680; not ready for session (expect reconnect)
Dec  2 05:50:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  2 05:50:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  2 05:50:30 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  2 05:50:30 np0005542249 podman[90503]: 2025-12-02 10:50:30.5124119 +0000 UTC m=+0.246185862 container remove 623d81118ca4b1af2836327a4e7ecca3f210b52151365a0eb7dcc9d9d664770b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  2 05:50:30 np0005542249 systemd[1]: libpod-conmon-623d81118ca4b1af2836327a4e7ecca3f210b52151365a0eb7dcc9d9d664770b.scope: Deactivated successfully.
Dec  2 05:50:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Dec  2 05:50:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  2 05:50:30 np0005542249 ceph-mon[75081]: Deploying daemon osd.2 on compute-0
Dec  2 05:50:30 np0005542249 ceph-mon[75081]: from='osd.1 [v2:192.168.122.100:6806/923166297,v1:192.168.122.100:6807/923166297]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec  2 05:50:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/923166297,v1:192.168.122.100:6807/923166297]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec  2 05:50:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Dec  2 05:50:30 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Dec  2 05:50:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Dec  2 05:50:30 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  2 05:50:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/923166297,v1:192.168.122.100:6807/923166297]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  2 05:50:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Dec  2 05:50:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  2 05:50:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  2 05:50:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  2 05:50:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  2 05:50:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  2 05:50:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  2 05:50:30 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  2 05:50:30 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  2 05:50:30 np0005542249 ceph-osd[88961]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 25.376 iops: 6496.322 elapsed_sec: 0.462
Dec  2 05:50:30 np0005542249 ceph-osd[88961]: log_channel(cluster) log [WRN] : OSD bench result of 6496.321833 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  2 05:50:30 np0005542249 ceph-osd[88961]: osd.0 0 waiting for initial osdmap
Dec  2 05:50:30 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0[88957]: 2025-12-02T10:50:30.648+0000 7efda83ed640 -1 osd.0 0 waiting for initial osdmap
Dec  2 05:50:30 np0005542249 ceph-osd[88961]: osd.0 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec  2 05:50:30 np0005542249 ceph-osd[88961]: osd.0 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec  2 05:50:30 np0005542249 ceph-osd[88961]: osd.0 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec  2 05:50:30 np0005542249 ceph-osd[88961]: osd.0 9 check_osdmap_features require_osd_release unknown -> reef
Dec  2 05:50:30 np0005542249 ceph-osd[88961]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  2 05:50:30 np0005542249 ceph-osd[88961]: osd.0 9 set_numa_affinity not setting numa affinity
Dec  2 05:50:30 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-0[88957]: 2025-12-02T10:50:30.668+0000 7efda31fe640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  2 05:50:30 np0005542249 ceph-osd[88961]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Dec  2 05:50:30 np0005542249 python3[90604]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:50:30 np0005542249 podman[90609]: 2025-12-02 10:50:30.841351056 +0000 UTC m=+0.069078017 container create 21db183795afc2c8452ab35507222e482fa3a12719d176b08f71283b049594cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-2-activate-test, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:50:30 np0005542249 podman[90621]: 2025-12-02 10:50:30.862332855 +0000 UTC m=+0.048506549 container create 1d4cf33e2a9c728eac70e757b86088ff9f760de1ba8981c0edac301c6ccd53b0 (image=quay.io/ceph/ceph:v18, name=eager_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  2 05:50:30 np0005542249 systemd[1]: Started libpod-conmon-21db183795afc2c8452ab35507222e482fa3a12719d176b08f71283b049594cd.scope.
Dec  2 05:50:30 np0005542249 systemd[1]: Started libpod-conmon-1d4cf33e2a9c728eac70e757b86088ff9f760de1ba8981c0edac301c6ccd53b0.scope.
Dec  2 05:50:30 np0005542249 podman[90609]: 2025-12-02 10:50:30.819541715 +0000 UTC m=+0.047268646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:30 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:30 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:30 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b431d659bfc0ebbb4ab6aa3afa6fba19f45e1d74b768609ea7738ea02428b0aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:30 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0165cf6bca420826e1e39d2f81dd2def58d5fb3f7ea9eeaab12f6e8b802d76e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:30 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0165cf6bca420826e1e39d2f81dd2def58d5fb3f7ea9eeaab12f6e8b802d76e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:30 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0165cf6bca420826e1e39d2f81dd2def58d5fb3f7ea9eeaab12f6e8b802d76e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:30 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b431d659bfc0ebbb4ab6aa3afa6fba19f45e1d74b768609ea7738ea02428b0aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:30 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b431d659bfc0ebbb4ab6aa3afa6fba19f45e1d74b768609ea7738ea02428b0aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:30 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b431d659bfc0ebbb4ab6aa3afa6fba19f45e1d74b768609ea7738ea02428b0aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:30 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b431d659bfc0ebbb4ab6aa3afa6fba19f45e1d74b768609ea7738ea02428b0aa/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:30 np0005542249 podman[90621]: 2025-12-02 10:50:30.835940927 +0000 UTC m=+0.022114641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:50:30 np0005542249 podman[90609]: 2025-12-02 10:50:30.937414667 +0000 UTC m=+0.165141598 container init 21db183795afc2c8452ab35507222e482fa3a12719d176b08f71283b049594cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-2-activate-test, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:50:30 np0005542249 podman[90609]: 2025-12-02 10:50:30.946397644 +0000 UTC m=+0.174124595 container start 21db183795afc2c8452ab35507222e482fa3a12719d176b08f71283b049594cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:50:30 np0005542249 podman[90621]: 2025-12-02 10:50:30.948159534 +0000 UTC m=+0.134333248 container init 1d4cf33e2a9c728eac70e757b86088ff9f760de1ba8981c0edac301c6ccd53b0 (image=quay.io/ceph/ceph:v18, name=eager_brattain, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:50:30 np0005542249 podman[90609]: 2025-12-02 10:50:30.951097314 +0000 UTC m=+0.178824265 container attach 21db183795afc2c8452ab35507222e482fa3a12719d176b08f71283b049594cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-2-activate-test, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  2 05:50:30 np0005542249 podman[90621]: 2025-12-02 10:50:30.957178322 +0000 UTC m=+0.143352016 container start 1d4cf33e2a9c728eac70e757b86088ff9f760de1ba8981c0edac301c6ccd53b0 (image=quay.io/ceph/ceph:v18, name=eager_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  2 05:50:30 np0005542249 podman[90621]: 2025-12-02 10:50:30.960330439 +0000 UTC m=+0.146504133 container attach 1d4cf33e2a9c728eac70e757b86088ff9f760de1ba8981c0edac301c6ccd53b0 (image=quay.io/ceph/ceph:v18, name=eager_brattain, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:50:30 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  2 05:50:31 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec  2 05:50:31 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec  2 05:50:31 np0005542249 ceph-mgr[75372]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3230894680; not ready for session (expect reconnect)
Dec  2 05:50:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  2 05:50:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  2 05:50:31 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  2 05:50:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec  2 05:50:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1988790101' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  2 05:50:31 np0005542249 eager_brattain[90645]: 
Dec  2 05:50:31 np0005542249 eager_brattain[90645]: {"fsid":"95bc4eaa-1a14-59bf-acf2-4b3da055547d","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":112,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":9,"num_osds":3,"num_up_osds":0,"osd_up_since":0,"num_in_osds":3,"osd_in_since":1764672613,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-02T10:50:28.981301+0000","services":{}},"progress_events":{}}
Dec  2 05:50:31 np0005542249 systemd[1]: libpod-1d4cf33e2a9c728eac70e757b86088ff9f760de1ba8981c0edac301c6ccd53b0.scope: Deactivated successfully.
Dec  2 05:50:31 np0005542249 podman[90621]: 2025-12-02 10:50:31.541513614 +0000 UTC m=+0.727687348 container died 1d4cf33e2a9c728eac70e757b86088ff9f760de1ba8981c0edac301c6ccd53b0 (image=quay.io/ceph/ceph:v18, name=eager_brattain, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:50:31 np0005542249 systemd[1]: var-lib-containers-storage-overlay-f0165cf6bca420826e1e39d2f81dd2def58d5fb3f7ea9eeaab12f6e8b802d76e-merged.mount: Deactivated successfully.
Dec  2 05:50:31 np0005542249 podman[90621]: 2025-12-02 10:50:31.600901443 +0000 UTC m=+0.787075137 container remove 1d4cf33e2a9c728eac70e757b86088ff9f760de1ba8981c0edac301c6ccd53b0 (image=quay.io/ceph/ceph:v18, name=eager_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:50:31 np0005542249 systemd[1]: libpod-conmon-1d4cf33e2a9c728eac70e757b86088ff9f760de1ba8981c0edac301c6ccd53b0.scope: Deactivated successfully.
Dec  2 05:50:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Dec  2 05:50:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  2 05:50:31 np0005542249 ceph-osd[88961]: osd.0 9 tick checking mon for new map
Dec  2 05:50:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/923166297,v1:192.168.122.100:6807/923166297]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  2 05:50:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Dec  2 05:50:31 np0005542249 ceph-osd[89966]: osd.1 0 done with init, starting boot process
Dec  2 05:50:31 np0005542249 ceph-osd[89966]: osd.1 0 start_boot
Dec  2 05:50:31 np0005542249 ceph-osd[89966]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec  2 05:50:31 np0005542249 ceph-osd[89966]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec  2 05:50:31 np0005542249 ceph-osd[89966]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec  2 05:50:31 np0005542249 ceph-osd[89966]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec  2 05:50:31 np0005542249 ceph-osd[89966]: osd.1 0  bench count 12288000 bsize 4 KiB
Dec  2 05:50:31 np0005542249 systemd[1]: libpod-21db183795afc2c8452ab35507222e482fa3a12719d176b08f71283b049594cd.scope: Deactivated successfully.
Dec  2 05:50:31 np0005542249 podman[90609]: 2025-12-02 10:50:31.647318003 +0000 UTC m=+0.875044974 container died 21db183795afc2c8452ab35507222e482fa3a12719d176b08f71283b049594cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-2-activate-test, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  2 05:50:31 np0005542249 ceph-mon[75081]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3230894680,v1:192.168.122.100:6803/3230894680] boot
Dec  2 05:50:31 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Dec  2 05:50:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  2 05:50:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  2 05:50:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  2 05:50:31 np0005542249 ceph-osd[88961]: osd.0 10 state: booting -> active
Dec  2 05:50:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  2 05:50:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  2 05:50:31 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  2 05:50:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  2 05:50:31 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  2 05:50:31 np0005542249 ceph-mgr[75372]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/923166297; not ready for session (expect reconnect)
Dec  2 05:50:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  2 05:50:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  2 05:50:31 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  2 05:50:31 np0005542249 ceph-mon[75081]: from='osd.1 [v2:192.168.122.100:6806/923166297,v1:192.168.122.100:6807/923166297]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec  2 05:50:31 np0005542249 ceph-mon[75081]: from='osd.1 [v2:192.168.122.100:6806/923166297,v1:192.168.122.100:6807/923166297]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  2 05:50:31 np0005542249 ceph-mon[75081]: OSD bench result of 6496.321833 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  2 05:50:31 np0005542249 systemd[1]: var-lib-containers-storage-overlay-b431d659bfc0ebbb4ab6aa3afa6fba19f45e1d74b768609ea7738ea02428b0aa-merged.mount: Deactivated successfully.
Dec  2 05:50:31 np0005542249 podman[90609]: 2025-12-02 10:50:31.793677752 +0000 UTC m=+1.021404743 container remove 21db183795afc2c8452ab35507222e482fa3a12719d176b08f71283b049594cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-2-activate-test, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:50:31 np0005542249 systemd[1]: libpod-conmon-21db183795afc2c8452ab35507222e482fa3a12719d176b08f71283b049594cd.scope: Deactivated successfully.
Dec  2 05:50:32 np0005542249 systemd[1]: Reloading.
Dec  2 05:50:32 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:50:32 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:50:32 np0005542249 ceph-mgr[75372]: [devicehealth INFO root] creating mgr pool
Dec  2 05:50:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Dec  2 05:50:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec  2 05:50:32 np0005542249 systemd[1]: Reloading.
Dec  2 05:50:32 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:50:32 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:50:32 np0005542249 ceph-mgr[75372]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/923166297; not ready for session (expect reconnect)
Dec  2 05:50:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  2 05:50:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  2 05:50:32 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  2 05:50:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Dec  2 05:50:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  2 05:50:32 np0005542249 ceph-mon[75081]: from='osd.1 [v2:192.168.122.100:6806/923166297,v1:192.168.122.100:6807/923166297]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  2 05:50:32 np0005542249 ceph-mon[75081]: osd.0 [v2:192.168.122.100:6802/3230894680,v1:192.168.122.100:6803/3230894680] boot
Dec  2 05:50:32 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec  2 05:50:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec  2 05:50:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Dec  2 05:50:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Dec  2 05:50:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec  2 05:50:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec  2 05:50:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec  2 05:50:32 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Dec  2 05:50:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  2 05:50:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  2 05:50:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  2 05:50:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  2 05:50:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Dec  2 05:50:32 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  2 05:50:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec  2 05:50:32 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  2 05:50:32 np0005542249 ceph-osd[88961]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec  2 05:50:32 np0005542249 ceph-osd[88961]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec  2 05:50:32 np0005542249 ceph-osd[88961]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec  2 05:50:32 np0005542249 systemd[1]: Starting Ceph osd.2 for 95bc4eaa-1a14-59bf-acf2-4b3da055547d...
Dec  2 05:50:32 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec  2 05:50:33 np0005542249 podman[90840]: 2025-12-02 10:50:33.054283753 +0000 UTC m=+0.067183265 container create eaba27aa7f07029deeddab3db45168d47dbd6a9ca6a4fbc1b1de7501c941bbe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-2-activate, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  2 05:50:33 np0005542249 podman[90840]: 2025-12-02 10:50:33.020328436 +0000 UTC m=+0.033227958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:33 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea0d753f7b8c96eebf478fd93d8508c8c5e8b0524af589890f302756bbb8f6b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea0d753f7b8c96eebf478fd93d8508c8c5e8b0524af589890f302756bbb8f6b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea0d753f7b8c96eebf478fd93d8508c8c5e8b0524af589890f302756bbb8f6b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea0d753f7b8c96eebf478fd93d8508c8c5e8b0524af589890f302756bbb8f6b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea0d753f7b8c96eebf478fd93d8508c8c5e8b0524af589890f302756bbb8f6b7/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:33 np0005542249 podman[90840]: 2025-12-02 10:50:33.15424911 +0000 UTC m=+0.167148712 container init eaba27aa7f07029deeddab3db45168d47dbd6a9ca6a4fbc1b1de7501c941bbe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  2 05:50:33 np0005542249 podman[90840]: 2025-12-02 10:50:33.164605557 +0000 UTC m=+0.177505059 container start eaba27aa7f07029deeddab3db45168d47dbd6a9ca6a4fbc1b1de7501c941bbe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  2 05:50:33 np0005542249 podman[90840]: 2025-12-02 10:50:33.179563959 +0000 UTC m=+0.192463471 container attach eaba27aa7f07029deeddab3db45168d47dbd6a9ca6a4fbc1b1de7501c941bbe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-2-activate, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:50:33 np0005542249 ceph-mgr[75372]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/923166297; not ready for session (expect reconnect)
Dec  2 05:50:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  2 05:50:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  2 05:50:33 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  2 05:50:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Dec  2 05:50:33 np0005542249 ceph-mon[75081]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  2 05:50:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec  2 05:50:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Dec  2 05:50:33 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Dec  2 05:50:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  2 05:50:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  2 05:50:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  2 05:50:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  2 05:50:33 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  2 05:50:33 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  2 05:50:33 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec  2 05:50:33 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec  2 05:50:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:50:34 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-2-activate[90855]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  2 05:50:34 np0005542249 bash[90840]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  2 05:50:34 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-2-activate[90855]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Dec  2 05:50:34 np0005542249 bash[90840]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Dec  2 05:50:34 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-2-activate[90855]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Dec  2 05:50:34 np0005542249 bash[90840]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Dec  2 05:50:34 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-2-activate[90855]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec  2 05:50:34 np0005542249 bash[90840]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec  2 05:50:34 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-2-activate[90855]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec  2 05:50:34 np0005542249 bash[90840]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec  2 05:50:34 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-2-activate[90855]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  2 05:50:34 np0005542249 bash[90840]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  2 05:50:34 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-2-activate[90855]: --> ceph-volume raw activate successful for osd ID: 2
Dec  2 05:50:34 np0005542249 bash[90840]: --> ceph-volume raw activate successful for osd ID: 2
Dec  2 05:50:34 np0005542249 systemd[1]: libpod-eaba27aa7f07029deeddab3db45168d47dbd6a9ca6a4fbc1b1de7501c941bbe3.scope: Deactivated successfully.
Dec  2 05:50:34 np0005542249 systemd[1]: libpod-eaba27aa7f07029deeddab3db45168d47dbd6a9ca6a4fbc1b1de7501c941bbe3.scope: Consumed 1.125s CPU time.
Dec  2 05:50:34 np0005542249 podman[90980]: 2025-12-02 10:50:34.316687473 +0000 UTC m=+0.024334403 container died eaba27aa7f07029deeddab3db45168d47dbd6a9ca6a4fbc1b1de7501c941bbe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  2 05:50:34 np0005542249 systemd[1]: var-lib-containers-storage-overlay-ea0d753f7b8c96eebf478fd93d8508c8c5e8b0524af589890f302756bbb8f6b7-merged.mount: Deactivated successfully.
Dec  2 05:50:34 np0005542249 podman[90980]: 2025-12-02 10:50:34.441220678 +0000 UTC m=+0.148867528 container remove eaba27aa7f07029deeddab3db45168d47dbd6a9ca6a4fbc1b1de7501c941bbe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-2-activate, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:50:34 np0005542249 ceph-mgr[75372]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/923166297; not ready for session (expect reconnect)
Dec  2 05:50:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  2 05:50:34 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  2 05:50:34 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  2 05:50:34 np0005542249 podman[91037]: 2025-12-02 10:50:34.670981999 +0000 UTC m=+0.071727801 container create 227e7141028a66a33b9f9b116360c43ad8422375cf0151e5d0bdde0e35411ed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 05:50:34 np0005542249 podman[91037]: 2025-12-02 10:50:34.627156719 +0000 UTC m=+0.027902571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:34 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79ca6d6ef7c22e8d30558ba37567215a2a297d0db6498b67805df7088a24006f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:34 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79ca6d6ef7c22e8d30558ba37567215a2a297d0db6498b67805df7088a24006f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:34 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79ca6d6ef7c22e8d30558ba37567215a2a297d0db6498b67805df7088a24006f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:34 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79ca6d6ef7c22e8d30558ba37567215a2a297d0db6498b67805df7088a24006f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:34 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79ca6d6ef7c22e8d30558ba37567215a2a297d0db6498b67805df7088a24006f/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:34 np0005542249 podman[91037]: 2025-12-02 10:50:34.775686577 +0000 UTC m=+0.176432399 container init 227e7141028a66a33b9f9b116360c43ad8422375cf0151e5d0bdde0e35411ed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:50:34 np0005542249 podman[91037]: 2025-12-02 10:50:34.78125472 +0000 UTC m=+0.182000502 container start 227e7141028a66a33b9f9b116360c43ad8422375cf0151e5d0bdde0e35411ed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:50:34 np0005542249 bash[91037]: 227e7141028a66a33b9f9b116360c43ad8422375cf0151e5d0bdde0e35411ed5
Dec  2 05:50:34 np0005542249 ceph-mon[75081]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  2 05:50:34 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec  2 05:50:34 np0005542249 systemd[1]: Started Ceph osd.2 for 95bc4eaa-1a14-59bf-acf2-4b3da055547d.
Dec  2 05:50:34 np0005542249 ceph-osd[91055]: set uid:gid to 167:167 (ceph:ceph)
Dec  2 05:50:34 np0005542249 ceph-osd[91055]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Dec  2 05:50:34 np0005542249 ceph-osd[91055]: pidfile_write: ignore empty --pid-file
Dec  2 05:50:34 np0005542249 ceph-osd[91055]: bdev(0x55c083ab5800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  2 05:50:34 np0005542249 ceph-osd[91055]: bdev(0x55c083ab5800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  2 05:50:34 np0005542249 ceph-osd[91055]: bdev(0x55c083ab5800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  2 05:50:34 np0005542249 ceph-osd[91055]: bdev(0x55c083ab5800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  2 05:50:34 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  2 05:50:34 np0005542249 ceph-osd[91055]: bdev(0x55c0848f7800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  2 05:50:34 np0005542249 ceph-osd[91055]: bdev(0x55c0848f7800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  2 05:50:34 np0005542249 ceph-osd[91055]: bdev(0x55c0848f7800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  2 05:50:34 np0005542249 ceph-osd[91055]: bdev(0x55c0848f7800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  2 05:50:34 np0005542249 ceph-osd[91055]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec  2 05:50:34 np0005542249 ceph-osd[91055]: bdev(0x55c0848f7800 /var/lib/ceph/osd/ceph-2/block) close
Dec  2 05:50:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:50:34 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:50:34 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:34 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v37: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bdev(0x55c083ab5800 /var/lib/ceph/osd/ceph-2/block) close
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: load: jerasure load: lrc 
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bdev(0x55c084978c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bdev(0x55c084978c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bdev(0x55c084978c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bdev(0x55c084978c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bdev(0x55c084978c00 /var/lib/ceph/osd/ceph-2/block) close
Dec  2 05:50:35 np0005542249 podman[91216]: 2025-12-02 10:50:35.548604662 +0000 UTC m=+0.079106843 container create bc98f1f5b8652e79c78839598cc1317873f3892cdd863d7b188bf1e7d62ff0d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  2 05:50:35 np0005542249 podman[91216]: 2025-12-02 10:50:35.498730286 +0000 UTC m=+0.029232537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:35 np0005542249 systemd[1]: Started libpod-conmon-bc98f1f5b8652e79c78839598cc1317873f3892cdd863d7b188bf1e7d62ff0d5.scope.
Dec  2 05:50:35 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bdev(0x55c084978c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bdev(0x55c084978c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bdev(0x55c084978c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bdev(0x55c084978c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bdev(0x55c084978c00 /var/lib/ceph/osd/ceph-2/block) close
Dec  2 05:50:35 np0005542249 podman[91216]: 2025-12-02 10:50:35.654306439 +0000 UTC m=+0.184808630 container init bc98f1f5b8652e79c78839598cc1317873f3892cdd863d7b188bf1e7d62ff0d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_khayyam, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:50:35 np0005542249 ceph-mgr[75372]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/923166297; not ready for session (expect reconnect)
Dec  2 05:50:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  2 05:50:35 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  2 05:50:35 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  2 05:50:35 np0005542249 podman[91216]: 2025-12-02 10:50:35.666123504 +0000 UTC m=+0.196625695 container start bc98f1f5b8652e79c78839598cc1317873f3892cdd863d7b188bf1e7d62ff0d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  2 05:50:35 np0005542249 hardcore_khayyam[91232]: 167 167
Dec  2 05:50:35 np0005542249 systemd[1]: libpod-bc98f1f5b8652e79c78839598cc1317873f3892cdd863d7b188bf1e7d62ff0d5.scope: Deactivated successfully.
Dec  2 05:50:35 np0005542249 conmon[91232]: conmon bc98f1f5b8652e79c788 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc98f1f5b8652e79c78839598cc1317873f3892cdd863d7b188bf1e7d62ff0d5.scope/container/memory.events
Dec  2 05:50:35 np0005542249 podman[91216]: 2025-12-02 10:50:35.673673093 +0000 UTC m=+0.204175264 container attach bc98f1f5b8652e79c78839598cc1317873f3892cdd863d7b188bf1e7d62ff0d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:50:35 np0005542249 podman[91216]: 2025-12-02 10:50:35.674424943 +0000 UTC m=+0.204927114 container died bc98f1f5b8652e79c78839598cc1317873f3892cdd863d7b188bf1e7d62ff0d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  2 05:50:35 np0005542249 systemd[1]: var-lib-containers-storage-overlay-0f5b996e623267ba5ae4af174caad8a7e1e5311e422def8d9baf89d0c15bbf15-merged.mount: Deactivated successfully.
Dec  2 05:50:35 np0005542249 podman[91216]: 2025-12-02 10:50:35.733197755 +0000 UTC m=+0.263699916 container remove bc98f1f5b8652e79c78839598cc1317873f3892cdd863d7b188bf1e7d62ff0d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_khayyam, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  2 05:50:35 np0005542249 ceph-osd[89966]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 19.103 iops: 4890.478 elapsed_sec: 0.613
Dec  2 05:50:35 np0005542249 ceph-osd[89966]: log_channel(cluster) log [WRN] : OSD bench result of 4890.477870 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  2 05:50:35 np0005542249 systemd[1]: libpod-conmon-bc98f1f5b8652e79c78839598cc1317873f3892cdd863d7b188bf1e7d62ff0d5.scope: Deactivated successfully.
Dec  2 05:50:35 np0005542249 ceph-osd[89966]: osd.1 0 waiting for initial osdmap
Dec  2 05:50:35 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1[89962]: 2025-12-02T10:50:35.738+0000 7f38927ef640 -1 osd.1 0 waiting for initial osdmap
Dec  2 05:50:35 np0005542249 ceph-osd[89966]: osd.1 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec  2 05:50:35 np0005542249 ceph-osd[89966]: osd.1 12 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec  2 05:50:35 np0005542249 ceph-osd[89966]: osd.1 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec  2 05:50:35 np0005542249 ceph-osd[89966]: osd.1 12 check_osdmap_features require_osd_release unknown -> reef
Dec  2 05:50:35 np0005542249 ceph-osd[89966]: osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  2 05:50:35 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-1[89962]: 2025-12-02T10:50:35.757+0000 7f388de17640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  2 05:50:35 np0005542249 ceph-osd[89966]: osd.1 12 set_numa_affinity not setting numa affinity
Dec  2 05:50:35 np0005542249 ceph-osd[89966]: osd.1 12 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Dec  2 05:50:35 np0005542249 podman[91260]: 2025-12-02 10:50:35.88850777 +0000 UTC m=+0.053164308 container create 265d1a0228863b6d99e6f0a70f0f780545c53450c3435d0f9b3aa73855c455d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_moser, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  2 05:50:35 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:35 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bdev(0x55c084978c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bdev(0x55c084978c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bdev(0x55c084978c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bdev(0x55c084978c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bdev(0x55c084979400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bdev(0x55c084979400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bdev(0x55c084979400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bdev(0x55c084979400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bluefs mount
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bluefs mount shared_bdev_used = 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: RocksDB version: 7.9.2
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Git sha 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: DB SUMMARY
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: DB Session ID:  CZID4YXMKX0M857VVPK3
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: CURRENT file:  CURRENT
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: IDENTITY file:  IDENTITY
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                         Options.error_if_exists: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                       Options.create_if_missing: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                         Options.paranoid_checks: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                                     Options.env: 0x55c084949c70
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                                Options.info_log: 0x55c083b3c8a0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.max_file_opening_threads: 16
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                              Options.statistics: (nil)
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                               Options.use_fsync: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                       Options.max_log_file_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                         Options.allow_fallocate: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                        Options.use_direct_reads: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.create_missing_column_families: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                              Options.db_log_dir: 
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                                 Options.wal_dir: db.wal
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.advise_random_on_open: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                    Options.write_buffer_manager: 0x55c084a52460
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                            Options.rate_limiter: (nil)
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.unordered_write: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                               Options.row_cache: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                              Options.wal_filter: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.allow_ingest_behind: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.two_write_queues: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.manual_wal_flush: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.wal_compression: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.atomic_flush: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                 Options.log_readahead_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.allow_data_in_errors: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.db_host_id: __hostname__
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.max_background_jobs: 4
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.max_background_compactions: -1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.max_subcompactions: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                          Options.max_open_files: -1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                          Options.bytes_per_sync: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.max_background_flushes: -1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Compression algorithms supported:
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: #011kZSTD supported: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: #011kXpressCompression supported: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: #011kBZip2Compression supported: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: #011kLZ4Compression supported: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: #011kZlibCompression supported: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: #011kSnappyCompression supported: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c083b3c2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c083b291f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c083b3c2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c083b291f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:35 np0005542249 ceph-mon[75081]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  2 05:50:35 np0005542249 ceph-mon[75081]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c083b3c2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c083b291f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c083b3c2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c083b291f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c083b3c2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c083b291f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:35 np0005542249 ceph-mon[75081]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/923166297,v1:192.168.122.100:6807/923166297] boot
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:35 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c083b3c2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c083b291f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c083b3c2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c083b291f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c083b3c240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c083b29090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c083b3c240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c083b29090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:35 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:35 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:35 np0005542249 systemd[1]: Started libpod-conmon-265d1a0228863b6d99e6f0a70f0f780545c53450c3435d0f9b3aa73855c455d4.scope.
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:35 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c083b3c240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c083b29090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:35 np0005542249 ceph-osd[89966]: osd.1 13 state: booting -> active
Dec  2 05:50:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b2402bbd-d320-44aa-88fb-a49dfeae8eae
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672635947159, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672635947429, "job": 1, "event": "recovery_finished"}
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: freelist init
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: freelist _read_cfg
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bluefs umount
Dec  2 05:50:35 np0005542249 ceph-osd[91055]: bdev(0x55c084979400 /var/lib/ceph/osd/ceph-2/block) close
Dec  2 05:50:35 np0005542249 podman[91260]: 2025-12-02 10:50:35.861706991 +0000 UTC m=+0.026363569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:35 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:35 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/357b756584df6e00995095d95b5919789ca3b4edeceaa66a2769a9fd1669874d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:35 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/357b756584df6e00995095d95b5919789ca3b4edeceaa66a2769a9fd1669874d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:35 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/357b756584df6e00995095d95b5919789ca3b4edeceaa66a2769a9fd1669874d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:35 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/357b756584df6e00995095d95b5919789ca3b4edeceaa66a2769a9fd1669874d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:35 np0005542249 podman[91260]: 2025-12-02 10:50:35.988122909 +0000 UTC m=+0.152779427 container init 265d1a0228863b6d99e6f0a70f0f780545c53450c3435d0f9b3aa73855c455d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_moser, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  2 05:50:35 np0005542249 podman[91260]: 2025-12-02 10:50:35.998162365 +0000 UTC m=+0.162818863 container start 265d1a0228863b6d99e6f0a70f0f780545c53450c3435d0f9b3aa73855c455d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_moser, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  2 05:50:36 np0005542249 podman[91260]: 2025-12-02 10:50:36.117461028 +0000 UTC m=+0.282117546 container attach 265d1a0228863b6d99e6f0a70f0f780545c53450c3435d0f9b3aa73855c455d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: bdev(0x55c084979400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: bdev(0x55c084979400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: bdev(0x55c084979400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: bdev(0x55c084979400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: bluefs mount
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: bluefs mount shared_bdev_used = 4718592
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: RocksDB version: 7.9.2
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Git sha 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: DB SUMMARY
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: DB Session ID:  CZID4YXMKX0M857VVPK2
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: CURRENT file:  CURRENT
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: IDENTITY file:  IDENTITY
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                         Options.error_if_exists: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                       Options.create_if_missing: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                         Options.paranoid_checks: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                                     Options.env: 0x55c084afa460
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                                Options.info_log: 0x55c083b3c600
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.max_file_opening_threads: 16
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                              Options.statistics: (nil)
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                               Options.use_fsync: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                       Options.max_log_file_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                         Options.allow_fallocate: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                        Options.use_direct_reads: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.create_missing_column_families: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                              Options.db_log_dir: 
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                                 Options.wal_dir: db.wal
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.advise_random_on_open: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                    Options.write_buffer_manager: 0x55c084a52460
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                            Options.rate_limiter: (nil)
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.unordered_write: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                               Options.row_cache: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                              Options.wal_filter: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.allow_ingest_behind: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.two_write_queues: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.manual_wal_flush: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.wal_compression: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.atomic_flush: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                 Options.log_readahead_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.allow_data_in_errors: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.db_host_id: __hostname__
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.max_background_jobs: 4
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.max_background_compactions: -1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.max_subcompactions: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                          Options.max_open_files: -1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                          Options.bytes_per_sync: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.max_background_flushes: -1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Compression algorithms supported:
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: 	kZSTD supported: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: 	kXpressCompression supported: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: 	kBZip2Compression supported: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: 	kLZ4Compression supported: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: 	kZlibCompression supported: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: 	kSnappyCompression supported: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c083b3ca20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c083b291f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c083b3ca20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c083b291f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c083b3ca20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c083b291f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c083b3ca20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c083b291f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c083b3ca20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c083b291f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c083b3ca20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c083b291f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c083b3ca20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c083b291f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c083b3c380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c083b29090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c083b3c380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c083b29090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:           Options.merge_operator: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.compaction_filter_factory: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.sst_partitioner_factory: None
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c083b3c380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c083b29090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.write_buffer_size: 16777216
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.max_write_buffer_number: 64
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.compression: LZ4
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:       Options.prefix_extractor: nullptr
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.num_levels: 7
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.level: 32767
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.compression_opts.strategy: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                  Options.compression_opts.enabled: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                        Options.arena_block_size: 1048576
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.disable_auto_compactions: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.inplace_update_support: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                           Options.bloom_locality: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                    Options.max_successive_merges: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.paranoid_file_checks: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.force_consistency_checks: 1
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.report_bg_io_stats: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                               Options.ttl: 2592000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                       Options.enable_blob_files: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                           Options.min_blob_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                          Options.blob_file_size: 268435456
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb:                Options.blob_file_starting_level: 0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b2402bbd-d320-44aa-88fb-a49dfeae8eae
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672636217346, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672636235187, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672636, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b2402bbd-d320-44aa-88fb-a49dfeae8eae", "db_session_id": "CZID4YXMKX0M857VVPK2", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672636239444, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672636, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b2402bbd-d320-44aa-88fb-a49dfeae8eae", "db_session_id": "CZID4YXMKX0M857VVPK2", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672636242767, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672636, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b2402bbd-d320-44aa-88fb-a49dfeae8eae", "db_session_id": "CZID4YXMKX0M857VVPK2", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764672636244560, "job": 1, "event": "recovery_finished"}
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c083c96000
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: DB pointer 0x55c084a3ba00
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c083b291f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c083b291f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: _get_class not permitted to load lua
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: _get_class not permitted to load sdk
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: _get_class not permitted to load test_remote_reads
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: osd.2 0 load_pgs
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: osd.2 0 load_pgs opened 0 pgs
Dec  2 05:50:36 np0005542249 ceph-osd[91055]: osd.2 0 log_to_monitors true
Dec  2 05:50:36 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-2[91050]: 2025-12-02T10:50:36.291+0000 7faa6e3ec740 -1 osd.2 0 log_to_monitors true
Dec  2 05:50:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Dec  2 05:50:36 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1777218221,v1:192.168.122.100:6811/1777218221]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec  2 05:50:36 np0005542249 ceph-mon[75081]: OSD bench result of 4890.477870 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  2 05:50:36 np0005542249 ceph-mon[75081]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  2 05:50:36 np0005542249 ceph-mon[75081]: Cluster is now healthy
Dec  2 05:50:36 np0005542249 ceph-mon[75081]: osd.1 [v2:192.168.122.100:6806/923166297,v1:192.168.122.100:6807/923166297] boot
Dec  2 05:50:36 np0005542249 ceph-mon[75081]: from='osd.2 [v2:192.168.122.100:6810/1777218221,v1:192.168.122.100:6811/1777218221]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec  2 05:50:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Dec  2 05:50:36 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1777218221,v1:192.168.122.100:6811/1777218221]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec  2 05:50:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Dec  2 05:50:36 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Dec  2 05:50:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Dec  2 05:50:36 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1777218221,v1:192.168.122.100:6811/1777218221]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  2 05:50:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e14 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Dec  2 05:50:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  2 05:50:36 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  2 05:50:36 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  2 05:50:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=13/14 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:50:36 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v40: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  2 05:50:36 np0005542249 condescending_moser[91400]: {
Dec  2 05:50:37 np0005542249 condescending_moser[91400]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 05:50:37 np0005542249 condescending_moser[91400]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:50:37 np0005542249 condescending_moser[91400]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 05:50:37 np0005542249 condescending_moser[91400]:        "osd_id": 0,
Dec  2 05:50:37 np0005542249 condescending_moser[91400]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 05:50:37 np0005542249 condescending_moser[91400]:        "type": "bluestore"
Dec  2 05:50:37 np0005542249 condescending_moser[91400]:    },
Dec  2 05:50:37 np0005542249 condescending_moser[91400]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 05:50:37 np0005542249 condescending_moser[91400]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:50:37 np0005542249 condescending_moser[91400]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 05:50:37 np0005542249 condescending_moser[91400]:        "osd_id": 2,
Dec  2 05:50:37 np0005542249 condescending_moser[91400]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 05:50:37 np0005542249 condescending_moser[91400]:        "type": "bluestore"
Dec  2 05:50:37 np0005542249 condescending_moser[91400]:    },
Dec  2 05:50:37 np0005542249 condescending_moser[91400]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 05:50:37 np0005542249 condescending_moser[91400]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:50:37 np0005542249 condescending_moser[91400]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 05:50:37 np0005542249 condescending_moser[91400]:        "osd_id": 1,
Dec  2 05:50:37 np0005542249 condescending_moser[91400]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 05:50:37 np0005542249 condescending_moser[91400]:        "type": "bluestore"
Dec  2 05:50:37 np0005542249 condescending_moser[91400]:    }
Dec  2 05:50:37 np0005542249 condescending_moser[91400]: }
Dec  2 05:50:37 np0005542249 ceph-mgr[75372]: [devicehealth INFO root] creating main.db for devicehealth
Dec  2 05:50:37 np0005542249 systemd[1]: libpod-265d1a0228863b6d99e6f0a70f0f780545c53450c3435d0f9b3aa73855c455d4.scope: Deactivated successfully.
Dec  2 05:50:37 np0005542249 podman[91260]: 2025-12-02 10:50:37.046974604 +0000 UTC m=+1.211631122 container died 265d1a0228863b6d99e6f0a70f0f780545c53450c3435d0f9b3aa73855c455d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  2 05:50:37 np0005542249 systemd[1]: libpod-265d1a0228863b6d99e6f0a70f0f780545c53450c3435d0f9b3aa73855c455d4.scope: Consumed 1.040s CPU time.
Dec  2 05:50:37 np0005542249 ceph-mgr[75372]: [devicehealth INFO root] Check health
Dec  2 05:50:37 np0005542249 systemd[1]: var-lib-containers-storage-overlay-357b756584df6e00995095d95b5919789ca3b4edeceaa66a2769a9fd1669874d-merged.mount: Deactivated successfully.
Dec  2 05:50:37 np0005542249 ceph-mgr[75372]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Dec  2 05:50:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Dec  2 05:50:37 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  2 05:50:37 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec  2 05:50:37 np0005542249 podman[91260]: 2025-12-02 10:50:37.12044722 +0000 UTC m=+1.285103728 container remove 265d1a0228863b6d99e6f0a70f0f780545c53450c3435d0f9b3aa73855c455d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  2 05:50:37 np0005542249 systemd[1]: libpod-conmon-265d1a0228863b6d99e6f0a70f0f780545c53450c3435d0f9b3aa73855c455d4.scope: Deactivated successfully.
Dec  2 05:50:37 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec  2 05:50:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:50:37 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:50:37 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:37 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec  2 05:50:37 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec  2 05:50:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Dec  2 05:50:37 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1777218221,v1:192.168.122.100:6811/1777218221]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  2 05:50:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Dec  2 05:50:37 np0005542249 ceph-osd[91055]: osd.2 0 done with init, starting boot process
Dec  2 05:50:37 np0005542249 ceph-osd[91055]: osd.2 0 start_boot
Dec  2 05:50:37 np0005542249 ceph-osd[91055]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec  2 05:50:37 np0005542249 ceph-osd[91055]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec  2 05:50:37 np0005542249 ceph-osd[91055]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec  2 05:50:37 np0005542249 ceph-osd[91055]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec  2 05:50:37 np0005542249 ceph-osd[91055]: osd.2 0  bench count 12288000 bsize 4 KiB
Dec  2 05:50:37 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Dec  2 05:50:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  2 05:50:37 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  2 05:50:37 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  2 05:50:37 np0005542249 ceph-mgr[75372]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1777218221; not ready for session (expect reconnect)
Dec  2 05:50:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  2 05:50:37 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  2 05:50:37 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  2 05:50:37 np0005542249 ceph-mon[75081]: from='osd.2 [v2:192.168.122.100:6810/1777218221,v1:192.168.122.100:6811/1777218221]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec  2 05:50:37 np0005542249 ceph-mon[75081]: from='osd.2 [v2:192.168.122.100:6810/1777218221,v1:192.168.122.100:6811/1777218221]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  2 05:50:37 np0005542249 ceph-mon[75081]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec  2 05:50:37 np0005542249 ceph-mon[75081]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec  2 05:50:37 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:37 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:38 np0005542249 podman[91967]: 2025-12-02 10:50:38.295996215 +0000 UTC m=+0.273251361 container exec cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:50:38 np0005542249 podman[91967]: 2025-12-02 10:50:38.426625629 +0000 UTC m=+0.403880795 container exec_died cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Dec  2 05:50:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:50:38 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:50:38 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:38 np0005542249 ceph-mgr[75372]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1777218221; not ready for session (expect reconnect)
Dec  2 05:50:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  2 05:50:38 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  2 05:50:38 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  2 05:50:38 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v42: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  2 05:50:38 np0005542249 ceph-mon[75081]: from='osd.2 [v2:192.168.122.100:6810/1777218221,v1:192.168.122.100:6811/1777218221]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  2 05:50:38 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:38 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:39 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.ntxcvs(active, since 73s)
Dec  2 05:50:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:50:39 np0005542249 podman[92231]: 2025-12-02 10:50:39.601770441 +0000 UTC m=+0.065848227 container create 3fd972aeaad2ffcbf7feecd11a54a93768654f2a08c1d933dc2f43bacf176935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_moore, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:50:39 np0005542249 podman[92231]: 2025-12-02 10:50:39.557710516 +0000 UTC m=+0.021788332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:39 np0005542249 systemd[1]: Started libpod-conmon-3fd972aeaad2ffcbf7feecd11a54a93768654f2a08c1d933dc2f43bacf176935.scope.
Dec  2 05:50:39 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:39 np0005542249 podman[92231]: 2025-12-02 10:50:39.728814887 +0000 UTC m=+0.192892693 container init 3fd972aeaad2ffcbf7feecd11a54a93768654f2a08c1d933dc2f43bacf176935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:50:39 np0005542249 podman[92231]: 2025-12-02 10:50:39.73655774 +0000 UTC m=+0.200635526 container start 3fd972aeaad2ffcbf7feecd11a54a93768654f2a08c1d933dc2f43bacf176935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_moore, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  2 05:50:39 np0005542249 practical_moore[92248]: 167 167
Dec  2 05:50:39 np0005542249 systemd[1]: libpod-3fd972aeaad2ffcbf7feecd11a54a93768654f2a08c1d933dc2f43bacf176935.scope: Deactivated successfully.
Dec  2 05:50:39 np0005542249 podman[92231]: 2025-12-02 10:50:39.768188993 +0000 UTC m=+0.232266779 container attach 3fd972aeaad2ffcbf7feecd11a54a93768654f2a08c1d933dc2f43bacf176935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_moore, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:50:39 np0005542249 podman[92231]: 2025-12-02 10:50:39.769427787 +0000 UTC m=+0.233505573 container died 3fd972aeaad2ffcbf7feecd11a54a93768654f2a08c1d933dc2f43bacf176935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_moore, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:50:39 np0005542249 systemd[1]: var-lib-containers-storage-overlay-0c572f2cf1c974f1d62122fa7ecc8bb6392366ce8533fa6c89a68290c5f4e2ab-merged.mount: Deactivated successfully.
Dec  2 05:50:39 np0005542249 podman[92231]: 2025-12-02 10:50:39.902094828 +0000 UTC m=+0.366172614 container remove 3fd972aeaad2ffcbf7feecd11a54a93768654f2a08c1d933dc2f43bacf176935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_moore, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  2 05:50:39 np0005542249 systemd[1]: libpod-conmon-3fd972aeaad2ffcbf7feecd11a54a93768654f2a08c1d933dc2f43bacf176935.scope: Deactivated successfully.
Dec  2 05:50:39 np0005542249 ceph-mgr[75372]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1777218221; not ready for session (expect reconnect)
Dec  2 05:50:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  2 05:50:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  2 05:50:39 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  2 05:50:40 np0005542249 podman[92274]: 2025-12-02 10:50:40.092864391 +0000 UTC m=+0.070935539 container create d75728071ebaab6429cb285ebbd65a0adc6c2d179634bf2b6b402d6117223680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  2 05:50:40 np0005542249 podman[92274]: 2025-12-02 10:50:40.053894346 +0000 UTC m=+0.031965534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:40 np0005542249 systemd[1]: Started libpod-conmon-d75728071ebaab6429cb285ebbd65a0adc6c2d179634bf2b6b402d6117223680.scope.
Dec  2 05:50:40 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:40 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/798da11708c63f927d8fe9b36e788834fd9b71d90d450800ce26a7891ceaede9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:40 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/798da11708c63f927d8fe9b36e788834fd9b71d90d450800ce26a7891ceaede9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:40 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/798da11708c63f927d8fe9b36e788834fd9b71d90d450800ce26a7891ceaede9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:40 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/798da11708c63f927d8fe9b36e788834fd9b71d90d450800ce26a7891ceaede9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:40 np0005542249 podman[92274]: 2025-12-02 10:50:40.226194029 +0000 UTC m=+0.204265227 container init d75728071ebaab6429cb285ebbd65a0adc6c2d179634bf2b6b402d6117223680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  2 05:50:40 np0005542249 podman[92274]: 2025-12-02 10:50:40.239235959 +0000 UTC m=+0.217307147 container start d75728071ebaab6429cb285ebbd65a0adc6c2d179634bf2b6b402d6117223680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  2 05:50:40 np0005542249 podman[92274]: 2025-12-02 10:50:40.25628691 +0000 UTC m=+0.234358068 container attach d75728071ebaab6429cb285ebbd65a0adc6c2d179634bf2b6b402d6117223680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_shockley, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  2 05:50:40 np0005542249 ceph-mgr[75372]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1777218221; not ready for session (expect reconnect)
Dec  2 05:50:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  2 05:50:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  2 05:50:40 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  2 05:50:40 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v43: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]: [
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:    {
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:        "available": false,
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:        "ceph_device": false,
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:        "lsm_data": {},
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:        "lvs": [],
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:        "path": "/dev/sr0",
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:        "rejected_reasons": [
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:            "Insufficient space (<5GB)",
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:            "Has a FileSystem"
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:        ],
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:        "sys_api": {
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:            "actuators": null,
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:            "device_nodes": "sr0",
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:            "devname": "sr0",
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:            "human_readable_size": "482.00 KB",
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:            "id_bus": "ata",
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:            "model": "QEMU DVD-ROM",
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:            "nr_requests": "2",
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:            "parent": "/dev/sr0",
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:            "partitions": {},
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:            "path": "/dev/sr0",
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:            "removable": "1",
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:            "rev": "2.5+",
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:            "ro": "0",
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:            "rotational": "1",
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:            "sas_address": "",
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:            "sas_device_handle": "",
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:            "scheduler_mode": "mq-deadline",
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:            "sectors": 0,
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:            "sectorsize": "2048",
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:            "size": 493568.0,
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:            "support_discard": "2048",
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:            "type": "disk",
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:            "vendor": "QEMU"
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:        }
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]:    }
Dec  2 05:50:41 np0005542249 condescending_shockley[92290]: ]
Dec  2 05:50:41 np0005542249 systemd[1]: libpod-d75728071ebaab6429cb285ebbd65a0adc6c2d179634bf2b6b402d6117223680.scope: Deactivated successfully.
Dec  2 05:50:41 np0005542249 systemd[1]: libpod-d75728071ebaab6429cb285ebbd65a0adc6c2d179634bf2b6b402d6117223680.scope: Consumed 1.536s CPU time.
Dec  2 05:50:41 np0005542249 podman[92274]: 2025-12-02 10:50:41.72273907 +0000 UTC m=+1.700810258 container died d75728071ebaab6429cb285ebbd65a0adc6c2d179634bf2b6b402d6117223680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_shockley, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:50:41 np0005542249 systemd[1]: var-lib-containers-storage-overlay-798da11708c63f927d8fe9b36e788834fd9b71d90d450800ce26a7891ceaede9-merged.mount: Deactivated successfully.
Dec  2 05:50:41 np0005542249 podman[92274]: 2025-12-02 10:50:41.895102126 +0000 UTC m=+1.873173274 container remove d75728071ebaab6429cb285ebbd65a0adc6c2d179634bf2b6b402d6117223680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_shockley, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  2 05:50:41 np0005542249 systemd[1]: libpod-conmon-d75728071ebaab6429cb285ebbd65a0adc6c2d179634bf2b6b402d6117223680.scope: Deactivated successfully.
Dec  2 05:50:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:50:41 np0005542249 ceph-mgr[75372]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1777218221; not ready for session (expect reconnect)
Dec  2 05:50:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  2 05:50:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  2 05:50:41 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  2 05:50:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:50:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Dec  2 05:50:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  2 05:50:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Dec  2 05:50:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  2 05:50:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Dec  2 05:50:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  2 05:50:41 np0005542249 ceph-mgr[75372]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43690k
Dec  2 05:50:41 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43690k
Dec  2 05:50:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Dec  2 05:50:41 np0005542249 ceph-mgr[75372]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Dec  2 05:50:41 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Dec  2 05:50:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:50:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:50:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 05:50:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:50:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 05:50:42 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:42 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 5c3ecf81-22c8-4159-a942-4d97eb6a052f does not exist
Dec  2 05:50:42 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 42d1bfa3-9a7b-4b17-9363-945052d8591e does not exist
Dec  2 05:50:42 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 4d97d27e-9179-43a6-89b4-58d383be08ba does not exist
Dec  2 05:50:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 05:50:42 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 05:50:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 05:50:42 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 05:50:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:50:42 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:50:42 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:42 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:42 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  2 05:50:42 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  2 05:50:42 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  2 05:50:42 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:50:42 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:42 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 05:50:42 np0005542249 ceph-osd[91055]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 25.037 iops: 6409.401 elapsed_sec: 0.468
Dec  2 05:50:42 np0005542249 ceph-osd[91055]: log_channel(cluster) log [WRN] : OSD bench result of 6409.401207 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  2 05:50:42 np0005542249 ceph-osd[91055]: osd.2 0 waiting for initial osdmap
Dec  2 05:50:42 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-2[91050]: 2025-12-02T10:50:42.447+0000 7faa6ab83640 -1 osd.2 0 waiting for initial osdmap
Dec  2 05:50:42 np0005542249 ceph-osd[91055]: osd.2 15 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec  2 05:50:42 np0005542249 ceph-osd[91055]: osd.2 15 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec  2 05:50:42 np0005542249 ceph-osd[91055]: osd.2 15 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec  2 05:50:42 np0005542249 ceph-osd[91055]: osd.2 15 check_osdmap_features require_osd_release unknown -> reef
Dec  2 05:50:42 np0005542249 ceph-osd[91055]: osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  2 05:50:42 np0005542249 ceph-osd[91055]: osd.2 15 set_numa_affinity not setting numa affinity
Dec  2 05:50:42 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-osd-2[91050]: 2025-12-02T10:50:42.475+0000 7faa65994640 -1 osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  2 05:50:42 np0005542249 ceph-osd[91055]: osd.2 15 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Dec  2 05:50:42 np0005542249 podman[94214]: 2025-12-02 10:50:42.638908118 +0000 UTC m=+0.047134351 container create 2533d4e5cd189ef815231d75da594e60f33f187d8db58b0741400cac630f4154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:50:42 np0005542249 systemd[1]: Started libpod-conmon-2533d4e5cd189ef815231d75da594e60f33f187d8db58b0741400cac630f4154.scope.
Dec  2 05:50:42 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:42 np0005542249 podman[94214]: 2025-12-02 10:50:42.619851482 +0000 UTC m=+0.028077725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:42 np0005542249 podman[94214]: 2025-12-02 10:50:42.723386339 +0000 UTC m=+0.131612582 container init 2533d4e5cd189ef815231d75da594e60f33f187d8db58b0741400cac630f4154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  2 05:50:42 np0005542249 podman[94214]: 2025-12-02 10:50:42.733277782 +0000 UTC m=+0.141504005 container start 2533d4e5cd189ef815231d75da594e60f33f187d8db58b0741400cac630f4154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:50:42 np0005542249 modest_carson[94230]: 167 167
Dec  2 05:50:42 np0005542249 systemd[1]: libpod-2533d4e5cd189ef815231d75da594e60f33f187d8db58b0741400cac630f4154.scope: Deactivated successfully.
Dec  2 05:50:42 np0005542249 conmon[94230]: conmon 2533d4e5cd189ef81523 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2533d4e5cd189ef815231d75da594e60f33f187d8db58b0741400cac630f4154.scope/container/memory.events
Dec  2 05:50:42 np0005542249 podman[94214]: 2025-12-02 10:50:42.739434911 +0000 UTC m=+0.147661154 container attach 2533d4e5cd189ef815231d75da594e60f33f187d8db58b0741400cac630f4154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  2 05:50:42 np0005542249 podman[94214]: 2025-12-02 10:50:42.740780188 +0000 UTC m=+0.149006411 container died 2533d4e5cd189ef815231d75da594e60f33f187d8db58b0741400cac630f4154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:50:42 np0005542249 systemd[1]: var-lib-containers-storage-overlay-80cc5358f8bdc2109e0683e361def92f17c1bcc545575a9446efcea8bca3f64a-merged.mount: Deactivated successfully.
Dec  2 05:50:42 np0005542249 podman[94214]: 2025-12-02 10:50:42.787164488 +0000 UTC m=+0.195390721 container remove 2533d4e5cd189ef815231d75da594e60f33f187d8db58b0741400cac630f4154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  2 05:50:42 np0005542249 systemd[1]: libpod-conmon-2533d4e5cd189ef815231d75da594e60f33f187d8db58b0741400cac630f4154.scope: Deactivated successfully.
Dec  2 05:50:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  2 05:50:42 np0005542249 ceph-mgr[75372]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1777218221; not ready for session (expect reconnect)
Dec  2 05:50:42 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  2 05:50:42 np0005542249 ceph-mgr[75372]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  2 05:50:42 np0005542249 podman[94254]: 2025-12-02 10:50:42.98186716 +0000 UTC m=+0.058386061 container create eacb693845dd3606c2a87ddc78e95e6f860b0b296a960f642a6a09a0d6da5359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:50:42 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  2 05:50:43 np0005542249 systemd[1]: Started libpod-conmon-eacb693845dd3606c2a87ddc78e95e6f860b0b296a960f642a6a09a0d6da5359.scope.
Dec  2 05:50:43 np0005542249 podman[94254]: 2025-12-02 10:50:42.952207612 +0000 UTC m=+0.028726553 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:43 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:43 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a07cab85e7148b6e8512e46544cd8c1f801c68aadd8da32346323fa5025eb4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:43 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a07cab85e7148b6e8512e46544cd8c1f801c68aadd8da32346323fa5025eb4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:43 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a07cab85e7148b6e8512e46544cd8c1f801c68aadd8da32346323fa5025eb4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:43 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a07cab85e7148b6e8512e46544cd8c1f801c68aadd8da32346323fa5025eb4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:43 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a07cab85e7148b6e8512e46544cd8c1f801c68aadd8da32346323fa5025eb4e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:43 np0005542249 podman[94254]: 2025-12-02 10:50:43.083583697 +0000 UTC m=+0.160102598 container init eacb693845dd3606c2a87ddc78e95e6f860b0b296a960f642a6a09a0d6da5359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_satoshi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:50:43 np0005542249 podman[94254]: 2025-12-02 10:50:43.099441884 +0000 UTC m=+0.175960745 container start eacb693845dd3606c2a87ddc78e95e6f860b0b296a960f642a6a09a0d6da5359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_satoshi, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:50:43 np0005542249 podman[94254]: 2025-12-02 10:50:43.102990962 +0000 UTC m=+0.179509823 container attach eacb693845dd3606c2a87ddc78e95e6f860b0b296a960f642a6a09a0d6da5359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_satoshi, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:50:43 np0005542249 ceph-osd[91055]: osd.2 15 tick checking mon for new map
Dec  2 05:50:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Dec  2 05:50:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Dec  2 05:50:43 np0005542249 ceph-mon[75081]: Adjusting osd_memory_target on compute-0 to 43690k
Dec  2 05:50:43 np0005542249 ceph-mon[75081]: Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Dec  2 05:50:43 np0005542249 ceph-mon[75081]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/1777218221,v1:192.168.122.100:6811/1777218221] boot
Dec  2 05:50:43 np0005542249 ceph-osd[91055]: osd.2 16 state: booting -> active
Dec  2 05:50:43 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Dec  2 05:50:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  2 05:50:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  2 05:50:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:50:44 np0005542249 epic_satoshi[94270]: --> passed data devices: 0 physical, 3 LVM
Dec  2 05:50:44 np0005542249 epic_satoshi[94270]: --> relative data size: 1.0
Dec  2 05:50:44 np0005542249 epic_satoshi[94270]: --> All data devices are unavailable
Dec  2 05:50:44 np0005542249 systemd[1]: libpod-eacb693845dd3606c2a87ddc78e95e6f860b0b296a960f642a6a09a0d6da5359.scope: Deactivated successfully.
Dec  2 05:50:44 np0005542249 systemd[1]: libpod-eacb693845dd3606c2a87ddc78e95e6f860b0b296a960f642a6a09a0d6da5359.scope: Consumed 1.163s CPU time.
Dec  2 05:50:44 np0005542249 podman[94254]: 2025-12-02 10:50:44.310000924 +0000 UTC m=+1.386519835 container died eacb693845dd3606c2a87ddc78e95e6f860b0b296a960f642a6a09a0d6da5359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_satoshi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  2 05:50:44 np0005542249 systemd[1]: var-lib-containers-storage-overlay-2a07cab85e7148b6e8512e46544cd8c1f801c68aadd8da32346323fa5025eb4e-merged.mount: Deactivated successfully.
Dec  2 05:50:44 np0005542249 podman[94254]: 2025-12-02 10:50:44.387153833 +0000 UTC m=+1.463672704 container remove eacb693845dd3606c2a87ddc78e95e6f860b0b296a960f642a6a09a0d6da5359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_satoshi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  2 05:50:44 np0005542249 systemd[1]: libpod-conmon-eacb693845dd3606c2a87ddc78e95e6f860b0b296a960f642a6a09a0d6da5359.scope: Deactivated successfully.
Dec  2 05:50:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Dec  2 05:50:44 np0005542249 ceph-mon[75081]: OSD bench result of 6409.401207 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  2 05:50:44 np0005542249 ceph-mon[75081]: osd.2 [v2:192.168.122.100:6810/1777218221,v1:192.168.122.100:6811/1777218221] boot
Dec  2 05:50:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Dec  2 05:50:44 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Dec  2 05:50:44 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Dec  2 05:50:45 np0005542249 podman[94450]: 2025-12-02 10:50:45.182871847 +0000 UTC m=+0.047190142 container create 1c9bd22663be75bc45ec825f7ef38010cf8550824067d080f0059fc13de6b483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cannon, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  2 05:50:45 np0005542249 systemd[1]: Started libpod-conmon-1c9bd22663be75bc45ec825f7ef38010cf8550824067d080f0059fc13de6b483.scope.
Dec  2 05:50:45 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:45 np0005542249 podman[94450]: 2025-12-02 10:50:45.259561813 +0000 UTC m=+0.123880148 container init 1c9bd22663be75bc45ec825f7ef38010cf8550824067d080f0059fc13de6b483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:50:45 np0005542249 podman[94450]: 2025-12-02 10:50:45.164527071 +0000 UTC m=+0.028845346 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:45 np0005542249 podman[94450]: 2025-12-02 10:50:45.264922051 +0000 UTC m=+0.129240306 container start 1c9bd22663be75bc45ec825f7ef38010cf8550824067d080f0059fc13de6b483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  2 05:50:45 np0005542249 podman[94450]: 2025-12-02 10:50:45.268818719 +0000 UTC m=+0.133137054 container attach 1c9bd22663be75bc45ec825f7ef38010cf8550824067d080f0059fc13de6b483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:50:45 np0005542249 charming_cannon[94466]: 167 167
Dec  2 05:50:45 np0005542249 systemd[1]: libpod-1c9bd22663be75bc45ec825f7ef38010cf8550824067d080f0059fc13de6b483.scope: Deactivated successfully.
Dec  2 05:50:45 np0005542249 podman[94450]: 2025-12-02 10:50:45.270252078 +0000 UTC m=+0.134570333 container died 1c9bd22663be75bc45ec825f7ef38010cf8550824067d080f0059fc13de6b483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cannon, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:50:45 np0005542249 systemd[1]: var-lib-containers-storage-overlay-72842c8b745e4872c97b11f805076c2f8d12899a0bb87548b5c9627736773e12-merged.mount: Deactivated successfully.
Dec  2 05:50:45 np0005542249 podman[94450]: 2025-12-02 10:50:45.311065724 +0000 UTC m=+0.175383999 container remove 1c9bd22663be75bc45ec825f7ef38010cf8550824067d080f0059fc13de6b483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cannon, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  2 05:50:45 np0005542249 systemd[1]: libpod-conmon-1c9bd22663be75bc45ec825f7ef38010cf8550824067d080f0059fc13de6b483.scope: Deactivated successfully.
Dec  2 05:50:45 np0005542249 podman[94490]: 2025-12-02 10:50:45.487960765 +0000 UTC m=+0.046190056 container create 37c9d8b97fa20cd2d5cf4b9832eac0ec8c131b2468c810120a67d50cbe4d5642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wright, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  2 05:50:45 np0005542249 systemd[1]: Started libpod-conmon-37c9d8b97fa20cd2d5cf4b9832eac0ec8c131b2468c810120a67d50cbe4d5642.scope.
Dec  2 05:50:45 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:45 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b92c69b9e76b9623351e721935d0a146f16b113c62576fa31d94667406097e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:45 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b92c69b9e76b9623351e721935d0a146f16b113c62576fa31d94667406097e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:45 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b92c69b9e76b9623351e721935d0a146f16b113c62576fa31d94667406097e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:45 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b92c69b9e76b9623351e721935d0a146f16b113c62576fa31d94667406097e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:45 np0005542249 podman[94490]: 2025-12-02 10:50:45.561221297 +0000 UTC m=+0.119450588 container init 37c9d8b97fa20cd2d5cf4b9832eac0ec8c131b2468c810120a67d50cbe4d5642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wright, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:50:45 np0005542249 podman[94490]: 2025-12-02 10:50:45.4664005 +0000 UTC m=+0.024629891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:45 np0005542249 podman[94490]: 2025-12-02 10:50:45.574809351 +0000 UTC m=+0.133038662 container start 37c9d8b97fa20cd2d5cf4b9832eac0ec8c131b2468c810120a67d50cbe4d5642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Dec  2 05:50:45 np0005542249 podman[94490]: 2025-12-02 10:50:45.585862916 +0000 UTC m=+0.144092217 container attach 37c9d8b97fa20cd2d5cf4b9832eac0ec8c131b2468c810120a67d50cbe4d5642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wright, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:50:46 np0005542249 silly_wright[94506]: {
Dec  2 05:50:46 np0005542249 silly_wright[94506]:    "0": [
Dec  2 05:50:46 np0005542249 silly_wright[94506]:        {
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "devices": [
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "/dev/loop3"
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            ],
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "lv_name": "ceph_lv0",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "lv_size": "21470642176",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "name": "ceph_lv0",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "tags": {
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.cluster_name": "ceph",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.crush_device_class": "",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.encrypted": "0",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.osd_id": "0",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.type": "block",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.vdo": "0"
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            },
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "type": "block",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "vg_name": "ceph_vg0"
Dec  2 05:50:46 np0005542249 silly_wright[94506]:        }
Dec  2 05:50:46 np0005542249 silly_wright[94506]:    ],
Dec  2 05:50:46 np0005542249 silly_wright[94506]:    "1": [
Dec  2 05:50:46 np0005542249 silly_wright[94506]:        {
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "devices": [
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "/dev/loop4"
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            ],
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "lv_name": "ceph_lv1",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "lv_size": "21470642176",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "name": "ceph_lv1",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "tags": {
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.cluster_name": "ceph",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.crush_device_class": "",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.encrypted": "0",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.osd_id": "1",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.type": "block",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.vdo": "0"
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            },
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "type": "block",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "vg_name": "ceph_vg1"
Dec  2 05:50:46 np0005542249 silly_wright[94506]:        }
Dec  2 05:50:46 np0005542249 silly_wright[94506]:    ],
Dec  2 05:50:46 np0005542249 silly_wright[94506]:    "2": [
Dec  2 05:50:46 np0005542249 silly_wright[94506]:        {
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "devices": [
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "/dev/loop5"
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            ],
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "lv_name": "ceph_lv2",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "lv_size": "21470642176",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "name": "ceph_lv2",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "tags": {
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.cluster_name": "ceph",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.crush_device_class": "",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.encrypted": "0",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.osd_id": "2",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.type": "block",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:                "ceph.vdo": "0"
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            },
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "type": "block",
Dec  2 05:50:46 np0005542249 silly_wright[94506]:            "vg_name": "ceph_vg2"
Dec  2 05:50:46 np0005542249 silly_wright[94506]:        }
Dec  2 05:50:46 np0005542249 silly_wright[94506]:    ]
Dec  2 05:50:46 np0005542249 silly_wright[94506]: }
Dec  2 05:50:46 np0005542249 systemd[1]: libpod-37c9d8b97fa20cd2d5cf4b9832eac0ec8c131b2468c810120a67d50cbe4d5642.scope: Deactivated successfully.
Dec  2 05:50:46 np0005542249 podman[94490]: 2025-12-02 10:50:46.402683402 +0000 UTC m=+0.960912703 container died 37c9d8b97fa20cd2d5cf4b9832eac0ec8c131b2468c810120a67d50cbe4d5642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:50:46 np0005542249 systemd[1]: var-lib-containers-storage-overlay-3b92c69b9e76b9623351e721935d0a146f16b113c62576fa31d94667406097e7-merged.mount: Deactivated successfully.
Dec  2 05:50:46 np0005542249 podman[94490]: 2025-12-02 10:50:46.468971291 +0000 UTC m=+1.027200602 container remove 37c9d8b97fa20cd2d5cf4b9832eac0ec8c131b2468c810120a67d50cbe4d5642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wright, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  2 05:50:46 np0005542249 systemd[1]: libpod-conmon-37c9d8b97fa20cd2d5cf4b9832eac0ec8c131b2468c810120a67d50cbe4d5642.scope: Deactivated successfully.
Dec  2 05:50:46 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 879 MiB used, 59 GiB / 60 GiB avail
Dec  2 05:50:47 np0005542249 podman[94668]: 2025-12-02 10:50:47.178283311 +0000 UTC m=+0.052073027 container create cbe44ca6e323b8c8668440301998613b4b407ab622a18910ef2cd383e4ebe7d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brattain, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  2 05:50:47 np0005542249 systemd[1]: Started libpod-conmon-cbe44ca6e323b8c8668440301998613b4b407ab622a18910ef2cd383e4ebe7d6.scope.
Dec  2 05:50:47 np0005542249 podman[94668]: 2025-12-02 10:50:47.154227718 +0000 UTC m=+0.028017464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:47 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:47 np0005542249 podman[94668]: 2025-12-02 10:50:47.27499458 +0000 UTC m=+0.148784386 container init cbe44ca6e323b8c8668440301998613b4b407ab622a18910ef2cd383e4ebe7d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brattain, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  2 05:50:47 np0005542249 podman[94668]: 2025-12-02 10:50:47.285739976 +0000 UTC m=+0.159529692 container start cbe44ca6e323b8c8668440301998613b4b407ab622a18910ef2cd383e4ebe7d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  2 05:50:47 np0005542249 podman[94668]: 2025-12-02 10:50:47.289666615 +0000 UTC m=+0.163456361 container attach cbe44ca6e323b8c8668440301998613b4b407ab622a18910ef2cd383e4ebe7d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brattain, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  2 05:50:47 np0005542249 charming_brattain[94685]: 167 167
Dec  2 05:50:47 np0005542249 systemd[1]: libpod-cbe44ca6e323b8c8668440301998613b4b407ab622a18910ef2cd383e4ebe7d6.scope: Deactivated successfully.
Dec  2 05:50:47 np0005542249 podman[94668]: 2025-12-02 10:50:47.293244254 +0000 UTC m=+0.167034070 container died cbe44ca6e323b8c8668440301998613b4b407ab622a18910ef2cd383e4ebe7d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brattain, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:50:47 np0005542249 systemd[1]: var-lib-containers-storage-overlay-fbe1d1a738a4b6d9682bade4ad7b5eaf5d98a93daa02c2f2f8405c2c3a2b6b6c-merged.mount: Deactivated successfully.
Dec  2 05:50:47 np0005542249 podman[94668]: 2025-12-02 10:50:47.341402072 +0000 UTC m=+0.215191788 container remove cbe44ca6e323b8c8668440301998613b4b407ab622a18910ef2cd383e4ebe7d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brattain, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:50:47 np0005542249 systemd[1]: libpod-conmon-cbe44ca6e323b8c8668440301998613b4b407ab622a18910ef2cd383e4ebe7d6.scope: Deactivated successfully.
Dec  2 05:50:47 np0005542249 podman[94709]: 2025-12-02 10:50:47.529743518 +0000 UTC m=+0.053711682 container create 2f5c0f8da2cfd21276ba52c67cfe21229c02ff7e478505fb81cfe00c9beb9c26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_easley, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:50:47 np0005542249 systemd[1]: Started libpod-conmon-2f5c0f8da2cfd21276ba52c67cfe21229c02ff7e478505fb81cfe00c9beb9c26.scope.
Dec  2 05:50:47 np0005542249 podman[94709]: 2025-12-02 10:50:47.505401947 +0000 UTC m=+0.029370091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:47 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:47 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab35894b1a1ef46794c421b05d4b97c2c24bbc296489f77e95b2034c9ff58b32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:47 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab35894b1a1ef46794c421b05d4b97c2c24bbc296489f77e95b2034c9ff58b32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:47 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab35894b1a1ef46794c421b05d4b97c2c24bbc296489f77e95b2034c9ff58b32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:47 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab35894b1a1ef46794c421b05d4b97c2c24bbc296489f77e95b2034c9ff58b32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:47 np0005542249 podman[94709]: 2025-12-02 10:50:47.642573032 +0000 UTC m=+0.166541216 container init 2f5c0f8da2cfd21276ba52c67cfe21229c02ff7e478505fb81cfe00c9beb9c26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_easley, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:50:47 np0005542249 podman[94709]: 2025-12-02 10:50:47.651058556 +0000 UTC m=+0.175026680 container start 2f5c0f8da2cfd21276ba52c67cfe21229c02ff7e478505fb81cfe00c9beb9c26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  2 05:50:47 np0005542249 podman[94709]: 2025-12-02 10:50:47.655608161 +0000 UTC m=+0.179576375 container attach 2f5c0f8da2cfd21276ba52c67cfe21229c02ff7e478505fb81cfe00c9beb9c26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:50:48 np0005542249 loving_easley[94726]: {
Dec  2 05:50:48 np0005542249 loving_easley[94726]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 05:50:48 np0005542249 loving_easley[94726]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:50:48 np0005542249 loving_easley[94726]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 05:50:48 np0005542249 loving_easley[94726]:        "osd_id": 0,
Dec  2 05:50:48 np0005542249 loving_easley[94726]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 05:50:48 np0005542249 loving_easley[94726]:        "type": "bluestore"
Dec  2 05:50:48 np0005542249 loving_easley[94726]:    },
Dec  2 05:50:48 np0005542249 loving_easley[94726]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 05:50:48 np0005542249 loving_easley[94726]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:50:48 np0005542249 loving_easley[94726]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 05:50:48 np0005542249 loving_easley[94726]:        "osd_id": 2,
Dec  2 05:50:48 np0005542249 loving_easley[94726]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 05:50:48 np0005542249 loving_easley[94726]:        "type": "bluestore"
Dec  2 05:50:48 np0005542249 loving_easley[94726]:    },
Dec  2 05:50:48 np0005542249 loving_easley[94726]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 05:50:48 np0005542249 loving_easley[94726]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:50:48 np0005542249 loving_easley[94726]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 05:50:48 np0005542249 loving_easley[94726]:        "osd_id": 1,
Dec  2 05:50:48 np0005542249 loving_easley[94726]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 05:50:48 np0005542249 loving_easley[94726]:        "type": "bluestore"
Dec  2 05:50:48 np0005542249 loving_easley[94726]:    }
Dec  2 05:50:48 np0005542249 loving_easley[94726]: }
Dec  2 05:50:48 np0005542249 systemd[1]: libpod-2f5c0f8da2cfd21276ba52c67cfe21229c02ff7e478505fb81cfe00c9beb9c26.scope: Deactivated successfully.
Dec  2 05:50:48 np0005542249 systemd[1]: libpod-2f5c0f8da2cfd21276ba52c67cfe21229c02ff7e478505fb81cfe00c9beb9c26.scope: Consumed 1.017s CPU time.
Dec  2 05:50:48 np0005542249 podman[94709]: 2025-12-02 10:50:48.655547391 +0000 UTC m=+1.179515535 container died 2f5c0f8da2cfd21276ba52c67cfe21229c02ff7e478505fb81cfe00c9beb9c26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_easley, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:50:48 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:50:49 np0005542249 systemd[1]: var-lib-containers-storage-overlay-ab35894b1a1ef46794c421b05d4b97c2c24bbc296489f77e95b2034c9ff58b32-merged.mount: Deactivated successfully.
Dec  2 05:50:49 np0005542249 podman[94709]: 2025-12-02 10:50:49.076564386 +0000 UTC m=+1.600532510 container remove 2f5c0f8da2cfd21276ba52c67cfe21229c02ff7e478505fb81cfe00c9beb9c26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  2 05:50:49 np0005542249 systemd[1]: libpod-conmon-2f5c0f8da2cfd21276ba52c67cfe21229c02ff7e478505fb81cfe00c9beb9c26.scope: Deactivated successfully.
Dec  2 05:50:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:50:49 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:50:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:50:49 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:50 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:50 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:50 np0005542249 podman[94988]: 2025-12-02 10:50:50.181980736 +0000 UTC m=+0.081745157 container exec cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Dec  2 05:50:50 np0005542249 podman[94988]: 2025-12-02 10:50:50.289340907 +0000 UTC m=+0.189105338 container exec_died cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  2 05:50:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:50:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:50:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:50 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:50:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:50:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:50:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 05:50:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:50:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 05:50:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:51 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 2650bbdd-c054-4b56-a89f-da97fb66c1bb does not exist
Dec  2 05:50:51 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev ccf85803-6c66-4c75-816d-d2a2ff756de8 does not exist
Dec  2 05:50:51 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 7035bd3f-b0d1-409b-b866-3791929acbd4 does not exist
Dec  2 05:50:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 05:50:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 05:50:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 05:50:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 05:50:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:50:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:50:51 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:51 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:51 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:50:51 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:51 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 05:50:52 np0005542249 podman[95380]: 2025-12-02 10:50:52.518943934 +0000 UTC m=+0.056708906 container create 8b65ea409760ec169771e29f0dce5ae1a345bf3a83f6284ed9a08d041b09f1de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_tharp, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  2 05:50:52 np0005542249 systemd[1]: Started libpod-conmon-8b65ea409760ec169771e29f0dce5ae1a345bf3a83f6284ed9a08d041b09f1de.scope.
Dec  2 05:50:52 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:52 np0005542249 podman[95380]: 2025-12-02 10:50:52.499467307 +0000 UTC m=+0.037232259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:52 np0005542249 podman[95380]: 2025-12-02 10:50:52.612980558 +0000 UTC m=+0.150745490 container init 8b65ea409760ec169771e29f0dce5ae1a345bf3a83f6284ed9a08d041b09f1de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_tharp, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:50:52 np0005542249 podman[95380]: 2025-12-02 10:50:52.621423601 +0000 UTC m=+0.159188523 container start 8b65ea409760ec169771e29f0dce5ae1a345bf3a83f6284ed9a08d041b09f1de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  2 05:50:52 np0005542249 podman[95380]: 2025-12-02 10:50:52.625179185 +0000 UTC m=+0.162944137 container attach 8b65ea409760ec169771e29f0dce5ae1a345bf3a83f6284ed9a08d041b09f1de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 05:50:52 np0005542249 busy_tharp[95397]: 167 167
Dec  2 05:50:52 np0005542249 systemd[1]: libpod-8b65ea409760ec169771e29f0dce5ae1a345bf3a83f6284ed9a08d041b09f1de.scope: Deactivated successfully.
Dec  2 05:50:52 np0005542249 conmon[95397]: conmon 8b65ea409760ec169771 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8b65ea409760ec169771e29f0dce5ae1a345bf3a83f6284ed9a08d041b09f1de.scope/container/memory.events
Dec  2 05:50:52 np0005542249 podman[95380]: 2025-12-02 10:50:52.630030649 +0000 UTC m=+0.167795611 container died 8b65ea409760ec169771e29f0dce5ae1a345bf3a83f6284ed9a08d041b09f1de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_tharp, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:50:52 np0005542249 systemd[1]: var-lib-containers-storage-overlay-88ea376ccd823385e4d4ac1dab158351ada60c9188f8da80174d1641f91dd6a6-merged.mount: Deactivated successfully.
Dec  2 05:50:52 np0005542249 podman[95380]: 2025-12-02 10:50:52.699601418 +0000 UTC m=+0.237366360 container remove 8b65ea409760ec169771e29f0dce5ae1a345bf3a83f6284ed9a08d041b09f1de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_tharp, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Dec  2 05:50:52 np0005542249 systemd[1]: libpod-conmon-8b65ea409760ec169771e29f0dce5ae1a345bf3a83f6284ed9a08d041b09f1de.scope: Deactivated successfully.
Dec  2 05:50:52 np0005542249 podman[95423]: 2025-12-02 10:50:52.894577838 +0000 UTC m=+0.061812087 container create aa8f90795c71f7b2c1c63fecd1848216be87e90113ea6b4f3ca55ff881abbf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  2 05:50:52 np0005542249 podman[95423]: 2025-12-02 10:50:52.862215105 +0000 UTC m=+0.029449434 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:52 np0005542249 systemd[1]: Started libpod-conmon-aa8f90795c71f7b2c1c63fecd1848216be87e90113ea6b4f3ca55ff881abbf93.scope.
Dec  2 05:50:52 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:50:52 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf86c6b7eabb7dee6cd0963eca7d5c6fdb8d2303277f38fd7c5d68726c15e4bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf86c6b7eabb7dee6cd0963eca7d5c6fdb8d2303277f38fd7c5d68726c15e4bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf86c6b7eabb7dee6cd0963eca7d5c6fdb8d2303277f38fd7c5d68726c15e4bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf86c6b7eabb7dee6cd0963eca7d5c6fdb8d2303277f38fd7c5d68726c15e4bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf86c6b7eabb7dee6cd0963eca7d5c6fdb8d2303277f38fd7c5d68726c15e4bb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:53 np0005542249 podman[95423]: 2025-12-02 10:50:53.01642679 +0000 UTC m=+0.183661059 container init aa8f90795c71f7b2c1c63fecd1848216be87e90113ea6b4f3ca55ff881abbf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:50:53 np0005542249 podman[95423]: 2025-12-02 10:50:53.030385875 +0000 UTC m=+0.197620114 container start aa8f90795c71f7b2c1c63fecd1848216be87e90113ea6b4f3ca55ff881abbf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chandrasekhar, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Dec  2 05:50:53 np0005542249 podman[95423]: 2025-12-02 10:50:53.033855041 +0000 UTC m=+0.201089280 container attach aa8f90795c71f7b2c1c63fecd1848216be87e90113ea6b4f3ca55ff881abbf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chandrasekhar, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:50:54 np0005542249 nifty_chandrasekhar[95439]: --> passed data devices: 0 physical, 3 LVM
Dec  2 05:50:54 np0005542249 nifty_chandrasekhar[95439]: --> relative data size: 1.0
Dec  2 05:50:54 np0005542249 nifty_chandrasekhar[95439]: --> All data devices are unavailable
Dec  2 05:50:54 np0005542249 systemd[1]: libpod-aa8f90795c71f7b2c1c63fecd1848216be87e90113ea6b4f3ca55ff881abbf93.scope: Deactivated successfully.
Dec  2 05:50:54 np0005542249 systemd[1]: libpod-aa8f90795c71f7b2c1c63fecd1848216be87e90113ea6b4f3ca55ff881abbf93.scope: Consumed 1.025s CPU time.
Dec  2 05:50:54 np0005542249 podman[95423]: 2025-12-02 10:50:54.102403423 +0000 UTC m=+1.269637662 container died aa8f90795c71f7b2c1c63fecd1848216be87e90113ea6b4f3ca55ff881abbf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chandrasekhar, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  2 05:50:54 np0005542249 systemd[1]: var-lib-containers-storage-overlay-cf86c6b7eabb7dee6cd0963eca7d5c6fdb8d2303277f38fd7c5d68726c15e4bb-merged.mount: Deactivated successfully.
Dec  2 05:50:54 np0005542249 podman[95423]: 2025-12-02 10:50:54.153524053 +0000 UTC m=+1.320758292 container remove aa8f90795c71f7b2c1c63fecd1848216be87e90113ea6b4f3ca55ff881abbf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  2 05:50:54 np0005542249 systemd[1]: libpod-conmon-aa8f90795c71f7b2c1c63fecd1848216be87e90113ea6b4f3ca55ff881abbf93.scope: Deactivated successfully.
Dec  2 05:50:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:50:54 np0005542249 podman[95619]: 2025-12-02 10:50:54.876585753 +0000 UTC m=+0.050320960 container create 279b00e7f6d564f32ec619014cc21bf84ef9c37616f173967486b96be87d6c37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec  2 05:50:54 np0005542249 systemd[1]: Started libpod-conmon-279b00e7f6d564f32ec619014cc21bf84ef9c37616f173967486b96be87d6c37.scope.
Dec  2 05:50:54 np0005542249 podman[95619]: 2025-12-02 10:50:54.850429191 +0000 UTC m=+0.024164468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:54 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:54 np0005542249 podman[95619]: 2025-12-02 10:50:54.976553441 +0000 UTC m=+0.150288688 container init 279b00e7f6d564f32ec619014cc21bf84ef9c37616f173967486b96be87d6c37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:50:54 np0005542249 podman[95619]: 2025-12-02 10:50:54.986738901 +0000 UTC m=+0.160474128 container start 279b00e7f6d564f32ec619014cc21bf84ef9c37616f173967486b96be87d6c37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bohr, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:50:54 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:50:54 np0005542249 podman[95619]: 2025-12-02 10:50:54.990980368 +0000 UTC m=+0.164715595 container attach 279b00e7f6d564f32ec619014cc21bf84ef9c37616f173967486b96be87d6c37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:50:54 np0005542249 fervent_bohr[95636]: 167 167
Dec  2 05:50:54 np0005542249 systemd[1]: libpod-279b00e7f6d564f32ec619014cc21bf84ef9c37616f173967486b96be87d6c37.scope: Deactivated successfully.
Dec  2 05:50:54 np0005542249 conmon[95636]: conmon 279b00e7f6d564f32ec6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-279b00e7f6d564f32ec619014cc21bf84ef9c37616f173967486b96be87d6c37.scope/container/memory.events
Dec  2 05:50:54 np0005542249 podman[95619]: 2025-12-02 10:50:54.993589861 +0000 UTC m=+0.167325088 container died 279b00e7f6d564f32ec619014cc21bf84ef9c37616f173967486b96be87d6c37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:50:55 np0005542249 systemd[1]: var-lib-containers-storage-overlay-dfc1e9051747ef92c61bfe5371651485a352635c9627fb900d554855b7310ce4-merged.mount: Deactivated successfully.
Dec  2 05:50:55 np0005542249 podman[95619]: 2025-12-02 10:50:55.04830183 +0000 UTC m=+0.222037047 container remove 279b00e7f6d564f32ec619014cc21bf84ef9c37616f173967486b96be87d6c37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  2 05:50:55 np0005542249 systemd[1]: libpod-conmon-279b00e7f6d564f32ec619014cc21bf84ef9c37616f173967486b96be87d6c37.scope: Deactivated successfully.
Dec  2 05:50:55 np0005542249 podman[95659]: 2025-12-02 10:50:55.238503718 +0000 UTC m=+0.060574872 container create 6d50f2d07a896ceec338b7f9c1ffd9c84b0d4f4926a13488aba98563dcc055e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mestorf, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:50:55 np0005542249 systemd[1]: Started libpod-conmon-6d50f2d07a896ceec338b7f9c1ffd9c84b0d4f4926a13488aba98563dcc055e9.scope.
Dec  2 05:50:55 np0005542249 podman[95659]: 2025-12-02 10:50:55.215083792 +0000 UTC m=+0.037154956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:55 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:55 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b85790e6d7fb42e29599506536f36415fe293440f618ce2d8123524ce5e5b6af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:55 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b85790e6d7fb42e29599506536f36415fe293440f618ce2d8123524ce5e5b6af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:55 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b85790e6d7fb42e29599506536f36415fe293440f618ce2d8123524ce5e5b6af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:55 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b85790e6d7fb42e29599506536f36415fe293440f618ce2d8123524ce5e5b6af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:55 np0005542249 podman[95659]: 2025-12-02 10:50:55.332655055 +0000 UTC m=+0.154726189 container init 6d50f2d07a896ceec338b7f9c1ffd9c84b0d4f4926a13488aba98563dcc055e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mestorf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  2 05:50:55 np0005542249 podman[95659]: 2025-12-02 10:50:55.34620296 +0000 UTC m=+0.168274114 container start 6d50f2d07a896ceec338b7f9c1ffd9c84b0d4f4926a13488aba98563dcc055e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mestorf, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 05:50:55 np0005542249 podman[95659]: 2025-12-02 10:50:55.350636972 +0000 UTC m=+0.172708116 container attach 6d50f2d07a896ceec338b7f9c1ffd9c84b0d4f4926a13488aba98563dcc055e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mestorf, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]: {
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:    "0": [
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:        {
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "devices": [
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "/dev/loop3"
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            ],
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "lv_name": "ceph_lv0",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "lv_size": "21470642176",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "name": "ceph_lv0",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "tags": {
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.cluster_name": "ceph",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.crush_device_class": "",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.encrypted": "0",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.osd_id": "0",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.type": "block",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.vdo": "0"
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            },
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "type": "block",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "vg_name": "ceph_vg0"
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:        }
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:    ],
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:    "1": [
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:        {
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "devices": [
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "/dev/loop4"
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            ],
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "lv_name": "ceph_lv1",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "lv_size": "21470642176",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "name": "ceph_lv1",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "tags": {
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.cluster_name": "ceph",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.crush_device_class": "",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.encrypted": "0",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.osd_id": "1",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.type": "block",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.vdo": "0"
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            },
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "type": "block",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "vg_name": "ceph_vg1"
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:        }
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:    ],
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:    "2": [
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:        {
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "devices": [
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "/dev/loop5"
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            ],
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "lv_name": "ceph_lv2",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "lv_size": "21470642176",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "name": "ceph_lv2",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "tags": {
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.cluster_name": "ceph",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.crush_device_class": "",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.encrypted": "0",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.osd_id": "2",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.type": "block",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:                "ceph.vdo": "0"
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            },
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "type": "block",
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:            "vg_name": "ceph_vg2"
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:        }
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]:    ]
Dec  2 05:50:56 np0005542249 brave_mestorf[95675]: }
Dec  2 05:50:56 np0005542249 systemd[1]: libpod-6d50f2d07a896ceec338b7f9c1ffd9c84b0d4f4926a13488aba98563dcc055e9.scope: Deactivated successfully.
Dec  2 05:50:56 np0005542249 podman[95659]: 2025-12-02 10:50:56.115988629 +0000 UTC m=+0.938059743 container died 6d50f2d07a896ceec338b7f9c1ffd9c84b0d4f4926a13488aba98563dcc055e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mestorf, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  2 05:50:56 np0005542249 systemd[1]: var-lib-containers-storage-overlay-b85790e6d7fb42e29599506536f36415fe293440f618ce2d8123524ce5e5b6af-merged.mount: Deactivated successfully.
Dec  2 05:50:56 np0005542249 podman[95659]: 2025-12-02 10:50:56.168325662 +0000 UTC m=+0.990396776 container remove 6d50f2d07a896ceec338b7f9c1ffd9c84b0d4f4926a13488aba98563dcc055e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mestorf, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:50:56 np0005542249 systemd[1]: libpod-conmon-6d50f2d07a896ceec338b7f9c1ffd9c84b0d4f4926a13488aba98563dcc055e9.scope: Deactivated successfully.
Dec  2 05:50:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:50:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:50:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:50:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:50:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:50:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:50:56 np0005542249 podman[95839]: 2025-12-02 10:50:56.980939453 +0000 UTC m=+0.072063450 container create 29c4afb6e045a1df8dc017e65b72c939dfc1a83b3b60ec33066d618fc9b32a08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:50:56 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:50:57 np0005542249 systemd[1]: Started libpod-conmon-29c4afb6e045a1df8dc017e65b72c939dfc1a83b3b60ec33066d618fc9b32a08.scope.
Dec  2 05:50:57 np0005542249 podman[95839]: 2025-12-02 10:50:56.951900032 +0000 UTC m=+0.043024089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:57 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:57 np0005542249 podman[95839]: 2025-12-02 10:50:57.073287041 +0000 UTC m=+0.164411038 container init 29c4afb6e045a1df8dc017e65b72c939dfc1a83b3b60ec33066d618fc9b32a08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_margulis, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:50:57 np0005542249 podman[95839]: 2025-12-02 10:50:57.082088484 +0000 UTC m=+0.173212451 container start 29c4afb6e045a1df8dc017e65b72c939dfc1a83b3b60ec33066d618fc9b32a08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_margulis, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  2 05:50:57 np0005542249 podman[95839]: 2025-12-02 10:50:57.086382072 +0000 UTC m=+0.177506119 container attach 29c4afb6e045a1df8dc017e65b72c939dfc1a83b3b60ec33066d618fc9b32a08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_margulis, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  2 05:50:57 np0005542249 hopeful_margulis[95855]: 167 167
Dec  2 05:50:57 np0005542249 systemd[1]: libpod-29c4afb6e045a1df8dc017e65b72c939dfc1a83b3b60ec33066d618fc9b32a08.scope: Deactivated successfully.
Dec  2 05:50:57 np0005542249 podman[95839]: 2025-12-02 10:50:57.089656502 +0000 UTC m=+0.180780489 container died 29c4afb6e045a1df8dc017e65b72c939dfc1a83b3b60ec33066d618fc9b32a08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_margulis, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  2 05:50:57 np0005542249 systemd[1]: var-lib-containers-storage-overlay-72f8b47aac7916be7c64a8a3541c72ae8c85f34de963ae29bfed1f62cfb0cb47-merged.mount: Deactivated successfully.
Dec  2 05:50:57 np0005542249 podman[95839]: 2025-12-02 10:50:57.124895564 +0000 UTC m=+0.216019531 container remove 29c4afb6e045a1df8dc017e65b72c939dfc1a83b3b60ec33066d618fc9b32a08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_margulis, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:50:57 np0005542249 systemd[1]: libpod-conmon-29c4afb6e045a1df8dc017e65b72c939dfc1a83b3b60ec33066d618fc9b32a08.scope: Deactivated successfully.
Dec  2 05:50:57 np0005542249 podman[95879]: 2025-12-02 10:50:57.327609767 +0000 UTC m=+0.069968291 container create 6a8b9d1a9e42e8b4f62a36cf2e6b5c3146f246a790033bfe9fbeb5915908ff6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chandrasekhar, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:50:57 np0005542249 systemd[1]: Started libpod-conmon-6a8b9d1a9e42e8b4f62a36cf2e6b5c3146f246a790033bfe9fbeb5915908ff6e.scope.
Dec  2 05:50:57 np0005542249 podman[95879]: 2025-12-02 10:50:57.299490642 +0000 UTC m=+0.041849226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:50:57 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:50:57 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d71404a87b4424ee972e3bef7c6eab1d037e51a89cc41265de95e1e91c23079/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:57 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d71404a87b4424ee972e3bef7c6eab1d037e51a89cc41265de95e1e91c23079/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:57 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d71404a87b4424ee972e3bef7c6eab1d037e51a89cc41265de95e1e91c23079/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:57 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d71404a87b4424ee972e3bef7c6eab1d037e51a89cc41265de95e1e91c23079/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:50:57 np0005542249 podman[95879]: 2025-12-02 10:50:57.427899635 +0000 UTC m=+0.170258239 container init 6a8b9d1a9e42e8b4f62a36cf2e6b5c3146f246a790033bfe9fbeb5915908ff6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:50:57 np0005542249 podman[95879]: 2025-12-02 10:50:57.439043862 +0000 UTC m=+0.181402406 container start 6a8b9d1a9e42e8b4f62a36cf2e6b5c3146f246a790033bfe9fbeb5915908ff6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:50:57 np0005542249 podman[95879]: 2025-12-02 10:50:57.443483975 +0000 UTC m=+0.185842469 container attach 6a8b9d1a9e42e8b4f62a36cf2e6b5c3146f246a790033bfe9fbeb5915908ff6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  2 05:50:58 np0005542249 crazy_chandrasekhar[95895]: {
Dec  2 05:50:58 np0005542249 crazy_chandrasekhar[95895]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 05:50:58 np0005542249 crazy_chandrasekhar[95895]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:50:58 np0005542249 crazy_chandrasekhar[95895]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 05:50:58 np0005542249 crazy_chandrasekhar[95895]:        "osd_id": 0,
Dec  2 05:50:58 np0005542249 crazy_chandrasekhar[95895]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 05:50:58 np0005542249 crazy_chandrasekhar[95895]:        "type": "bluestore"
Dec  2 05:50:58 np0005542249 crazy_chandrasekhar[95895]:    },
Dec  2 05:50:58 np0005542249 crazy_chandrasekhar[95895]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 05:50:58 np0005542249 crazy_chandrasekhar[95895]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:50:58 np0005542249 crazy_chandrasekhar[95895]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 05:50:58 np0005542249 crazy_chandrasekhar[95895]:        "osd_id": 2,
Dec  2 05:50:58 np0005542249 crazy_chandrasekhar[95895]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 05:50:58 np0005542249 crazy_chandrasekhar[95895]:        "type": "bluestore"
Dec  2 05:50:58 np0005542249 crazy_chandrasekhar[95895]:    },
Dec  2 05:50:58 np0005542249 crazy_chandrasekhar[95895]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 05:50:58 np0005542249 crazy_chandrasekhar[95895]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:50:58 np0005542249 crazy_chandrasekhar[95895]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 05:50:58 np0005542249 crazy_chandrasekhar[95895]:        "osd_id": 1,
Dec  2 05:50:58 np0005542249 crazy_chandrasekhar[95895]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 05:50:58 np0005542249 crazy_chandrasekhar[95895]:        "type": "bluestore"
Dec  2 05:50:58 np0005542249 crazy_chandrasekhar[95895]:    }
Dec  2 05:50:58 np0005542249 crazy_chandrasekhar[95895]: }
Dec  2 05:50:58 np0005542249 systemd[1]: libpod-6a8b9d1a9e42e8b4f62a36cf2e6b5c3146f246a790033bfe9fbeb5915908ff6e.scope: Deactivated successfully.
Dec  2 05:50:58 np0005542249 podman[95879]: 2025-12-02 10:50:58.428229344 +0000 UTC m=+1.170587908 container died 6a8b9d1a9e42e8b4f62a36cf2e6b5c3146f246a790033bfe9fbeb5915908ff6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chandrasekhar, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:50:58 np0005542249 systemd[1]: var-lib-containers-storage-overlay-7d71404a87b4424ee972e3bef7c6eab1d037e51a89cc41265de95e1e91c23079-merged.mount: Deactivated successfully.
Dec  2 05:50:58 np0005542249 podman[95879]: 2025-12-02 10:50:58.980405019 +0000 UTC m=+1.722763533 container remove 6a8b9d1a9e42e8b4f62a36cf2e6b5c3146f246a790033bfe9fbeb5915908ff6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  2 05:50:58 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:50:58 np0005542249 systemd[1]: libpod-conmon-6a8b9d1a9e42e8b4f62a36cf2e6b5c3146f246a790033bfe9fbeb5915908ff6e.scope: Deactivated successfully.
Dec  2 05:50:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:50:59 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:50:59 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:50:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:51:00 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:00 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:00 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:51:01 np0005542249 python3[96017]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:01 np0005542249 podman[96019]: 2025-12-02 10:51:01.95460149 +0000 UTC m=+0.050741781 container create 25a13d21030c024d8239cee8008e32ea45a5adddfcb4bad369cfe5d18324858e (image=quay.io/ceph/ceph:v18, name=hopeful_austin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:01 np0005542249 systemd[1]: Started libpod-conmon-25a13d21030c024d8239cee8008e32ea45a5adddfcb4bad369cfe5d18324858e.scope.
Dec  2 05:51:02 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:02 np0005542249 podman[96019]: 2025-12-02 10:51:01.93321069 +0000 UTC m=+0.029351001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:02 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae02c8ff26359f986eea389b8c0424e3db2ac6cfa6226013d558293404b050d1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:02 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae02c8ff26359f986eea389b8c0424e3db2ac6cfa6226013d558293404b050d1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:02 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae02c8ff26359f986eea389b8c0424e3db2ac6cfa6226013d558293404b050d1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:02 np0005542249 podman[96019]: 2025-12-02 10:51:02.049576 +0000 UTC m=+0.145716321 container init 25a13d21030c024d8239cee8008e32ea45a5adddfcb4bad369cfe5d18324858e (image=quay.io/ceph/ceph:v18, name=hopeful_austin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  2 05:51:02 np0005542249 podman[96019]: 2025-12-02 10:51:02.062908148 +0000 UTC m=+0.159048439 container start 25a13d21030c024d8239cee8008e32ea45a5adddfcb4bad369cfe5d18324858e (image=quay.io/ceph/ceph:v18, name=hopeful_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:51:02 np0005542249 podman[96019]: 2025-12-02 10:51:02.066755924 +0000 UTC m=+0.162896215 container attach 25a13d21030c024d8239cee8008e32ea45a5adddfcb4bad369cfe5d18324858e (image=quay.io/ceph/ceph:v18, name=hopeful_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:51:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec  2 05:51:02 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3000175846' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  2 05:51:02 np0005542249 hopeful_austin[96035]: 
Dec  2 05:51:02 np0005542249 hopeful_austin[96035]: {"fsid":"95bc4eaa-1a14-59bf-acf2-4b3da055547d","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":143,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":17,"num_osds":3,"num_up_osds":3,"osd_up_since":1764672643,"num_in_osds":3,"osd_in_since":1764672613,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":502763520,"bytes_avail":63909163008,"bytes_total":64411926528},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-02T10:50:28.981301+0000","services":{}},"progress_events":{}}
Dec  2 05:51:02 np0005542249 systemd[1]: libpod-25a13d21030c024d8239cee8008e32ea45a5adddfcb4bad369cfe5d18324858e.scope: Deactivated successfully.
Dec  2 05:51:02 np0005542249 podman[96019]: 2025-12-02 10:51:02.697329081 +0000 UTC m=+0.793469372 container died 25a13d21030c024d8239cee8008e32ea45a5adddfcb4bad369cfe5d18324858e (image=quay.io/ceph/ceph:v18, name=hopeful_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  2 05:51:02 np0005542249 systemd[1]: var-lib-containers-storage-overlay-ae02c8ff26359f986eea389b8c0424e3db2ac6cfa6226013d558293404b050d1-merged.mount: Deactivated successfully.
Dec  2 05:51:02 np0005542249 podman[96019]: 2025-12-02 10:51:02.755809825 +0000 UTC m=+0.851950176 container remove 25a13d21030c024d8239cee8008e32ea45a5adddfcb4bad369cfe5d18324858e (image=quay.io/ceph/ceph:v18, name=hopeful_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:51:02 np0005542249 systemd[1]: libpod-conmon-25a13d21030c024d8239cee8008e32ea45a5adddfcb4bad369cfe5d18324858e.scope: Deactivated successfully.
Dec  2 05:51:02 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:51:03 np0005542249 python3[96095]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:03 np0005542249 podman[96096]: 2025-12-02 10:51:03.316397462 +0000 UTC m=+0.061869668 container create 0328d2690b7f6fa608750feb68d9dcca56f0093f472ff84dfc50c9d015a2ee74 (image=quay.io/ceph/ceph:v18, name=amazing_ardinghelli, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  2 05:51:03 np0005542249 podman[96096]: 2025-12-02 10:51:03.295570198 +0000 UTC m=+0.041042404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:03 np0005542249 systemd[1]: Started libpod-conmon-0328d2690b7f6fa608750feb68d9dcca56f0093f472ff84dfc50c9d015a2ee74.scope.
Dec  2 05:51:03 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:03 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649015ca24f03d767d30df8f7728329dfd8abdd44a048859a55d9d609b5cdf42/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:03 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649015ca24f03d767d30df8f7728329dfd8abdd44a048859a55d9d609b5cdf42/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:03 np0005542249 podman[96096]: 2025-12-02 10:51:03.456107157 +0000 UTC m=+0.201579453 container init 0328d2690b7f6fa608750feb68d9dcca56f0093f472ff84dfc50c9d015a2ee74 (image=quay.io/ceph/ceph:v18, name=amazing_ardinghelli, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  2 05:51:03 np0005542249 podman[96096]: 2025-12-02 10:51:03.466042391 +0000 UTC m=+0.211514587 container start 0328d2690b7f6fa608750feb68d9dcca56f0093f472ff84dfc50c9d015a2ee74 (image=quay.io/ceph/ceph:v18, name=amazing_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  2 05:51:03 np0005542249 podman[96096]: 2025-12-02 10:51:03.470533815 +0000 UTC m=+0.216006081 container attach 0328d2690b7f6fa608750feb68d9dcca56f0093f472ff84dfc50c9d015a2ee74 (image=quay.io/ceph/ceph:v18, name=amazing_ardinghelli, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:51:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec  2 05:51:04 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4113408886' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  2 05:51:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Dec  2 05:51:04 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4113408886' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  2 05:51:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Dec  2 05:51:04 np0005542249 amazing_ardinghelli[96111]: pool 'vms' created
Dec  2 05:51:04 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Dec  2 05:51:04 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/4113408886' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  2 05:51:04 np0005542249 systemd[1]: libpod-0328d2690b7f6fa608750feb68d9dcca56f0093f472ff84dfc50c9d015a2ee74.scope: Deactivated successfully.
Dec  2 05:51:04 np0005542249 podman[96096]: 2025-12-02 10:51:04.790127004 +0000 UTC m=+1.535599180 container died 0328d2690b7f6fa608750feb68d9dcca56f0093f472ff84dfc50c9d015a2ee74 (image=quay.io/ceph/ceph:v18, name=amazing_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  2 05:51:04 np0005542249 systemd[1]: var-lib-containers-storage-overlay-649015ca24f03d767d30df8f7728329dfd8abdd44a048859a55d9d609b5cdf42-merged.mount: Deactivated successfully.
Dec  2 05:51:04 np0005542249 podman[96096]: 2025-12-02 10:51:04.861165136 +0000 UTC m=+1.606637302 container remove 0328d2690b7f6fa608750feb68d9dcca56f0093f472ff84dfc50c9d015a2ee74 (image=quay.io/ceph/ceph:v18, name=amazing_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  2 05:51:04 np0005542249 systemd[1]: libpod-conmon-0328d2690b7f6fa608750feb68d9dcca56f0093f472ff84dfc50c9d015a2ee74.scope: Deactivated successfully.
Dec  2 05:51:04 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v58: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:51:05 np0005542249 python3[96174]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:05 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:05 np0005542249 podman[96175]: 2025-12-02 10:51:05.273734452 +0000 UTC m=+0.061296097 container create 262cb0552196426d5eb05dd6548a5c8dec6ba7e63b10b714972b1d1399f386d4 (image=quay.io/ceph/ceph:v18, name=adoring_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  2 05:51:05 np0005542249 systemd[1]: Started libpod-conmon-262cb0552196426d5eb05dd6548a5c8dec6ba7e63b10b714972b1d1399f386d4.scope.
Dec  2 05:51:05 np0005542249 podman[96175]: 2025-12-02 10:51:05.2460512 +0000 UTC m=+0.033612885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:05 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:05 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/449fa7025f1006b4418673fe54f7366af9acfdaa4defec380b02663c560798b5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:05 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/449fa7025f1006b4418673fe54f7366af9acfdaa4defec380b02663c560798b5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:05 np0005542249 podman[96175]: 2025-12-02 10:51:05.361477918 +0000 UTC m=+0.149039653 container init 262cb0552196426d5eb05dd6548a5c8dec6ba7e63b10b714972b1d1399f386d4 (image=quay.io/ceph/ceph:v18, name=adoring_swartz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:51:05 np0005542249 podman[96175]: 2025-12-02 10:51:05.36670544 +0000 UTC m=+0.154267065 container start 262cb0552196426d5eb05dd6548a5c8dec6ba7e63b10b714972b1d1399f386d4 (image=quay.io/ceph/ceph:v18, name=adoring_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  2 05:51:05 np0005542249 podman[96175]: 2025-12-02 10:51:05.371665284 +0000 UTC m=+0.159226949 container attach 262cb0552196426d5eb05dd6548a5c8dec6ba7e63b10b714972b1d1399f386d4 (image=quay.io/ceph/ceph:v18, name=adoring_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:05 np0005542249 ceph-mon[75081]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  2 05:51:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Dec  2 05:51:05 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/4113408886' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  2 05:51:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Dec  2 05:51:05 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Dec  2 05:51:05 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec  2 05:51:05 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3640695003' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  2 05:51:06 np0005542249 ceph-mon[75081]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  2 05:51:06 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/3640695003' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  2 05:51:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Dec  2 05:51:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3640695003' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  2 05:51:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Dec  2 05:51:06 np0005542249 adoring_swartz[96190]: pool 'volumes' created
Dec  2 05:51:06 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Dec  2 05:51:06 np0005542249 systemd[1]: libpod-262cb0552196426d5eb05dd6548a5c8dec6ba7e63b10b714972b1d1399f386d4.scope: Deactivated successfully.
Dec  2 05:51:06 np0005542249 podman[96175]: 2025-12-02 10:51:06.853152233 +0000 UTC m=+1.640713898 container died 262cb0552196426d5eb05dd6548a5c8dec6ba7e63b10b714972b1d1399f386d4 (image=quay.io/ceph/ceph:v18, name=adoring_swartz, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:51:06 np0005542249 systemd[1]: var-lib-containers-storage-overlay-449fa7025f1006b4418673fe54f7366af9acfdaa4defec380b02663c560798b5-merged.mount: Deactivated successfully.
Dec  2 05:51:06 np0005542249 podman[96175]: 2025-12-02 10:51:06.90266464 +0000 UTC m=+1.690226305 container remove 262cb0552196426d5eb05dd6548a5c8dec6ba7e63b10b714972b1d1399f386d4 (image=quay.io/ceph/ceph:v18, name=adoring_swartz, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  2 05:51:06 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 20 pg[3.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:06 np0005542249 systemd[1]: libpod-conmon-262cb0552196426d5eb05dd6548a5c8dec6ba7e63b10b714972b1d1399f386d4.scope: Deactivated successfully.
Dec  2 05:51:06 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v61: 3 pgs: 1 active+clean, 2 unknown; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:51:07 np0005542249 python3[96254]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:07 np0005542249 podman[96255]: 2025-12-02 10:51:07.357059953 +0000 UTC m=+0.069634073 container create 8f31df45f26e516ac83ec748b7521b78b52172790d3131ff754f1934d22c4dde (image=quay.io/ceph/ceph:v18, name=ecstatic_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:51:07 np0005542249 systemd[1]: Started libpod-conmon-8f31df45f26e516ac83ec748b7521b78b52172790d3131ff754f1934d22c4dde.scope.
Dec  2 05:51:07 np0005542249 podman[96255]: 2025-12-02 10:51:07.337302196 +0000 UTC m=+0.049876306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:07 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:07 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8147763e37921703de257e595c0b9489b9975f614fa14aebd20f7a7d6128535c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:07 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8147763e37921703de257e595c0b9489b9975f614fa14aebd20f7a7d6128535c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:07 np0005542249 podman[96255]: 2025-12-02 10:51:07.449337842 +0000 UTC m=+0.161911952 container init 8f31df45f26e516ac83ec748b7521b78b52172790d3131ff754f1934d22c4dde (image=quay.io/ceph/ceph:v18, name=ecstatic_cori, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  2 05:51:07 np0005542249 podman[96255]: 2025-12-02 10:51:07.455431068 +0000 UTC m=+0.168005178 container start 8f31df45f26e516ac83ec748b7521b78b52172790d3131ff754f1934d22c4dde (image=quay.io/ceph/ceph:v18, name=ecstatic_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  2 05:51:07 np0005542249 podman[96255]: 2025-12-02 10:51:07.4588208 +0000 UTC m=+0.171394910 container attach 8f31df45f26e516ac83ec748b7521b78b52172790d3131ff754f1934d22c4dde (image=quay.io/ceph/ceph:v18, name=ecstatic_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  2 05:51:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Dec  2 05:51:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Dec  2 05:51:07 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/3640695003' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  2 05:51:07 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Dec  2 05:51:07 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 21 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec  2 05:51:07 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2938393774' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  2 05:51:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Dec  2 05:51:08 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/2938393774' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  2 05:51:08 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2938393774' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  2 05:51:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Dec  2 05:51:08 np0005542249 ecstatic_cori[96270]: pool 'backups' created
Dec  2 05:51:08 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Dec  2 05:51:08 np0005542249 systemd[1]: libpod-8f31df45f26e516ac83ec748b7521b78b52172790d3131ff754f1934d22c4dde.scope: Deactivated successfully.
Dec  2 05:51:08 np0005542249 podman[96255]: 2025-12-02 10:51:08.861590959 +0000 UTC m=+1.574165109 container died 8f31df45f26e516ac83ec748b7521b78b52172790d3131ff754f1934d22c4dde (image=quay.io/ceph/ceph:v18, name=ecstatic_cori, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  2 05:51:08 np0005542249 systemd[1]: var-lib-containers-storage-overlay-8147763e37921703de257e595c0b9489b9975f614fa14aebd20f7a7d6128535c-merged.mount: Deactivated successfully.
Dec  2 05:51:08 np0005542249 podman[96255]: 2025-12-02 10:51:08.901314088 +0000 UTC m=+1.613888198 container remove 8f31df45f26e516ac83ec748b7521b78b52172790d3131ff754f1934d22c4dde (image=quay.io/ceph/ceph:v18, name=ecstatic_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  2 05:51:08 np0005542249 systemd[1]: libpod-conmon-8f31df45f26e516ac83ec748b7521b78b52172790d3131ff754f1934d22c4dde.scope: Deactivated successfully.
Dec  2 05:51:08 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v64: 4 pgs: 2 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:51:09 np0005542249 python3[96334]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:09 np0005542249 podman[96335]: 2025-12-02 10:51:09.295187788 +0000 UTC m=+0.061842714 container create d702b330d4b8970f8d3b5ba5b717307e78d1672f650b81d73b5bf8c48c8edafb (image=quay.io/ceph/ceph:v18, name=vibrant_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  2 05:51:09 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 22 pg[4.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:09 np0005542249 systemd[1]: Started libpod-conmon-d702b330d4b8970f8d3b5ba5b717307e78d1672f650b81d73b5bf8c48c8edafb.scope.
Dec  2 05:51:09 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:09 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0958152c930e4ee20fd7820a337dedd63b6e2ebdf093e574180123ecbf82f05a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:09 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0958152c930e4ee20fd7820a337dedd63b6e2ebdf093e574180123ecbf82f05a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:09 np0005542249 podman[96335]: 2025-12-02 10:51:09.366097015 +0000 UTC m=+0.132752021 container init d702b330d4b8970f8d3b5ba5b717307e78d1672f650b81d73b5bf8c48c8edafb (image=quay.io/ceph/ceph:v18, name=vibrant_sutherland, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Dec  2 05:51:09 np0005542249 podman[96335]: 2025-12-02 10:51:09.274827904 +0000 UTC m=+0.041482870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:09 np0005542249 podman[96335]: 2025-12-02 10:51:09.373123346 +0000 UTC m=+0.139778292 container start d702b330d4b8970f8d3b5ba5b717307e78d1672f650b81d73b5bf8c48c8edafb (image=quay.io/ceph/ceph:v18, name=vibrant_sutherland, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  2 05:51:09 np0005542249 podman[96335]: 2025-12-02 10:51:09.376892379 +0000 UTC m=+0.143547385 container attach d702b330d4b8970f8d3b5ba5b717307e78d1672f650b81d73b5bf8c48c8edafb (image=quay.io/ceph/ceph:v18, name=vibrant_sutherland, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  2 05:51:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e22 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:51:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Dec  2 05:51:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Dec  2 05:51:09 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/2938393774' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  2 05:51:09 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Dec  2 05:51:09 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 23 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec  2 05:51:09 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1173090735' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  2 05:51:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Dec  2 05:51:10 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/1173090735' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  2 05:51:10 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1173090735' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  2 05:51:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Dec  2 05:51:10 np0005542249 vibrant_sutherland[96350]: pool 'images' created
Dec  2 05:51:10 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Dec  2 05:51:10 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 24 pg[5.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [2] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:10 np0005542249 systemd[1]: libpod-d702b330d4b8970f8d3b5ba5b717307e78d1672f650b81d73b5bf8c48c8edafb.scope: Deactivated successfully.
Dec  2 05:51:10 np0005542249 podman[96335]: 2025-12-02 10:51:10.909788445 +0000 UTC m=+1.676443381 container died d702b330d4b8970f8d3b5ba5b717307e78d1672f650b81d73b5bf8c48c8edafb (image=quay.io/ceph/ceph:v18, name=vibrant_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Dec  2 05:51:10 np0005542249 systemd[1]: var-lib-containers-storage-overlay-0958152c930e4ee20fd7820a337dedd63b6e2ebdf093e574180123ecbf82f05a-merged.mount: Deactivated successfully.
Dec  2 05:51:10 np0005542249 podman[96335]: 2025-12-02 10:51:10.96331633 +0000 UTC m=+1.729971266 container remove d702b330d4b8970f8d3b5ba5b717307e78d1672f650b81d73b5bf8c48c8edafb (image=quay.io/ceph/ceph:v18, name=vibrant_sutherland, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  2 05:51:10 np0005542249 systemd[1]: libpod-conmon-d702b330d4b8970f8d3b5ba5b717307e78d1672f650b81d73b5bf8c48c8edafb.scope: Deactivated successfully.
Dec  2 05:51:10 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v67: 5 pgs: 2 active+clean, 3 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:51:11 np0005542249 python3[96414]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:11 np0005542249 podman[96415]: 2025-12-02 10:51:11.323258756 +0000 UTC m=+0.038312542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:11 np0005542249 podman[96415]: 2025-12-02 10:51:11.697867001 +0000 UTC m=+0.412920767 container create 58e2ebddbb5c17a6809f32361de50ab7c1829589a00c9797f9ea8085e7e6c74e (image=quay.io/ceph/ceph:v18, name=beautiful_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:11 np0005542249 systemd[1]: Started libpod-conmon-58e2ebddbb5c17a6809f32361de50ab7c1829589a00c9797f9ea8085e7e6c74e.scope.
Dec  2 05:51:11 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:11 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea89c6088406af9a356d5b96e7f3bdcc487f73d446ca606a19e2bf17e38673b6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:11 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea89c6088406af9a356d5b96e7f3bdcc487f73d446ca606a19e2bf17e38673b6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:11 np0005542249 podman[96415]: 2025-12-02 10:51:11.788503035 +0000 UTC m=+0.503556881 container init 58e2ebddbb5c17a6809f32361de50ab7c1829589a00c9797f9ea8085e7e6c74e (image=quay.io/ceph/ceph:v18, name=beautiful_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  2 05:51:11 np0005542249 podman[96415]: 2025-12-02 10:51:11.799667308 +0000 UTC m=+0.514721104 container start 58e2ebddbb5c17a6809f32361de50ab7c1829589a00c9797f9ea8085e7e6c74e (image=quay.io/ceph/ceph:v18, name=beautiful_murdock, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  2 05:51:11 np0005542249 podman[96415]: 2025-12-02 10:51:11.803501763 +0000 UTC m=+0.518555599 container attach 58e2ebddbb5c17a6809f32361de50ab7c1829589a00c9797f9ea8085e7e6c74e (image=quay.io/ceph/ceph:v18, name=beautiful_murdock, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  2 05:51:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Dec  2 05:51:11 np0005542249 ceph-mon[75081]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  2 05:51:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Dec  2 05:51:11 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Dec  2 05:51:11 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/1173090735' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  2 05:51:11 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 25 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [2] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec  2 05:51:12 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1569189202' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  2 05:51:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Dec  2 05:51:12 np0005542249 ceph-mon[75081]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  2 05:51:12 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/1569189202' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  2 05:51:12 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1569189202' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  2 05:51:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Dec  2 05:51:12 np0005542249 beautiful_murdock[96430]: pool 'cephfs.cephfs.meta' created
Dec  2 05:51:12 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Dec  2 05:51:12 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 26 pg[6.0( empty local-lis/les=0/0 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [0] r=0 lpr=26 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:12 np0005542249 systemd[1]: libpod-58e2ebddbb5c17a6809f32361de50ab7c1829589a00c9797f9ea8085e7e6c74e.scope: Deactivated successfully.
Dec  2 05:51:12 np0005542249 podman[96415]: 2025-12-02 10:51:12.949076038 +0000 UTC m=+1.664129804 container died 58e2ebddbb5c17a6809f32361de50ab7c1829589a00c9797f9ea8085e7e6c74e (image=quay.io/ceph/ceph:v18, name=beautiful_murdock, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:51:12 np0005542249 systemd[1]: var-lib-containers-storage-overlay-ea89c6088406af9a356d5b96e7f3bdcc487f73d446ca606a19e2bf17e38673b6-merged.mount: Deactivated successfully.
Dec  2 05:51:12 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v70: 6 pgs: 4 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:51:12 np0005542249 podman[96415]: 2025-12-02 10:51:12.993345162 +0000 UTC m=+1.708398928 container remove 58e2ebddbb5c17a6809f32361de50ab7c1829589a00c9797f9ea8085e7e6c74e (image=quay.io/ceph/ceph:v18, name=beautiful_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  2 05:51:13 np0005542249 systemd[1]: libpod-conmon-58e2ebddbb5c17a6809f32361de50ab7c1829589a00c9797f9ea8085e7e6c74e.scope: Deactivated successfully.
Dec  2 05:51:13 np0005542249 python3[96492]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:13 np0005542249 podman[96493]: 2025-12-02 10:51:13.356002032 +0000 UTC m=+0.059052006 container create 01a231733e597eb90df5dfe171bfed770207351a917592cb0a95eec8d305d7ab (image=quay.io/ceph/ceph:v18, name=jolly_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  2 05:51:13 np0005542249 systemd[1]: Started libpod-conmon-01a231733e597eb90df5dfe171bfed770207351a917592cb0a95eec8d305d7ab.scope.
Dec  2 05:51:13 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:13 np0005542249 podman[96493]: 2025-12-02 10:51:13.332872033 +0000 UTC m=+0.035922047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:13 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a128caadf3a2f30cfe25bf6eeb2ad077a9e9dd5cc92701fee8e8497e438998b0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:13 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a128caadf3a2f30cfe25bf6eeb2ad077a9e9dd5cc92701fee8e8497e438998b0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:13 np0005542249 podman[96493]: 2025-12-02 10:51:13.447717655 +0000 UTC m=+0.150767649 container init 01a231733e597eb90df5dfe171bfed770207351a917592cb0a95eec8d305d7ab (image=quay.io/ceph/ceph:v18, name=jolly_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:51:13 np0005542249 podman[96493]: 2025-12-02 10:51:13.453899904 +0000 UTC m=+0.156949908 container start 01a231733e597eb90df5dfe171bfed770207351a917592cb0a95eec8d305d7ab (image=quay.io/ceph/ceph:v18, name=jolly_keller, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  2 05:51:13 np0005542249 podman[96493]: 2025-12-02 10:51:13.4577824 +0000 UTC m=+0.160832394 container attach 01a231733e597eb90df5dfe171bfed770207351a917592cb0a95eec8d305d7ab (image=quay.io/ceph/ceph:v18, name=jolly_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Dec  2 05:51:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Dec  2 05:51:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Dec  2 05:51:13 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/1569189202' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  2 05:51:13 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Dec  2 05:51:13 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 27 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [0] r=0 lpr=26 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec  2 05:51:13 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3743761468' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  2 05:51:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:51:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Dec  2 05:51:14 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3743761468' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  2 05:51:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Dec  2 05:51:14 np0005542249 jolly_keller[96509]: pool 'cephfs.cephfs.data' created
Dec  2 05:51:14 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Dec  2 05:51:14 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 28 pg[7.0( empty local-lis/les=0/0 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:14 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/3743761468' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  2 05:51:14 np0005542249 systemd[1]: libpod-01a231733e597eb90df5dfe171bfed770207351a917592cb0a95eec8d305d7ab.scope: Deactivated successfully.
Dec  2 05:51:14 np0005542249 podman[96493]: 2025-12-02 10:51:14.963287711 +0000 UTC m=+1.666337675 container died 01a231733e597eb90df5dfe171bfed770207351a917592cb0a95eec8d305d7ab (image=quay.io/ceph/ceph:v18, name=jolly_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  2 05:51:14 np0005542249 systemd[1]: var-lib-containers-storage-overlay-a128caadf3a2f30cfe25bf6eeb2ad077a9e9dd5cc92701fee8e8497e438998b0-merged.mount: Deactivated successfully.
Dec  2 05:51:14 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 5 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:51:15 np0005542249 podman[96493]: 2025-12-02 10:51:15.009188379 +0000 UTC m=+1.712238353 container remove 01a231733e597eb90df5dfe171bfed770207351a917592cb0a95eec8d305d7ab (image=quay.io/ceph/ceph:v18, name=jolly_keller, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 05:51:15 np0005542249 systemd[1]: libpod-conmon-01a231733e597eb90df5dfe171bfed770207351a917592cb0a95eec8d305d7ab.scope: Deactivated successfully.
Dec  2 05:51:15 np0005542249 python3[96573]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:15 np0005542249 podman[96574]: 2025-12-02 10:51:15.394140765 +0000 UTC m=+0.045360075 container create 1b2df653a3b44c5266899130a7811fd0c321f6a7bd48faa43a0ffb8b9e6bce76 (image=quay.io/ceph/ceph:v18, name=eager_agnesi, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:15 np0005542249 systemd[1]: Started libpod-conmon-1b2df653a3b44c5266899130a7811fd0c321f6a7bd48faa43a0ffb8b9e6bce76.scope.
Dec  2 05:51:15 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:15 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/931c9221b1f499e37c6bbd13bae27d61976ccf9170988c8bed289bddc7f63579/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:15 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/931c9221b1f499e37c6bbd13bae27d61976ccf9170988c8bed289bddc7f63579/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:15 np0005542249 podman[96574]: 2025-12-02 10:51:15.371990433 +0000 UTC m=+0.023209703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:15 np0005542249 podman[96574]: 2025-12-02 10:51:15.478129469 +0000 UTC m=+0.129348809 container init 1b2df653a3b44c5266899130a7811fd0c321f6a7bd48faa43a0ffb8b9e6bce76 (image=quay.io/ceph/ceph:v18, name=eager_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:51:15 np0005542249 podman[96574]: 2025-12-02 10:51:15.489844267 +0000 UTC m=+0.141063567 container start 1b2df653a3b44c5266899130a7811fd0c321f6a7bd48faa43a0ffb8b9e6bce76 (image=quay.io/ceph/ceph:v18, name=eager_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:51:15 np0005542249 podman[96574]: 2025-12-02 10:51:15.493257049 +0000 UTC m=+0.144476359 container attach 1b2df653a3b44c5266899130a7811fd0c321f6a7bd48faa43a0ffb8b9e6bce76 (image=quay.io/ceph/ceph:v18, name=eager_agnesi, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  2 05:51:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Dec  2 05:51:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Dec  2 05:51:15 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Dec  2 05:51:15 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 29 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:15 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/3743761468' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  2 05:51:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Dec  2 05:51:16 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2084262370' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec  2 05:51:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Dec  2 05:51:16 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2084262370' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec  2 05:51:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Dec  2 05:51:16 np0005542249 eager_agnesi[96588]: enabled application 'rbd' on pool 'vms'
Dec  2 05:51:16 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Dec  2 05:51:16 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/2084262370' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec  2 05:51:16 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/2084262370' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec  2 05:51:16 np0005542249 systemd[1]: libpod-1b2df653a3b44c5266899130a7811fd0c321f6a7bd48faa43a0ffb8b9e6bce76.scope: Deactivated successfully.
Dec  2 05:51:16 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 6 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:51:16 np0005542249 podman[96574]: 2025-12-02 10:51:16.993122238 +0000 UTC m=+1.644341518 container died 1b2df653a3b44c5266899130a7811fd0c321f6a7bd48faa43a0ffb8b9e6bce76 (image=quay.io/ceph/ceph:v18, name=eager_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  2 05:51:16 np0005542249 conmon[96588]: conmon 1b2df653a3b44c526689 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1b2df653a3b44c5266899130a7811fd0c321f6a7bd48faa43a0ffb8b9e6bce76.scope/container/memory.events
Dec  2 05:51:17 np0005542249 systemd[1]: var-lib-containers-storage-overlay-931c9221b1f499e37c6bbd13bae27d61976ccf9170988c8bed289bddc7f63579-merged.mount: Deactivated successfully.
Dec  2 05:51:17 np0005542249 podman[96574]: 2025-12-02 10:51:17.037082383 +0000 UTC m=+1.688301643 container remove 1b2df653a3b44c5266899130a7811fd0c321f6a7bd48faa43a0ffb8b9e6bce76 (image=quay.io/ceph/ceph:v18, name=eager_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:51:17 np0005542249 systemd[1]: libpod-conmon-1b2df653a3b44c5266899130a7811fd0c321f6a7bd48faa43a0ffb8b9e6bce76.scope: Deactivated successfully.
Dec  2 05:51:17 np0005542249 python3[96651]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:17 np0005542249 podman[96652]: 2025-12-02 10:51:17.390966334 +0000 UTC m=+0.049586090 container create abedc8a07e79082963116590340723973439fe50c2b12148736bdfd58d2bed5d (image=quay.io/ceph/ceph:v18, name=sharp_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  2 05:51:17 np0005542249 systemd[1]: Started libpod-conmon-abedc8a07e79082963116590340723973439fe50c2b12148736bdfd58d2bed5d.scope.
Dec  2 05:51:17 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:17 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d0ca0f6f7f396e3d10ac23c2b49f26d829bab7da01a792d8d074bbe75ec14c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:17 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d0ca0f6f7f396e3d10ac23c2b49f26d829bab7da01a792d8d074bbe75ec14c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:17 np0005542249 podman[96652]: 2025-12-02 10:51:17.373467168 +0000 UTC m=+0.032086914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:17 np0005542249 podman[96652]: 2025-12-02 10:51:17.467284979 +0000 UTC m=+0.125904745 container init abedc8a07e79082963116590340723973439fe50c2b12148736bdfd58d2bed5d (image=quay.io/ceph/ceph:v18, name=sharp_meitner, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  2 05:51:17 np0005542249 podman[96652]: 2025-12-02 10:51:17.473649552 +0000 UTC m=+0.132269338 container start abedc8a07e79082963116590340723973439fe50c2b12148736bdfd58d2bed5d (image=quay.io/ceph/ceph:v18, name=sharp_meitner, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:17 np0005542249 podman[96652]: 2025-12-02 10:51:17.478103763 +0000 UTC m=+0.136723559 container attach abedc8a07e79082963116590340723973439fe50c2b12148736bdfd58d2bed5d (image=quay.io/ceph/ceph:v18, name=sharp_meitner, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  2 05:51:17 np0005542249 ceph-mon[75081]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  2 05:51:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Dec  2 05:51:18 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1985144572' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec  2 05:51:18 np0005542249 ceph-mon[75081]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  2 05:51:18 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/1985144572' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec  2 05:51:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Dec  2 05:51:18 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:51:18 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1985144572' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec  2 05:51:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Dec  2 05:51:18 np0005542249 sharp_meitner[96668]: enabled application 'rbd' on pool 'volumes'
Dec  2 05:51:18 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Dec  2 05:51:19 np0005542249 systemd[1]: libpod-abedc8a07e79082963116590340723973439fe50c2b12148736bdfd58d2bed5d.scope: Deactivated successfully.
Dec  2 05:51:19 np0005542249 podman[96693]: 2025-12-02 10:51:19.077531818 +0000 UTC m=+0.032968267 container died abedc8a07e79082963116590340723973439fe50c2b12148736bdfd58d2bed5d (image=quay.io/ceph/ceph:v18, name=sharp_meitner, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 05:51:19 np0005542249 systemd[1]: var-lib-containers-storage-overlay-b5d0ca0f6f7f396e3d10ac23c2b49f26d829bab7da01a792d8d074bbe75ec14c-merged.mount: Deactivated successfully.
Dec  2 05:51:19 np0005542249 podman[96693]: 2025-12-02 10:51:19.133262963 +0000 UTC m=+0.088699312 container remove abedc8a07e79082963116590340723973439fe50c2b12148736bdfd58d2bed5d (image=quay.io/ceph/ceph:v18, name=sharp_meitner, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:51:19 np0005542249 systemd[1]: libpod-conmon-abedc8a07e79082963116590340723973439fe50c2b12148736bdfd58d2bed5d.scope: Deactivated successfully.
Dec  2 05:51:19 np0005542249 python3[96733]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:19 np0005542249 podman[96734]: 2025-12-02 10:51:19.477669897 +0000 UTC m=+0.038578321 container create 40bd8ff2161d7c0569daa1de6a72a215bfba5bf85a04c5ae00a0bde0c04223f2 (image=quay.io/ceph/ceph:v18, name=elated_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  2 05:51:19 np0005542249 systemd[1]: Started libpod-conmon-40bd8ff2161d7c0569daa1de6a72a215bfba5bf85a04c5ae00a0bde0c04223f2.scope.
Dec  2 05:51:19 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:19 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4043a4e9024831fa0f563136e6874bba1a47beb9a896ae62707e4ed3ff1551/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:19 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4043a4e9024831fa0f563136e6874bba1a47beb9a896ae62707e4ed3ff1551/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:19 np0005542249 podman[96734]: 2025-12-02 10:51:19.459863472 +0000 UTC m=+0.020771876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:19 np0005542249 podman[96734]: 2025-12-02 10:51:19.568596299 +0000 UTC m=+0.129504693 container init 40bd8ff2161d7c0569daa1de6a72a215bfba5bf85a04c5ae00a0bde0c04223f2 (image=quay.io/ceph/ceph:v18, name=elated_cerf, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:51:19 np0005542249 podman[96734]: 2025-12-02 10:51:19.57858821 +0000 UTC m=+0.139496604 container start 40bd8ff2161d7c0569daa1de6a72a215bfba5bf85a04c5ae00a0bde0c04223f2 (image=quay.io/ceph/ceph:v18, name=elated_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 05:51:19 np0005542249 podman[96734]: 2025-12-02 10:51:19.583848663 +0000 UTC m=+0.144757107 container attach 40bd8ff2161d7c0569daa1de6a72a215bfba5bf85a04c5ae00a0bde0c04223f2 (image=quay.io/ceph/ceph:v18, name=elated_cerf, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:51:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:51:19 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/1985144572' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec  2 05:51:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Dec  2 05:51:20 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3226537830' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec  2 05:51:20 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:51:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Dec  2 05:51:21 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/3226537830' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec  2 05:51:21 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3226537830' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec  2 05:51:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Dec  2 05:51:21 np0005542249 elated_cerf[96750]: enabled application 'rbd' on pool 'backups'
Dec  2 05:51:21 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Dec  2 05:51:21 np0005542249 systemd[1]: libpod-40bd8ff2161d7c0569daa1de6a72a215bfba5bf85a04c5ae00a0bde0c04223f2.scope: Deactivated successfully.
Dec  2 05:51:21 np0005542249 podman[96734]: 2025-12-02 10:51:21.048400602 +0000 UTC m=+1.609308996 container died 40bd8ff2161d7c0569daa1de6a72a215bfba5bf85a04c5ae00a0bde0c04223f2 (image=quay.io/ceph/ceph:v18, name=elated_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:51:21 np0005542249 systemd[1]: var-lib-containers-storage-overlay-5b4043a4e9024831fa0f563136e6874bba1a47beb9a896ae62707e4ed3ff1551-merged.mount: Deactivated successfully.
Dec  2 05:51:21 np0005542249 podman[96734]: 2025-12-02 10:51:21.092058958 +0000 UTC m=+1.652967352 container remove 40bd8ff2161d7c0569daa1de6a72a215bfba5bf85a04c5ae00a0bde0c04223f2 (image=quay.io/ceph/ceph:v18, name=elated_cerf, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:21 np0005542249 systemd[1]: libpod-conmon-40bd8ff2161d7c0569daa1de6a72a215bfba5bf85a04c5ae00a0bde0c04223f2.scope: Deactivated successfully.
Dec  2 05:51:21 np0005542249 python3[96811]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:21 np0005542249 podman[96812]: 2025-12-02 10:51:21.46618663 +0000 UTC m=+0.051193882 container create c777b236b32511709748fb262ef3037e84b7a03af82e99ebcc950623777b2a87 (image=quay.io/ceph/ceph:v18, name=vigilant_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:21 np0005542249 systemd[1]: Started libpod-conmon-c777b236b32511709748fb262ef3037e84b7a03af82e99ebcc950623777b2a87.scope.
Dec  2 05:51:21 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:21 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07f58f99e2f4b8f42dc9cc4acf513c8d9c472c50b663eeb9a1153bbecfbbe4f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:21 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07f58f99e2f4b8f42dc9cc4acf513c8d9c472c50b663eeb9a1153bbecfbbe4f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:21 np0005542249 podman[96812]: 2025-12-02 10:51:21.444342737 +0000 UTC m=+0.029349959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:21 np0005542249 podman[96812]: 2025-12-02 10:51:21.552679632 +0000 UTC m=+0.137686854 container init c777b236b32511709748fb262ef3037e84b7a03af82e99ebcc950623777b2a87 (image=quay.io/ceph/ceph:v18, name=vigilant_driscoll, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 05:51:21 np0005542249 podman[96812]: 2025-12-02 10:51:21.559514428 +0000 UTC m=+0.144521650 container start c777b236b32511709748fb262ef3037e84b7a03af82e99ebcc950623777b2a87 (image=quay.io/ceph/ceph:v18, name=vigilant_driscoll, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:21 np0005542249 podman[96812]: 2025-12-02 10:51:21.563476916 +0000 UTC m=+0.148484168 container attach c777b236b32511709748fb262ef3037e84b7a03af82e99ebcc950623777b2a87 (image=quay.io/ceph/ceph:v18, name=vigilant_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  2 05:51:22 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/3226537830' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec  2 05:51:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Dec  2 05:51:22 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1330909155' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec  2 05:51:22 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v81: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:51:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec  2 05:51:23 np0005542249 ceph-mon[75081]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  2 05:51:23 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/1330909155' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec  2 05:51:23 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1330909155' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec  2 05:51:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Dec  2 05:51:23 np0005542249 vigilant_driscoll[96828]: enabled application 'rbd' on pool 'images'
Dec  2 05:51:23 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Dec  2 05:51:23 np0005542249 systemd[1]: libpod-c777b236b32511709748fb262ef3037e84b7a03af82e99ebcc950623777b2a87.scope: Deactivated successfully.
Dec  2 05:51:23 np0005542249 podman[96853]: 2025-12-02 10:51:23.156776214 +0000 UTC m=+0.040218685 container died c777b236b32511709748fb262ef3037e84b7a03af82e99ebcc950623777b2a87 (image=quay.io/ceph/ceph:v18, name=vigilant_driscoll, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  2 05:51:23 np0005542249 systemd[1]: var-lib-containers-storage-overlay-e07f58f99e2f4b8f42dc9cc4acf513c8d9c472c50b663eeb9a1153bbecfbbe4f-merged.mount: Deactivated successfully.
Dec  2 05:51:23 np0005542249 podman[96853]: 2025-12-02 10:51:23.198809807 +0000 UTC m=+0.082252298 container remove c777b236b32511709748fb262ef3037e84b7a03af82e99ebcc950623777b2a87 (image=quay.io/ceph/ceph:v18, name=vigilant_driscoll, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:51:23 np0005542249 systemd[1]: libpod-conmon-c777b236b32511709748fb262ef3037e84b7a03af82e99ebcc950623777b2a87.scope: Deactivated successfully.
Dec  2 05:51:23 np0005542249 python3[96893]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:23 np0005542249 podman[96894]: 2025-12-02 10:51:23.627892422 +0000 UTC m=+0.069019267 container create f2ec3080d1084e9e8f87a5fad6d92861d8ac1d68cc16f31563c67ef89c211ff2 (image=quay.io/ceph/ceph:v18, name=great_diffie, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 05:51:23 np0005542249 systemd[1]: Started libpod-conmon-f2ec3080d1084e9e8f87a5fad6d92861d8ac1d68cc16f31563c67ef89c211ff2.scope.
Dec  2 05:51:23 np0005542249 podman[96894]: 2025-12-02 10:51:23.598334989 +0000 UTC m=+0.039461854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:23 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:23 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4495e7cb34bf6424adbdd7e68b7b472a296c0d2b50e4621225d7e63209d6ba57/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:23 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4495e7cb34bf6424adbdd7e68b7b472a296c0d2b50e4621225d7e63209d6ba57/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:23 np0005542249 podman[96894]: 2025-12-02 10:51:23.72896302 +0000 UTC m=+0.170089935 container init f2ec3080d1084e9e8f87a5fad6d92861d8ac1d68cc16f31563c67ef89c211ff2 (image=quay.io/ceph/ceph:v18, name=great_diffie, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  2 05:51:23 np0005542249 podman[96894]: 2025-12-02 10:51:23.738713455 +0000 UTC m=+0.179840330 container start f2ec3080d1084e9e8f87a5fad6d92861d8ac1d68cc16f31563c67ef89c211ff2 (image=quay.io/ceph/ceph:v18, name=great_diffie, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  2 05:51:23 np0005542249 podman[96894]: 2025-12-02 10:51:23.742670193 +0000 UTC m=+0.183797068 container attach f2ec3080d1084e9e8f87a5fad6d92861d8ac1d68cc16f31563c67ef89c211ff2 (image=quay.io/ceph/ceph:v18, name=great_diffie, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  2 05:51:24 np0005542249 ceph-mon[75081]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  2 05:51:24 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/1330909155' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec  2 05:51:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Dec  2 05:51:24 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/674809844' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec  2 05:51:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:51:24 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v83: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:51:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Dec  2 05:51:25 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/674809844' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec  2 05:51:25 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/674809844' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec  2 05:51:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Dec  2 05:51:25 np0005542249 great_diffie[96909]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Dec  2 05:51:25 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Dec  2 05:51:25 np0005542249 systemd[1]: libpod-f2ec3080d1084e9e8f87a5fad6d92861d8ac1d68cc16f31563c67ef89c211ff2.scope: Deactivated successfully.
Dec  2 05:51:25 np0005542249 conmon[96909]: conmon f2ec3080d1084e9e8f87 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f2ec3080d1084e9e8f87a5fad6d92861d8ac1d68cc16f31563c67ef89c211ff2.scope/container/memory.events
Dec  2 05:51:25 np0005542249 podman[96894]: 2025-12-02 10:51:25.106347179 +0000 UTC m=+1.547474044 container died f2ec3080d1084e9e8f87a5fad6d92861d8ac1d68cc16f31563c67ef89c211ff2 (image=quay.io/ceph/ceph:v18, name=great_diffie, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  2 05:51:25 np0005542249 systemd[1]: var-lib-containers-storage-overlay-4495e7cb34bf6424adbdd7e68b7b472a296c0d2b50e4621225d7e63209d6ba57-merged.mount: Deactivated successfully.
Dec  2 05:51:25 np0005542249 podman[96894]: 2025-12-02 10:51:25.173414802 +0000 UTC m=+1.614541667 container remove f2ec3080d1084e9e8f87a5fad6d92861d8ac1d68cc16f31563c67ef89c211ff2 (image=quay.io/ceph/ceph:v18, name=great_diffie, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:25 np0005542249 systemd[1]: libpod-conmon-f2ec3080d1084e9e8f87a5fad6d92861d8ac1d68cc16f31563c67ef89c211ff2.scope: Deactivated successfully.
Dec  2 05:51:25 np0005542249 python3[96969]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:25 np0005542249 podman[96970]: 2025-12-02 10:51:25.59632586 +0000 UTC m=+0.063013784 container create 255d7d065db6042fde8b7035165e01f151c650ebb7a0526ce0465c7359401e10 (image=quay.io/ceph/ceph:v18, name=affectionate_heisenberg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  2 05:51:25 np0005542249 systemd[1]: Started libpod-conmon-255d7d065db6042fde8b7035165e01f151c650ebb7a0526ce0465c7359401e10.scope.
Dec  2 05:51:25 np0005542249 podman[96970]: 2025-12-02 10:51:25.564505515 +0000 UTC m=+0.031193499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:25 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:25 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd5bfd30838ab17d376b68a3e9d85abe340cf14a54e902297c9ce9a603f50754/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:25 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd5bfd30838ab17d376b68a3e9d85abe340cf14a54e902297c9ce9a603f50754/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:25 np0005542249 podman[96970]: 2025-12-02 10:51:25.692162345 +0000 UTC m=+0.158850259 container init 255d7d065db6042fde8b7035165e01f151c650ebb7a0526ce0465c7359401e10 (image=quay.io/ceph/ceph:v18, name=affectionate_heisenberg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:51:25 np0005542249 podman[96970]: 2025-12-02 10:51:25.701933171 +0000 UTC m=+0.168621065 container start 255d7d065db6042fde8b7035165e01f151c650ebb7a0526ce0465c7359401e10 (image=quay.io/ceph/ceph:v18, name=affectionate_heisenberg, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  2 05:51:25 np0005542249 podman[96970]: 2025-12-02 10:51:25.706406613 +0000 UTC m=+0.173094507 container attach 255d7d065db6042fde8b7035165e01f151c650ebb7a0526ce0465c7359401e10 (image=quay.io/ceph/ceph:v18, name=affectionate_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:51:26 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/674809844' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_10:51:26
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['.mgr', 'volumes', 'backups', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'images']
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 05:51:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Dec  2 05:51:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/392152973' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  2 05:51:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Dec  2 05:51:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 05:51:26 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v85: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:51:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Dec  2 05:51:27 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/392152973' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec  2 05:51:27 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec  2 05:51:27 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/392152973' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec  2 05:51:27 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec  2 05:51:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Dec  2 05:51:27 np0005542249 affectionate_heisenberg[96985]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Dec  2 05:51:27 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Dec  2 05:51:27 np0005542249 ceph-mgr[75372]: [progress INFO root] update: starting ev 62a521a3-805d-41cb-a075-6e96b797854a (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec  2 05:51:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Dec  2 05:51:27 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec  2 05:51:27 np0005542249 systemd[1]: libpod-255d7d065db6042fde8b7035165e01f151c650ebb7a0526ce0465c7359401e10.scope: Deactivated successfully.
Dec  2 05:51:27 np0005542249 podman[96970]: 2025-12-02 10:51:27.11098239 +0000 UTC m=+1.577670284 container died 255d7d065db6042fde8b7035165e01f151c650ebb7a0526ce0465c7359401e10 (image=quay.io/ceph/ceph:v18, name=affectionate_heisenberg, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:51:27 np0005542249 systemd[1]: var-lib-containers-storage-overlay-cd5bfd30838ab17d376b68a3e9d85abe340cf14a54e902297c9ce9a603f50754-merged.mount: Deactivated successfully.
Dec  2 05:51:27 np0005542249 podman[96970]: 2025-12-02 10:51:27.159426287 +0000 UTC m=+1.626114191 container remove 255d7d065db6042fde8b7035165e01f151c650ebb7a0526ce0465c7359401e10 (image=quay.io/ceph/ceph:v18, name=affectionate_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:51:27 np0005542249 systemd[1]: libpod-conmon-255d7d065db6042fde8b7035165e01f151c650ebb7a0526ce0465c7359401e10.scope: Deactivated successfully.
Dec  2 05:51:28 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Dec  2 05:51:28 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec  2 05:51:28 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Dec  2 05:51:28 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Dec  2 05:51:28 np0005542249 ceph-mgr[75372]: [progress INFO root] update: starting ev c295313d-889b-4f7f-8e5c-13257f25ea2b (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec  2 05:51:28 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Dec  2 05:51:28 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec  2 05:51:28 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/392152973' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec  2 05:51:28 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec  2 05:51:28 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec  2 05:51:28 np0005542249 python3[97097]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:51:28 np0005542249 python3[97168]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764672687.8791518-36675-96107471641180/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:51:28 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v88: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:51:28 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  2 05:51:28 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  2 05:51:28 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  2 05:51:28 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  2 05:51:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Dec  2 05:51:29 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec  2 05:51:29 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec  2 05:51:29 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec  2 05:51:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Dec  2 05:51:29 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Dec  2 05:51:29 np0005542249 ceph-mon[75081]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  2 05:51:29 np0005542249 ceph-mon[75081]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  2 05:51:29 np0005542249 ceph-mgr[75372]: [progress INFO root] update: starting ev a4d3f78f-15d1-46db-a9ab-e81f8896468f (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec  2 05:51:29 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 37 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=10.724641800s) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active pruub 69.508850098s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Dec  2 05:51:29 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec  2 05:51:29 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec  2 05:51:29 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec  2 05:51:29 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  2 05:51:29 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  2 05:51:29 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 37 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=10.724641800s) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown pruub 69.508850098s@ mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:29 np0005542249 python3[97270]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:51:29 np0005542249 python3[97345]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764672688.8133364-36689-28918596003732/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=e76c92c4a5734df99ad8e977d06bc77f0e5cc134 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:51:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:51:29 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 37 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37 pruub=8.106033325s) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active pruub 61.533485413s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:29 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 37 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37 pruub=8.106033325s) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown pruub 61.533485413s@ mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:29 np0005542249 python3[97395]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:30 np0005542249 podman[97396]: 2025-12-02 10:51:30.046281154 +0000 UTC m=+0.051153671 container create c67e06d563e1e13dcc53f4d1e097fc58c01799caf3763528f5f4f8d1f293218d (image=quay.io/ceph/ceph:v18, name=condescending_ritchie, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  2 05:51:30 np0005542249 systemd[1]: Started libpod-conmon-c67e06d563e1e13dcc53f4d1e097fc58c01799caf3763528f5f4f8d1f293218d.scope.
Dec  2 05:51:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Dec  2 05:51:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec  2 05:51:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Dec  2 05:51:30 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:30 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Dec  2 05:51:30 np0005542249 ceph-mgr[75372]: [progress INFO root] update: starting ev 48c814f5-fd1d-4c32-b4d3-6ebe8b10810f (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec  2 05:51:30 np0005542249 podman[97396]: 2025-12-02 10:51:30.025454828 +0000 UTC m=+0.030327365 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:30 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9b0b8b78e712e3d6637ca31e751b6a0d8186f14289c23a9f874a530f407f18/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:30 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9b0b8b78e712e3d6637ca31e751b6a0d8186f14289c23a9f874a530f407f18/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:30 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9b0b8b78e712e3d6637ca31e751b6a0d8186f14289c23a9f874a530f407f18/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0) v1
Dec  2 05:51:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.1f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.1d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.1e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.1c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.1b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.9( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.8( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.7( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.6( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.5( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.1( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.4( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.2( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.10( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.11( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.12( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.13( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.14( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.15( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.16( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.17( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.18( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.19( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.1a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.3( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.1c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.1f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.1e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.1d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.9( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.8( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.6( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.5( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.4( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.2( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.3( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.1( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.7( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.10( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.11( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.12( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.13( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.14( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.15( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.16( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.17( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.18( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.19( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.1a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.1b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec  2 05:51:30 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec  2 05:51:30 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec  2 05:51:30 np0005542249 ceph-mon[75081]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  2 05:51:30 np0005542249 ceph-mon[75081]: Cluster is now healthy
Dec  2 05:51:30 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec  2 05:51:30 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.0( empty local-lis/les=37/38 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.19( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.19( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 38 pg[2.0( empty local-lis/les=37/38 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 38 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:30 np0005542249 podman[97396]: 2025-12-02 10:51:30.143737564 +0000 UTC m=+0.148610081 container init c67e06d563e1e13dcc53f4d1e097fc58c01799caf3763528f5f4f8d1f293218d (image=quay.io/ceph/ceph:v18, name=condescending_ritchie, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:51:30 np0005542249 podman[97396]: 2025-12-02 10:51:30.155136614 +0000 UTC m=+0.160009111 container start c67e06d563e1e13dcc53f4d1e097fc58c01799caf3763528f5f4f8d1f293218d (image=quay.io/ceph/ceph:v18, name=condescending_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:51:30 np0005542249 podman[97396]: 2025-12-02 10:51:30.158344171 +0000 UTC m=+0.163216698 container attach c67e06d563e1e13dcc53f4d1e097fc58c01799caf3763528f5f4f8d1f293218d (image=quay.io/ceph/ceph:v18, name=condescending_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Dec  2 05:51:30 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Dec  2 05:51:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Dec  2 05:51:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4056736540' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  2 05:51:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4056736540' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  2 05:51:30 np0005542249 condescending_ritchie[97411]: 
Dec  2 05:51:30 np0005542249 condescending_ritchie[97411]: [global]
Dec  2 05:51:30 np0005542249 condescending_ritchie[97411]: #011fsid = 95bc4eaa-1a14-59bf-acf2-4b3da055547d
Dec  2 05:51:30 np0005542249 condescending_ritchie[97411]: #011mon_host = 192.168.122.100
Dec  2 05:51:30 np0005542249 systemd[1]: libpod-c67e06d563e1e13dcc53f4d1e097fc58c01799caf3763528f5f4f8d1f293218d.scope: Deactivated successfully.
Dec  2 05:51:30 np0005542249 podman[97436]: 2025-12-02 10:51:30.757278945 +0000 UTC m=+0.032385221 container died c67e06d563e1e13dcc53f4d1e097fc58c01799caf3763528f5f4f8d1f293218d (image=quay.io/ceph/ceph:v18, name=condescending_ritchie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  2 05:51:30 np0005542249 systemd[1]: var-lib-containers-storage-overlay-1c9b0b8b78e712e3d6637ca31e751b6a0d8186f14289c23a9f874a530f407f18-merged.mount: Deactivated successfully.
Dec  2 05:51:30 np0005542249 podman[97436]: 2025-12-02 10:51:30.821640165 +0000 UTC m=+0.096746401 container remove c67e06d563e1e13dcc53f4d1e097fc58c01799caf3763528f5f4f8d1f293218d (image=quay.io/ceph/ceph:v18, name=condescending_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Dec  2 05:51:30 np0005542249 systemd[1]: libpod-conmon-c67e06d563e1e13dcc53f4d1e097fc58c01799caf3763528f5f4f8d1f293218d.scope: Deactivated successfully.
Dec  2 05:51:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v91: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:51:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  2 05:51:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  2 05:51:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  2 05:51:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  2 05:51:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Dec  2 05:51:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec  2 05:51:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec  2 05:51:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec  2 05:51:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Dec  2 05:51:31 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Dec  2 05:51:31 np0005542249 ceph-mgr[75372]: [progress INFO root] update: starting ev 9bde9b73-7371-4f4e-94af-2764f39530b0 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec  2 05:51:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Dec  2 05:51:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec  2 05:51:31 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  2 05:51:31 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/4056736540' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  2 05:51:31 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/4056736540' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  2 05:51:31 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  2 05:51:31 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  2 05:51:31 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec  2 05:51:31 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec  2 05:51:31 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec  2 05:51:31 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec  2 05:51:31 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Dec  2 05:51:31 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Dec  2 05:51:31 np0005542249 python3[97576]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:31 np0005542249 podman[97605]: 2025-12-02 10:51:31.261787241 +0000 UTC m=+0.044944333 container create 2224453c057596122a782b3d7d4706ad42696dfb99704caffe7510eab86f9e30 (image=quay.io/ceph/ceph:v18, name=upbeat_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  2 05:51:31 np0005542249 systemd[1]: Started libpod-conmon-2224453c057596122a782b3d7d4706ad42696dfb99704caffe7510eab86f9e30.scope.
Dec  2 05:51:31 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:31 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0343a031a533a00d31c59d7b4232b83450719cbc41804e31e14e0719354a1d0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:31 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0343a031a533a00d31c59d7b4232b83450719cbc41804e31e14e0719354a1d0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:31 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0343a031a533a00d31c59d7b4232b83450719cbc41804e31e14e0719354a1d0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:31 np0005542249 ceph-mgr[75372]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
Dec  2 05:51:31 np0005542249 podman[97605]: 2025-12-02 10:51:31.241746067 +0000 UTC m=+0.024903199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:31 np0005542249 podman[97605]: 2025-12-02 10:51:31.34524081 +0000 UTC m=+0.128397912 container init 2224453c057596122a782b3d7d4706ad42696dfb99704caffe7510eab86f9e30 (image=quay.io/ceph/ceph:v18, name=upbeat_raman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  2 05:51:31 np0005542249 podman[97605]: 2025-12-02 10:51:31.35809802 +0000 UTC m=+0.141255102 container start 2224453c057596122a782b3d7d4706ad42696dfb99704caffe7510eab86f9e30 (image=quay.io/ceph/ceph:v18, name=upbeat_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 05:51:31 np0005542249 podman[97605]: 2025-12-02 10:51:31.363358043 +0000 UTC m=+0.146515155 container attach 2224453c057596122a782b3d7d4706ad42696dfb99704caffe7510eab86f9e30 (image=quay.io/ceph/ceph:v18, name=upbeat_raman, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 05:51:31 np0005542249 podman[97670]: 2025-12-02 10:51:31.493150422 +0000 UTC m=+0.064997078 container exec cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  2 05:51:31 np0005542249 podman[97670]: 2025-12-02 10:51:31.5964683 +0000 UTC m=+0.168314906 container exec_died cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:31 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 39 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=10.116951942s) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active pruub 76.946617126s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:31 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 39 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=10.116951942s) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown pruub 76.946617126s@ mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Dec  2 05:51:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/51738498' entity='client.admin' 
Dec  2 05:51:32 np0005542249 upbeat_raman[97636]: set ssl_option
Dec  2 05:51:32 np0005542249 systemd[1]: libpod-2224453c057596122a782b3d7d4706ad42696dfb99704caffe7510eab86f9e30.scope: Deactivated successfully.
Dec  2 05:51:32 np0005542249 podman[97605]: 2025-12-02 10:51:32.031456827 +0000 UTC m=+0.814613949 container died 2224453c057596122a782b3d7d4706ad42696dfb99704caffe7510eab86f9e30 (image=quay.io/ceph/ceph:v18, name=upbeat_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Dec  2 05:51:32 np0005542249 systemd[1]: var-lib-containers-storage-overlay-b0343a031a533a00d31c59d7b4232b83450719cbc41804e31e14e0719354a1d0-merged.mount: Deactivated successfully.
Dec  2 05:51:32 np0005542249 podman[97605]: 2025-12-02 10:51:32.093265867 +0000 UTC m=+0.876422959 container remove 2224453c057596122a782b3d7d4706ad42696dfb99704caffe7510eab86f9e30 (image=quay.io/ceph/ceph:v18, name=upbeat_raman, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:51:32 np0005542249 systemd[1]: libpod-conmon-2224453c057596122a782b3d7d4706ad42696dfb99704caffe7510eab86f9e30.scope: Deactivated successfully.
Dec  2 05:51:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Dec  2 05:51:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec  2 05:51:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Dec  2 05:51:32 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Dec  2 05:51:32 np0005542249 ceph-mgr[75372]: [progress INFO root] update: starting ev 8ea64f3e-20a4-4243-8c1b-1455f67a0cc1 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec  2 05:51:32 np0005542249 ceph-mgr[75372]: [progress INFO root] complete: finished ev 62a521a3-805d-41cb-a075-6e96b797854a (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec  2 05:51:32 np0005542249 ceph-mgr[75372]: [progress INFO root] Completed event 62a521a3-805d-41cb-a075-6e96b797854a (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Dec  2 05:51:32 np0005542249 ceph-mgr[75372]: [progress INFO root] complete: finished ev c295313d-889b-4f7f-8e5c-13257f25ea2b (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec  2 05:51:32 np0005542249 ceph-mgr[75372]: [progress INFO root] Completed event c295313d-889b-4f7f-8e5c-13257f25ea2b (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Dec  2 05:51:32 np0005542249 ceph-mgr[75372]: [progress INFO root] complete: finished ev a4d3f78f-15d1-46db-a9ab-e81f8896468f (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec  2 05:51:32 np0005542249 ceph-mgr[75372]: [progress INFO root] Completed event a4d3f78f-15d1-46db-a9ab-e81f8896468f (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=39 pruub=11.768332481s) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active pruub 67.616813660s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:32 np0005542249 ceph-mgr[75372]: [progress INFO root] complete: finished ev 48c814f5-fd1d-4c32-b4d3-6ebe8b10810f (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec  2 05:51:32 np0005542249 ceph-mgr[75372]: [progress INFO root] Completed event 48c814f5-fd1d-4c32-b4d3-6ebe8b10810f (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Dec  2 05:51:32 np0005542249 ceph-mgr[75372]: [progress INFO root] complete: finished ev 9bde9b73-7371-4f4e-94af-2764f39530b0 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec  2 05:51:32 np0005542249 ceph-mgr[75372]: [progress INFO root] Completed event 9bde9b73-7371-4f4e-94af-2764f39530b0 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Dec  2 05:51:32 np0005542249 ceph-mgr[75372]: [progress INFO root] complete: finished ev 8ea64f3e-20a4-4243-8c1b-1455f67a0cc1 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec  2 05:51:32 np0005542249 ceph-mgr[75372]: [progress INFO root] Completed event 8ea64f3e-20a4-4243-8c1b-1455f67a0cc1 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=39 pruub=11.768332481s) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown pruub 67.616813660s@ mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.1f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.8( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.1c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/51738498' entity='client.admin' 
Dec  2 05:51:32 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.1b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.6( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.0( empty local-lis/les=39/40 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.10( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.17( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 40 pg[4.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:51:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:51:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:51:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:51:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 05:51:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:51:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 05:51:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:32 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec  2 05:51:32 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 13b89892-2972-42ea-adf9-2b565c6f99f7 does not exist
Dec  2 05:51:32 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 2a93c0b1-2dad-4cbd-9e14-c0e15957c3ca does not exist
Dec  2 05:51:32 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev cd1a8718-74bc-42ff-933e-23cd67e18146 does not exist
Dec  2 05:51:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 05:51:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 05:51:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 05:51:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 05:51:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:51:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:51:32 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec  2 05:51:32 np0005542249 python3[97879]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:32 np0005542249 podman[97928]: 2025-12-02 10:51:32.494351303 +0000 UTC m=+0.073039987 container create 6237911ac26089e183bfc3c426033c94692a854cb9ef2dddbf237defbc48404c (image=quay.io/ceph/ceph:v18, name=hopeful_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:51:32 np0005542249 systemd[1]: Started libpod-conmon-6237911ac26089e183bfc3c426033c94692a854cb9ef2dddbf237defbc48404c.scope.
Dec  2 05:51:32 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:32 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54922a744a0dcf77a749174dcf130ba1bb1f7d4148bd1205f166d343daa4ba8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:32 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54922a744a0dcf77a749174dcf130ba1bb1f7d4148bd1205f166d343daa4ba8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:32 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54922a744a0dcf77a749174dcf130ba1bb1f7d4148bd1205f166d343daa4ba8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:32 np0005542249 podman[97928]: 2025-12-02 10:51:32.470684479 +0000 UTC m=+0.049373253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:32 np0005542249 podman[97928]: 2025-12-02 10:51:32.579551619 +0000 UTC m=+0.158240323 container init 6237911ac26089e183bfc3c426033c94692a854cb9ef2dddbf237defbc48404c (image=quay.io/ceph/ceph:v18, name=hopeful_kapitsa, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:51:32 np0005542249 podman[97928]: 2025-12-02 10:51:32.58987962 +0000 UTC m=+0.168568324 container start 6237911ac26089e183bfc3c426033c94692a854cb9ef2dddbf237defbc48404c (image=quay.io/ceph/ceph:v18, name=hopeful_kapitsa, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:51:32 np0005542249 podman[97928]: 2025-12-02 10:51:32.593300542 +0000 UTC m=+0.171989336 container attach 6237911ac26089e183bfc3c426033c94692a854cb9ef2dddbf237defbc48404c (image=quay.io/ceph/ceph:v18, name=hopeful_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 05:51:32 np0005542249 podman[98016]: 2025-12-02 10:51:32.918056542 +0000 UTC m=+0.070266841 container create 99f9750233aa2d7d8aafe06b74f4b538f462b54cb4c7beeaba61dfe4d9118cec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chatelet, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:51:32 np0005542249 systemd[76634]: Starting Mark boot as successful...
Dec  2 05:51:32 np0005542249 systemd[1]: Started libpod-conmon-99f9750233aa2d7d8aafe06b74f4b538f462b54cb4c7beeaba61dfe4d9118cec.scope.
Dec  2 05:51:32 np0005542249 systemd[76634]: Finished Mark boot as successful.
Dec  2 05:51:32 np0005542249 podman[98016]: 2025-12-02 10:51:32.887154142 +0000 UTC m=+0.039364491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:51:32 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:33 np0005542249 podman[98016]: 2025-12-02 10:51:33.007201656 +0000 UTC m=+0.159412025 container init 99f9750233aa2d7d8aafe06b74f4b538f462b54cb4c7beeaba61dfe4d9118cec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:51:33 np0005542249 podman[98016]: 2025-12-02 10:51:33.014169415 +0000 UTC m=+0.166379714 container start 99f9750233aa2d7d8aafe06b74f4b538f462b54cb4c7beeaba61dfe4d9118cec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chatelet, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:51:33 np0005542249 podman[98016]: 2025-12-02 10:51:33.019765997 +0000 UTC m=+0.171976306 container attach 99f9750233aa2d7d8aafe06b74f4b538f462b54cb4c7beeaba61dfe4d9118cec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  2 05:51:33 np0005542249 hopeful_chatelet[98053]: 167 167
Dec  2 05:51:33 np0005542249 systemd[1]: libpod-99f9750233aa2d7d8aafe06b74f4b538f462b54cb4c7beeaba61dfe4d9118cec.scope: Deactivated successfully.
Dec  2 05:51:33 np0005542249 conmon[98053]: conmon 99f9750233aa2d7d8aaf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-99f9750233aa2d7d8aafe06b74f4b538f462b54cb4c7beeaba61dfe4d9118cec.scope/container/memory.events
Dec  2 05:51:33 np0005542249 podman[98016]: 2025-12-02 10:51:33.022258215 +0000 UTC m=+0.174468514 container died 99f9750233aa2d7d8aafe06b74f4b538f462b54cb4c7beeaba61dfe4d9118cec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chatelet, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  2 05:51:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v94: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:51:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  2 05:51:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  2 05:51:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  2 05:51:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  2 05:51:33 np0005542249 systemd[1]: var-lib-containers-storage-overlay-83788f46e5831ae737a45219fec35b6d55ecd00141cc0ce0502ab9b3db589929-merged.mount: Deactivated successfully.
Dec  2 05:51:33 np0005542249 podman[98016]: 2025-12-02 10:51:33.075548874 +0000 UTC m=+0.227759173 container remove 99f9750233aa2d7d8aafe06b74f4b538f462b54cb4c7beeaba61dfe4d9118cec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  2 05:51:33 np0005542249 systemd[1]: libpod-conmon-99f9750233aa2d7d8aafe06b74f4b538f462b54cb4c7beeaba61dfe4d9118cec.scope: Deactivated successfully.
Dec  2 05:51:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Dec  2 05:51:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  2 05:51:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  2 05:51:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Dec  2 05:51:33 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Dec  2 05:51:33 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 41 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=41 pruub=12.805402756s) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active pruub 81.012954712s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 41 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=41 pruub=12.805402756s) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown pruub 81.012954712s@ mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=39/41 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:33 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 05:51:33 np0005542249 ceph-mgr[75372]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Dec  2 05:51:33 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Dec  2 05:51:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec  2 05:51:33 np0005542249 podman[98078]: 2025-12-02 10:51:33.346473479 +0000 UTC m=+0.122768138 container create 79e79252a2cc9f1d3470a0ac4a024724db3e4430c6136c85e21a382a21b978ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_keller, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  2 05:51:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:33 np0005542249 hopeful_kapitsa[97971]: Scheduled rgw.rgw update...
Dec  2 05:51:33 np0005542249 podman[98078]: 2025-12-02 10:51:33.262287841 +0000 UTC m=+0.038582520 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:51:33 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:33 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:33 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:51:33 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:33 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 05:51:33 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  2 05:51:33 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  2 05:51:33 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  2 05:51:33 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  2 05:51:33 np0005542249 systemd[1]: libpod-6237911ac26089e183bfc3c426033c94692a854cb9ef2dddbf237defbc48404c.scope: Deactivated successfully.
Dec  2 05:51:33 np0005542249 podman[97928]: 2025-12-02 10:51:33.372333773 +0000 UTC m=+0.951022467 container died 6237911ac26089e183bfc3c426033c94692a854cb9ef2dddbf237defbc48404c (image=quay.io/ceph/ceph:v18, name=hopeful_kapitsa, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  2 05:51:33 np0005542249 systemd[1]: Started libpod-conmon-79e79252a2cc9f1d3470a0ac4a024724db3e4430c6136c85e21a382a21b978ec.scope.
Dec  2 05:51:33 np0005542249 systemd[1]: var-lib-containers-storage-overlay-f54922a744a0dcf77a749174dcf130ba1bb1f7d4148bd1205f166d343daa4ba8-merged.mount: Deactivated successfully.
Dec  2 05:51:33 np0005542249 podman[97928]: 2025-12-02 10:51:33.417803739 +0000 UTC m=+0.996492473 container remove 6237911ac26089e183bfc3c426033c94692a854cb9ef2dddbf237defbc48404c (image=quay.io/ceph/ceph:v18, name=hopeful_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  2 05:51:33 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:33 np0005542249 systemd[1]: libpod-conmon-6237911ac26089e183bfc3c426033c94692a854cb9ef2dddbf237defbc48404c.scope: Deactivated successfully.
Dec  2 05:51:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2989f717e295460c8dbae338afec476ed84c052209fc2116b4ca1ee6e0eaa9fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2989f717e295460c8dbae338afec476ed84c052209fc2116b4ca1ee6e0eaa9fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2989f717e295460c8dbae338afec476ed84c052209fc2116b4ca1ee6e0eaa9fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2989f717e295460c8dbae338afec476ed84c052209fc2116b4ca1ee6e0eaa9fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2989f717e295460c8dbae338afec476ed84c052209fc2116b4ca1ee6e0eaa9fd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:33 np0005542249 podman[98078]: 2025-12-02 10:51:33.442313106 +0000 UTC m=+0.218607815 container init 79e79252a2cc9f1d3470a0ac4a024724db3e4430c6136c85e21a382a21b978ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_keller, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:33 np0005542249 podman[98078]: 2025-12-02 10:51:33.451370192 +0000 UTC m=+0.227664851 container start 79e79252a2cc9f1d3470a0ac4a024724db3e4430c6136c85e21a382a21b978ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_keller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:51:33 np0005542249 podman[98078]: 2025-12-02 10:51:33.454066095 +0000 UTC m=+0.230360754 container attach 79e79252a2cc9f1d3470a0ac4a024724db3e4430c6136c85e21a382a21b978ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_keller, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:51:33 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 41 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=41 pruub=13.989539146s) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active pruub 77.624160767s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:33 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 41 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=41 pruub=13.989539146s) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown pruub 77.624160767s@ mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Dec  2 05:51:34 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Dec  2 05:51:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec  2 05:51:34 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec  2 05:51:34 np0005542249 python3[98202]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:51:34 np0005542249 angry_keller[98104]: --> passed data devices: 0 physical, 3 LVM
Dec  2 05:51:34 np0005542249 angry_keller[98104]: --> relative data size: 1.0
Dec  2 05:51:34 np0005542249 angry_keller[98104]: --> All data devices are unavailable
Dec  2 05:51:34 np0005542249 systemd[1]: libpod-79e79252a2cc9f1d3470a0ac4a024724db3e4430c6136c85e21a382a21b978ec.scope: Deactivated successfully.
Dec  2 05:51:34 np0005542249 podman[98078]: 2025-12-02 10:51:34.549095646 +0000 UTC m=+1.325390345 container died 79e79252a2cc9f1d3470a0ac4a024724db3e4430c6136c85e21a382a21b978ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_keller, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:51:34 np0005542249 systemd[1]: libpod-79e79252a2cc9f1d3470a0ac4a024724db3e4430c6136c85e21a382a21b978ec.scope: Consumed 1.025s CPU time.
Dec  2 05:51:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Dec  2 05:51:34 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Dec  2 05:51:34 np0005542249 ceph-mon[75081]: Saving service rgw.rgw spec with placement compute-0
Dec  2 05:51:34 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.1a( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.14( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.15( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.17( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.16( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.11( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.10( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.13( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.12( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.c( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.d( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.f( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.e( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.2( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.3( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.1( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.1b( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.6( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.b( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.18( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.7( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.8( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.19( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.4( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.9( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.5( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.1e( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.a( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.1f( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.1c( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.1d( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.14( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.1e( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.1d( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.12( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.10( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.17( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.16( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.14( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.b( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.7( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.d( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.19( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.17( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.15( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.10( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.16( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.c( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.12( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.11( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.f( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.d( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.e( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.0( empty local-lis/les=41/42 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.2( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.3( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.6( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.1b( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.13( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.b( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.18( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.7( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.1( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.19( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.8( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.4( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.9( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.a( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.1e( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.1f( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.5( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.1a( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.1c( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 42 pg[6.1d( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.1e( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.1d( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.12( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.10( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.16( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.17( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.b( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.14( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.0( empty local-lis/les=41/42 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.7( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.d( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.19( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 systemd[1]: var-lib-containers-storage-overlay-2989f717e295460c8dbae338afec476ed84c052209fc2116b4ca1ee6e0eaa9fd-merged.mount: Deactivated successfully.
Dec  2 05:51:34 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:34 np0005542249 podman[98078]: 2025-12-02 10:51:34.949168804 +0000 UTC m=+1.725463463 container remove 79e79252a2cc9f1d3470a0ac4a024724db3e4430c6136c85e21a382a21b978ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_keller, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:51:34 np0005542249 python3[98294]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764672694.126865-36730-167649755270174/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:51:34 np0005542249 systemd[1]: libpod-conmon-79e79252a2cc9f1d3470a0ac4a024724db3e4430c6136c85e21a382a21b978ec.scope: Deactivated successfully.
Dec  2 05:51:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v97: 193 pgs: 62 unknown, 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:51:35 np0005542249 python3[98449]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:35 np0005542249 podman[98464]: 2025-12-02 10:51:35.564573355 +0000 UTC m=+0.072305706 container create c7a6995812ed5e1e5cd6e1133cf31d1550d958685d5c8a82fc80f799f4c5bea3 (image=quay.io/ceph/ceph:v18, name=focused_moore, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:51:35 np0005542249 systemd[1]: Started libpod-conmon-c7a6995812ed5e1e5cd6e1133cf31d1550d958685d5c8a82fc80f799f4c5bea3.scope.
Dec  2 05:51:35 np0005542249 podman[98464]: 2025-12-02 10:51:35.531613469 +0000 UTC m=+0.039345880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:35 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:35 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda7f0ee522ea2d37db72aba0e9a0f570c08bca5bf233d743f0d02cc1b403aa7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:35 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda7f0ee522ea2d37db72aba0e9a0f570c08bca5bf233d743f0d02cc1b403aa7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:35 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda7f0ee522ea2d37db72aba0e9a0f570c08bca5bf233d743f0d02cc1b403aa7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:35 np0005542249 podman[98464]: 2025-12-02 10:51:35.687660451 +0000 UTC m=+0.195392842 container init c7a6995812ed5e1e5cd6e1133cf31d1550d958685d5c8a82fc80f799f4c5bea3 (image=quay.io/ceph/ceph:v18, name=focused_moore, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  2 05:51:35 np0005542249 podman[98464]: 2025-12-02 10:51:35.698484856 +0000 UTC m=+0.206217197 container start c7a6995812ed5e1e5cd6e1133cf31d1550d958685d5c8a82fc80f799f4c5bea3 (image=quay.io/ceph/ceph:v18, name=focused_moore, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:51:35 np0005542249 podman[98464]: 2025-12-02 10:51:35.702726281 +0000 UTC m=+0.210458672 container attach c7a6995812ed5e1e5cd6e1133cf31d1550d958685d5c8a82fc80f799f4c5bea3 (image=quay.io/ceph/ceph:v18, name=focused_moore, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:51:35 np0005542249 podman[98509]: 2025-12-02 10:51:35.821551372 +0000 UTC m=+0.069620124 container create c49c42974934c168a895194ef6789e6c9c6783963f220d63c5247ea025c26ffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bassi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:51:35 np0005542249 systemd[1]: Started libpod-conmon-c49c42974934c168a895194ef6789e6c9c6783963f220d63c5247ea025c26ffa.scope.
Dec  2 05:51:35 np0005542249 podman[98509]: 2025-12-02 10:51:35.792076041 +0000 UTC m=+0.040144803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:51:35 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:35 np0005542249 podman[98509]: 2025-12-02 10:51:35.925383355 +0000 UTC m=+0.173452117 container init c49c42974934c168a895194ef6789e6c9c6783963f220d63c5247ea025c26ffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  2 05:51:35 np0005542249 podman[98509]: 2025-12-02 10:51:35.934438411 +0000 UTC m=+0.182507163 container start c49c42974934c168a895194ef6789e6c9c6783963f220d63c5247ea025c26ffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bassi, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Dec  2 05:51:35 np0005542249 podman[98509]: 2025-12-02 10:51:35.93880472 +0000 UTC m=+0.186873482 container attach c49c42974934c168a895194ef6789e6c9c6783963f220d63c5247ea025c26ffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bassi, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:51:35 np0005542249 xenodochial_bassi[98526]: 167 167
Dec  2 05:51:35 np0005542249 systemd[1]: libpod-c49c42974934c168a895194ef6789e6c9c6783963f220d63c5247ea025c26ffa.scope: Deactivated successfully.
Dec  2 05:51:35 np0005542249 podman[98509]: 2025-12-02 10:51:35.942405447 +0000 UTC m=+0.190474199 container died c49c42974934c168a895194ef6789e6c9c6783963f220d63c5247ea025c26ffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:51:35 np0005542249 systemd[1]: var-lib-containers-storage-overlay-f8dc511b5c57054eeaaf1fd8a41fe69256619c79f6a5ccc299c16a64115896a1-merged.mount: Deactivated successfully.
Dec  2 05:51:35 np0005542249 podman[98509]: 2025-12-02 10:51:35.993323422 +0000 UTC m=+0.241392174 container remove c49c42974934c168a895194ef6789e6c9c6783963f220d63c5247ea025c26ffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bassi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:51:36 np0005542249 systemd[1]: libpod-conmon-c49c42974934c168a895194ef6789e6c9c6783963f220d63c5247ea025c26ffa.scope: Deactivated successfully.
Dec  2 05:51:36 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Dec  2 05:51:36 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Dec  2 05:51:36 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec  2 05:51:36 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec  2 05:51:36 np0005542249 podman[98568]: 2025-12-02 10:51:36.181063456 +0000 UTC m=+0.043338629 container create e3e958028e4ca497a698138661b27d12ef9729694472ccaed90416e1cbd46292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:51:36 np0005542249 systemd[1]: Started libpod-conmon-e3e958028e4ca497a698138661b27d12ef9729694472ccaed90416e1cbd46292.scope.
Dec  2 05:51:36 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:36 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a553524b8b0a7663b99911ebbd720efe488cc4669923e9659461f3ae9a955274/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:36 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a553524b8b0a7663b99911ebbd720efe488cc4669923e9659461f3ae9a955274/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:36 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a553524b8b0a7663b99911ebbd720efe488cc4669923e9659461f3ae9a955274/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:36 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a553524b8b0a7663b99911ebbd720efe488cc4669923e9659461f3ae9a955274/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:36 np0005542249 podman[98568]: 2025-12-02 10:51:36.245897509 +0000 UTC m=+0.108172772 container init e3e958028e4ca497a698138661b27d12ef9729694472ccaed90416e1cbd46292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  2 05:51:36 np0005542249 podman[98568]: 2025-12-02 10:51:36.253696141 +0000 UTC m=+0.115971324 container start e3e958028e4ca497a698138661b27d12ef9729694472ccaed90416e1cbd46292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bhabha, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  2 05:51:36 np0005542249 podman[98568]: 2025-12-02 10:51:36.161789462 +0000 UTC m=+0.024064665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:51:36 np0005542249 podman[98568]: 2025-12-02 10:51:36.258557963 +0000 UTC m=+0.120833176 container attach e3e958028e4ca497a698138661b27d12ef9729694472ccaed90416e1cbd46292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bhabha, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:36 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 05:51:36 np0005542249 ceph-mgr[75372]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec  2 05:51:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Dec  2 05:51:36 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec  2 05:51:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Dec  2 05:51:36 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec  2 05:51:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Dec  2 05:51:36 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec  2 05:51:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Dec  2 05:51:36 np0005542249 ceph-mon[75081]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  2 05:51:36 np0005542249 ceph-mon[75081]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec  2 05:51:36 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0[75077]: 2025-12-02T10:51:36.281+0000 7f76fe4a0640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  2 05:51:36 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec  2 05:51:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).mds e2 new map
Dec  2 05:51:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-02T10:51:36.281920+0000#012modified#0112025-12-02T10:51:36.281965+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Dec  2 05:51:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Dec  2 05:51:36 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Dec  2 05:51:36 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Dec  2 05:51:36 np0005542249 ceph-mgr[75372]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec  2 05:51:36 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec  2 05:51:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec  2 05:51:36 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:36 np0005542249 ceph-mgr[75372]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec  2 05:51:36 np0005542249 systemd[1]: libpod-c7a6995812ed5e1e5cd6e1133cf31d1550d958685d5c8a82fc80f799f4c5bea3.scope: Deactivated successfully.
Dec  2 05:51:36 np0005542249 conmon[98505]: conmon c7a6995812ed5e1e5cd6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c7a6995812ed5e1e5cd6e1133cf31d1550d958685d5c8a82fc80f799f4c5bea3.scope/container/memory.events
Dec  2 05:51:36 np0005542249 podman[98464]: 2025-12-02 10:51:36.314996157 +0000 UTC m=+0.822728488 container died c7a6995812ed5e1e5cd6e1133cf31d1550d958685d5c8a82fc80f799f4c5bea3 (image=quay.io/ceph/ceph:v18, name=focused_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  2 05:51:36 np0005542249 ceph-mgr[75372]: [progress INFO root] Writing back 9 completed events
Dec  2 05:51:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  2 05:51:36 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:36 np0005542249 systemd[1]: var-lib-containers-storage-overlay-eda7f0ee522ea2d37db72aba0e9a0f570c08bca5bf233d743f0d02cc1b403aa7-merged.mount: Deactivated successfully.
Dec  2 05:51:36 np0005542249 podman[98464]: 2025-12-02 10:51:36.366559069 +0000 UTC m=+0.874291380 container remove c7a6995812ed5e1e5cd6e1133cf31d1550d958685d5c8a82fc80f799f4c5bea3 (image=quay.io/ceph/ceph:v18, name=focused_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  2 05:51:36 np0005542249 systemd[1]: libpod-conmon-c7a6995812ed5e1e5cd6e1133cf31d1550d958685d5c8a82fc80f799f4c5bea3.scope: Deactivated successfully.
Dec  2 05:51:36 np0005542249 python3[98630]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:36 np0005542249 podman[98631]: 2025-12-02 10:51:36.74245924 +0000 UTC m=+0.036491153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]: {
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:    "0": [
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:        {
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "devices": [
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "/dev/loop3"
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            ],
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "lv_name": "ceph_lv0",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "lv_size": "21470642176",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "name": "ceph_lv0",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "tags": {
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.cluster_name": "ceph",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.crush_device_class": "",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.encrypted": "0",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.osd_id": "0",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.type": "block",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.vdo": "0"
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            },
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "type": "block",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "vg_name": "ceph_vg0"
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:        }
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:    ],
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:    "1": [
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:        {
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "devices": [
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "/dev/loop4"
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            ],
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "lv_name": "ceph_lv1",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "lv_size": "21470642176",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "name": "ceph_lv1",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "tags": {
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.cluster_name": "ceph",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.crush_device_class": "",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.encrypted": "0",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.osd_id": "1",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.type": "block",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.vdo": "0"
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            },
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "type": "block",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "vg_name": "ceph_vg1"
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:        }
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:    ],
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:    "2": [
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:        {
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "devices": [
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "/dev/loop5"
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            ],
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "lv_name": "ceph_lv2",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "lv_size": "21470642176",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "name": "ceph_lv2",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "tags": {
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.cluster_name": "ceph",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.crush_device_class": "",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.encrypted": "0",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.osd_id": "2",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.type": "block",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:                "ceph.vdo": "0"
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            },
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "type": "block",
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:            "vg_name": "ceph_vg2"
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:        }
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]:    ]
Dec  2 05:51:37 np0005542249 goofy_bhabha[98586]: }
Dec  2 05:51:37 np0005542249 systemd[1]: libpod-e3e958028e4ca497a698138661b27d12ef9729694472ccaed90416e1cbd46292.scope: Deactivated successfully.
Dec  2 05:51:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v99: 193 pgs: 31 unknown, 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:51:37 np0005542249 podman[98631]: 2025-12-02 10:51:37.05145212 +0000 UTC m=+0.345483953 container create 4a2f817d369e6f24f0d13dc2e6403a876798495512266f6440cac94c62d32ce3 (image=quay.io/ceph/ceph:v18, name=zen_goodall, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:37 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec  2 05:51:37 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec  2 05:51:37 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec  2 05:51:37 np0005542249 ceph-mon[75081]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  2 05:51:37 np0005542249 ceph-mon[75081]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec  2 05:51:37 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec  2 05:51:37 np0005542249 ceph-mon[75081]: Saving service mds.cephfs spec with placement compute-0
Dec  2 05:51:37 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:37 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:37 np0005542249 podman[98568]: 2025-12-02 10:51:37.061339449 +0000 UTC m=+0.923614662 container died e3e958028e4ca497a698138661b27d12ef9729694472ccaed90416e1cbd46292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bhabha, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:37 np0005542249 systemd[1]: Started libpod-conmon-4a2f817d369e6f24f0d13dc2e6403a876798495512266f6440cac94c62d32ce3.scope.
Dec  2 05:51:37 np0005542249 systemd[1]: var-lib-containers-storage-overlay-a553524b8b0a7663b99911ebbd720efe488cc4669923e9659461f3ae9a955274-merged.mount: Deactivated successfully.
Dec  2 05:51:37 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec  2 05:51:37 np0005542249 podman[98568]: 2025-12-02 10:51:37.128538695 +0000 UTC m=+0.990813868 container remove e3e958028e4ca497a698138661b27d12ef9729694472ccaed90416e1cbd46292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  2 05:51:37 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Dec  2 05:51:37 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:37 np0005542249 systemd[1]: libpod-conmon-e3e958028e4ca497a698138661b27d12ef9729694472ccaed90416e1cbd46292.scope: Deactivated successfully.
Dec  2 05:51:37 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/499e4ae8f74a67c0f14d9138ebcae9b3c692608e27c61860d3b1e9861e364134/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:37 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/499e4ae8f74a67c0f14d9138ebcae9b3c692608e27c61860d3b1e9861e364134/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:37 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/499e4ae8f74a67c0f14d9138ebcae9b3c692608e27c61860d3b1e9861e364134/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:37 np0005542249 podman[98631]: 2025-12-02 10:51:37.152388445 +0000 UTC m=+0.446420358 container init 4a2f817d369e6f24f0d13dc2e6403a876798495512266f6440cac94c62d32ce3 (image=quay.io/ceph/ceph:v18, name=zen_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  2 05:51:37 np0005542249 podman[98631]: 2025-12-02 10:51:37.161157983 +0000 UTC m=+0.455189806 container start 4a2f817d369e6f24f0d13dc2e6403a876798495512266f6440cac94c62d32ce3 (image=quay.io/ceph/ceph:v18, name=zen_goodall, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:51:37 np0005542249 podman[98631]: 2025-12-02 10:51:37.165806959 +0000 UTC m=+0.459838782 container attach 4a2f817d369e6f24f0d13dc2e6403a876798495512266f6440cac94c62d32ce3 (image=quay.io/ceph/ceph:v18, name=zen_goodall, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  2 05:51:37 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.5 deep-scrub starts
Dec  2 05:51:37 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.5 deep-scrub ok
Dec  2 05:51:37 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 05:51:37 np0005542249 ceph-mgr[75372]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec  2 05:51:37 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec  2 05:51:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec  2 05:51:37 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:37 np0005542249 zen_goodall[98664]: Scheduled mds.cephfs update...
Dec  2 05:51:37 np0005542249 systemd[1]: libpod-4a2f817d369e6f24f0d13dc2e6403a876798495512266f6440cac94c62d32ce3.scope: Deactivated successfully.
Dec  2 05:51:37 np0005542249 podman[98631]: 2025-12-02 10:51:37.751558904 +0000 UTC m=+1.045590787 container died 4a2f817d369e6f24f0d13dc2e6403a876798495512266f6440cac94c62d32ce3 (image=quay.io/ceph/ceph:v18, name=zen_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:51:37 np0005542249 systemd[1]: var-lib-containers-storage-overlay-499e4ae8f74a67c0f14d9138ebcae9b3c692608e27c61860d3b1e9861e364134-merged.mount: Deactivated successfully.
Dec  2 05:51:37 np0005542249 podman[98631]: 2025-12-02 10:51:37.812508072 +0000 UTC m=+1.106539915 container remove 4a2f817d369e6f24f0d13dc2e6403a876798495512266f6440cac94c62d32ce3 (image=quay.io/ceph/ceph:v18, name=zen_goodall, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Dec  2 05:51:37 np0005542249 systemd[1]: libpod-conmon-4a2f817d369e6f24f0d13dc2e6403a876798495512266f6440cac94c62d32ce3.scope: Deactivated successfully.
Dec  2 05:51:37 np0005542249 podman[98835]: 2025-12-02 10:51:37.904951165 +0000 UTC m=+0.058127652 container create 0dce89931edcc459ebf48f27323c94e665dccf18749e20f21634733324a7375b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  2 05:51:37 np0005542249 systemd[1]: Started libpod-conmon-0dce89931edcc459ebf48f27323c94e665dccf18749e20f21634733324a7375b.scope.
Dec  2 05:51:37 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:37 np0005542249 podman[98835]: 2025-12-02 10:51:37.873348235 +0000 UTC m=+0.026524792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:51:37 np0005542249 podman[98835]: 2025-12-02 10:51:37.971773842 +0000 UTC m=+0.124950319 container init 0dce89931edcc459ebf48f27323c94e665dccf18749e20f21634733324a7375b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goodall, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:51:37 np0005542249 podman[98835]: 2025-12-02 10:51:37.978768452 +0000 UTC m=+0.131944909 container start 0dce89931edcc459ebf48f27323c94e665dccf18749e20f21634733324a7375b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goodall, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:51:37 np0005542249 podman[98835]: 2025-12-02 10:51:37.981910837 +0000 UTC m=+0.135087384 container attach 0dce89931edcc459ebf48f27323c94e665dccf18749e20f21634733324a7375b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goodall, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  2 05:51:37 np0005542249 pedantic_goodall[98851]: 167 167
Dec  2 05:51:37 np0005542249 systemd[1]: libpod-0dce89931edcc459ebf48f27323c94e665dccf18749e20f21634733324a7375b.scope: Deactivated successfully.
Dec  2 05:51:37 np0005542249 podman[98835]: 2025-12-02 10:51:37.98384666 +0000 UTC m=+0.137023117 container died 0dce89931edcc459ebf48f27323c94e665dccf18749e20f21634733324a7375b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  2 05:51:38 np0005542249 systemd[1]: var-lib-containers-storage-overlay-81b6306c03e0a24d37d264e4e703bbae0403724eb29e9bc82a65680e9f610b7a-merged.mount: Deactivated successfully.
Dec  2 05:51:38 np0005542249 podman[98835]: 2025-12-02 10:51:38.028498864 +0000 UTC m=+0.181675321 container remove 0dce89931edcc459ebf48f27323c94e665dccf18749e20f21634733324a7375b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  2 05:51:38 np0005542249 systemd[1]: libpod-conmon-0dce89931edcc459ebf48f27323c94e665dccf18749e20f21634733324a7375b.scope: Deactivated successfully.
Dec  2 05:51:38 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:38 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec  2 05:51:38 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec  2 05:51:38 np0005542249 podman[98875]: 2025-12-02 10:51:38.184461375 +0000 UTC m=+0.055779678 container create 01d9f9ebbb4048f61180a6040e3a5ee3efef3e41373db7bb75240315f8543cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  2 05:51:38 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.6 deep-scrub starts
Dec  2 05:51:38 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.6 deep-scrub ok
Dec  2 05:51:38 np0005542249 systemd[1]: Started libpod-conmon-01d9f9ebbb4048f61180a6040e3a5ee3efef3e41373db7bb75240315f8543cd4.scope.
Dec  2 05:51:38 np0005542249 podman[98875]: 2025-12-02 10:51:38.154077458 +0000 UTC m=+0.025395821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:51:38 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:38 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a794b8f8f93e5877e6df2624ef0a67c88722429994294305332905cde730d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:38 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a794b8f8f93e5877e6df2624ef0a67c88722429994294305332905cde730d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:38 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a794b8f8f93e5877e6df2624ef0a67c88722429994294305332905cde730d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:38 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a794b8f8f93e5877e6df2624ef0a67c88722429994294305332905cde730d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:38 np0005542249 podman[98875]: 2025-12-02 10:51:38.288465692 +0000 UTC m=+0.159784045 container init 01d9f9ebbb4048f61180a6040e3a5ee3efef3e41373db7bb75240315f8543cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:51:38 np0005542249 podman[98875]: 2025-12-02 10:51:38.2990666 +0000 UTC m=+0.170384903 container start 01d9f9ebbb4048f61180a6040e3a5ee3efef3e41373db7bb75240315f8543cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Dec  2 05:51:38 np0005542249 podman[98875]: 2025-12-02 10:51:38.302721339 +0000 UTC m=+0.174039682 container attach 01d9f9ebbb4048f61180a6040e3a5ee3efef3e41373db7bb75240315f8543cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_diffie, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  2 05:51:38 np0005542249 python3[98974]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 05:51:38 np0005542249 python3[99047]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764672698.2286065-36760-22842511139370/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=17ecae96b4c7e30ce02024b078d2b5cdfc359db1 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:51:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v100: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: Saving service mds.cephfs spec with placement compute-0
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.049500465s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.855865479s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.049424171s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.855819702s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.049408913s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.855850220s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.049427032s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.855865479s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.061413765s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 72.867828369s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.049386024s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.855850220s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.049365044s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.855819702s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.061343193s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.867828369s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.049174309s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.855773926s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.049150467s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.855773926s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.049144745s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.855804443s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.049126625s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.855804443s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.061437607s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 72.868133545s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.061417580s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.868133545s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.048993111s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.855728149s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.048948288s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.855751038s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.048958778s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.855728149s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.061338425s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 72.868141174s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.048933983s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.855751038s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.061317444s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.868141174s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.061272621s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 72.868148804s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.061283112s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 72.868194580s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.061260223s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.868148804s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.061269760s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.868194580s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.048732758s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.855667114s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.061235428s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 72.868225098s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.061219215s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.868225098s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.048596382s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.855628967s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.048578262s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.855628967s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.061046600s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 72.868202209s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.048493385s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.855667114s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.061032295s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.868202209s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.061351776s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 72.868537903s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.048710823s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.855667114s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.048472404s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.855667114s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.047334671s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.854644775s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.061234474s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.868537903s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.060963631s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 72.868301392s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.047319412s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.854644775s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.060944557s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.868301392s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.047230721s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.854629517s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.060852051s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 72.868255615s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.060806274s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.868255615s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.047179222s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.854629517s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.048098564s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.855583191s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.048063278s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.855583191s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.060762405s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 72.868316650s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.060747147s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.868316650s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.046967506s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.854560852s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.060906410s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 72.868522644s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.046942711s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.854560852s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.046845436s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.854560852s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.060553551s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 72.868293762s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.060533524s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.868293762s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.060768127s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.868522644s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.060039520s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 72.867851257s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.046817780s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.854560852s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.060014725s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.867851257s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.060716629s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 72.868598938s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.046401024s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.854293823s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.046380997s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.854293823s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.046371460s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.854286194s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.060696602s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.868598938s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.046322823s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.854286194s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.046289444s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.854286194s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.060364723s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 72.868362427s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.046275139s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.854286194s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.046096802s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.854141235s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.046079636s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.854141235s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.060324669s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 72.868438721s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.060306549s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.868438721s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.047716141s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.855880737s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.047696114s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.855880737s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.046120644s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.854309082s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.046042442s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.854148865s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.046091080s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.854309082s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.045907021s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.854148865s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.060214043s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 72.868469238s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.060194969s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.868469238s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.060184479s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 72.868484497s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.060158730s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.868484497s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.037450790s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.845817566s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.060103416s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 72.868492126s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.037429810s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.845817566s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.060081482s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.868492126s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.060340881s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.868362427s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 2.c deep-scrub starts
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[2.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[5.1d( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[2.2( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[2.b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[2.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[2.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[2.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.041357994s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 83.214820862s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.041301727s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.214820862s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[2.d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.801329613s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 85.975090027s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.801259995s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.975090027s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.794755936s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 85.968757629s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.794721603s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.968757629s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.800866127s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 85.975059509s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.800830841s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.975059509s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.040388107s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 83.214744568s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.040343285s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.214744568s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.040310860s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 83.214729309s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.040203094s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.214729309s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.800576210s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 85.975181580s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.800551414s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.975181580s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.039998055s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 83.214729309s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.039970398s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.214729309s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.039846420s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 83.214736938s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.800616264s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 85.975578308s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.039806366s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.214736938s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.800587654s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.975578308s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.039546967s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 83.214668274s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 2.c deep-scrub ok
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[5.9( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[5.16( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[5.12( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[2.15( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[5.13( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[2.17( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[5.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[2.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.806496620s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.581123352s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.806481361s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.581123352s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.030806541s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 83.805534363s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.030794144s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805534363s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.030710220s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 83.805519104s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.030698776s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805519104s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.039516449s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.214668274s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.806119919s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.581024170s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.806107521s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.581024170s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.030526161s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 83.805503845s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.030514717s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805503845s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.030399323s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 83.805465698s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.030386925s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805465698s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.805886269s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.581062317s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.805873871s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.581062317s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.029854774s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 83.805252075s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.029817581s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805252075s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.805377960s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.581138611s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.029339790s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 83.805259705s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.029267311s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805259705s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.029136658s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 83.805267334s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.028921127s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 83.805213928s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.029055595s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805267334s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.028831482s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805213928s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.804868698s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.581199646s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.804577827s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.581207275s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.804561615s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.581199646s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.805332184s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.581138611s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.804471016s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.581207275s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.804368973s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.581230164s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.804347038s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.581230164s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.804234505s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.581260681s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.804189682s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.581260681s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.027921677s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 83.805168152s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.027829170s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805168152s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.803754807s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.581275940s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.027308464s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 83.805084229s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.030677795s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 83.808586121s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.027278900s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805084229s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.803360939s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.581314087s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.803323746s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.581314087s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.803694725s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.581275940s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.803577423s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.581291199s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.026881218s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 83.805076599s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.803048134s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.581291199s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.026822090s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805076599s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.802855492s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.581344604s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.802828789s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.581344604s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.802808762s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.581329346s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.026586533s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 83.805160522s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.030485153s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.808586121s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.026552200s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805160522s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.802507401s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.581367493s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.802474022s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.581367493s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.025943756s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 83.805015564s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.025918007s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805015564s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.802289963s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.581405640s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.802269936s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.581405640s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.025955200s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 83.805107117s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.025910378s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805107117s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.024507523s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 83.803764343s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.025736809s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 83.804992676s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.024489403s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.803764343s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.025709152s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.804992676s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.802022934s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.581398010s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.801994324s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.581398010s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.801914215s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.581428528s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.801882744s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.581428528s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.023880005s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 83.803642273s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.015195847s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 83.794975281s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.023850441s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.803642273s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.015116692s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.794975281s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.801417351s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.581436157s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.801400185s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.581436157s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.802744865s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.581329346s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.801006317s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.581443787s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.800990105s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.581443787s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.023233414s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 83.803749084s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.817357063s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 80.597900391s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.817344666s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.597900391s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.023199081s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.803749084s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.023013115s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 83.803649902s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[6.17( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=15.022893906s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.803649902s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[6.c( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.032448769s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 83.214828491s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.792733192s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 85.975204468s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.032390594s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.214828491s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.792749405s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 85.975242615s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.792688370s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.975204468s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.792710304s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.975242615s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.792535782s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 85.975234985s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.031908989s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 83.214637756s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.031865120s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.214637756s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.031733513s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 83.214683533s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.792507172s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.975234985s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.792150497s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 85.975296021s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.031695366s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.214683533s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.792099953s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.975296021s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.791991234s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 85.975311279s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.791964531s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.975311279s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.031243324s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 83.214607239s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.031243324s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 83.214614868s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.031204224s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.214614868s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.031201363s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.214607239s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.792437553s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 85.975906372s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.792389870s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.975906372s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.030964851s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 83.214546204s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.030901909s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.214546204s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.791881561s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 85.975532532s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.030925751s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 83.214591980s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.791899681s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 85.975578308s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.791853905s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.975578308s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.791819572s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.975532532s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.030681610s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 83.214447021s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.030858994s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.214591980s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.030598640s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.214447021s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.025895119s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 83.214599609s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.025700569s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 83.214401245s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.787455559s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 85.976181030s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.025790215s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.214599609s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.025632858s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.214401245s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.787350655s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.976181030s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.025428772s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 83.214401245s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.025401115s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.214401245s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.787252426s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 85.976364136s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.024763107s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 83.213935852s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.024833679s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 83.213996887s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.787201881s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.976364136s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.024708748s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.213935852s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.024769783s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.213996887s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.787061691s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 85.976448059s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.787034035s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.976448059s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.024593353s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 83.214027405s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.786967278s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 85.976455688s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=9.024546623s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.214027405s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.786962509s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 85.976524353s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.786917686s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.976455688s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.786937714s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.976524353s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.786293030s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 85.976531982s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=11.786243439s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.976531982s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[7.1f( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[6.15( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[6.4( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[6.14( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[6.11( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[6.1e( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[6.13( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[6.1c( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 44 pg[6.1d( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[6.8( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 44 pg[6.1f( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:39 np0005542249 intelligent_diffie[98915]: {
Dec  2 05:51:39 np0005542249 intelligent_diffie[98915]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 05:51:39 np0005542249 intelligent_diffie[98915]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:51:39 np0005542249 intelligent_diffie[98915]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 05:51:39 np0005542249 intelligent_diffie[98915]:        "osd_id": 0,
Dec  2 05:51:39 np0005542249 intelligent_diffie[98915]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 05:51:39 np0005542249 intelligent_diffie[98915]:        "type": "bluestore"
Dec  2 05:51:39 np0005542249 intelligent_diffie[98915]:    },
Dec  2 05:51:39 np0005542249 intelligent_diffie[98915]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 05:51:39 np0005542249 intelligent_diffie[98915]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:51:39 np0005542249 intelligent_diffie[98915]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 05:51:39 np0005542249 intelligent_diffie[98915]:        "osd_id": 2,
Dec  2 05:51:39 np0005542249 intelligent_diffie[98915]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 05:51:39 np0005542249 intelligent_diffie[98915]:        "type": "bluestore"
Dec  2 05:51:39 np0005542249 intelligent_diffie[98915]:    },
Dec  2 05:51:39 np0005542249 intelligent_diffie[98915]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 05:51:39 np0005542249 intelligent_diffie[98915]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:51:39 np0005542249 intelligent_diffie[98915]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 05:51:39 np0005542249 intelligent_diffie[98915]:        "osd_id": 1,
Dec  2 05:51:39 np0005542249 intelligent_diffie[98915]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 05:51:39 np0005542249 intelligent_diffie[98915]:        "type": "bluestore"
Dec  2 05:51:39 np0005542249 intelligent_diffie[98915]:    }
Dec  2 05:51:39 np0005542249 intelligent_diffie[98915]: }
Dec  2 05:51:39 np0005542249 systemd[1]: libpod-01d9f9ebbb4048f61180a6040e3a5ee3efef3e41373db7bb75240315f8543cd4.scope: Deactivated successfully.
Dec  2 05:51:39 np0005542249 podman[98875]: 2025-12-02 10:51:39.32528211 +0000 UTC m=+1.196600423 container died 01d9f9ebbb4048f61180a6040e3a5ee3efef3e41373db7bb75240315f8543cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  2 05:51:39 np0005542249 systemd[1]: libpod-01d9f9ebbb4048f61180a6040e3a5ee3efef3e41373db7bb75240315f8543cd4.scope: Consumed 1.026s CPU time.
Dec  2 05:51:39 np0005542249 systemd[1]: var-lib-containers-storage-overlay-e7a794b8f8f93e5877e6df2624ef0a67c88722429994294305332905cde730d3-merged.mount: Deactivated successfully.
Dec  2 05:51:39 np0005542249 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 05:51:39 np0005542249 podman[98875]: 2025-12-02 10:51:39.395344895 +0000 UTC m=+1.266663218 container remove 01d9f9ebbb4048f61180a6040e3a5ee3efef3e41373db7bb75240315f8543cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:39 np0005542249 systemd[1]: libpod-conmon-01d9f9ebbb4048f61180a6040e3a5ee3efef3e41373db7bb75240315f8543cd4.scope: Deactivated successfully.
Dec  2 05:51:39 np0005542249 python3[99122]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:39 np0005542249 podman[99139]: 2025-12-02 10:51:39.47569409 +0000 UTC m=+0.051269155 container create 5e75966ce536deebaf4174541edfae83ca26218b2e751ae96f97b7343955ca46 (image=quay.io/ceph/ceph:v18, name=serene_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  2 05:51:39 np0005542249 systemd[1]: Started libpod-conmon-5e75966ce536deebaf4174541edfae83ca26218b2e751ae96f97b7343955ca46.scope.
Dec  2 05:51:39 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:39 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/123681ae8bd89c6e2b1f1fc4ef7f3c1d941db609e38e0fb206154380dc66ff19/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:39 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/123681ae8bd89c6e2b1f1fc4ef7f3c1d941db609e38e0fb206154380dc66ff19/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:39 np0005542249 podman[99139]: 2025-12-02 10:51:39.546616829 +0000 UTC m=+0.122191924 container init 5e75966ce536deebaf4174541edfae83ca26218b2e751ae96f97b7343955ca46 (image=quay.io/ceph/ceph:v18, name=serene_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:51:39 np0005542249 podman[99139]: 2025-12-02 10:51:39.456861768 +0000 UTC m=+0.032436833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:39 np0005542249 podman[99139]: 2025-12-02 10:51:39.555701975 +0000 UTC m=+0.131277020 container start 5e75966ce536deebaf4174541edfae83ca26218b2e751ae96f97b7343955ca46 (image=quay.io/ceph/ceph:v18, name=serene_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:51:39 np0005542249 podman[99139]: 2025-12-02 10:51:39.55919515 +0000 UTC m=+0.134770195 container attach 5e75966ce536deebaf4174541edfae83ca26218b2e751ae96f97b7343955ca46 (image=quay.io/ceph/ceph:v18, name=serene_meninsky, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  2 05:51:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=44/45 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=44/45 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=44/45 n=0 ec=39/24 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2644220209' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2644220209' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec  2 05:51:40 np0005542249 systemd[1]: libpod-5e75966ce536deebaf4174541edfae83ca26218b2e751ae96f97b7343955ca46.scope: Deactivated successfully.
Dec  2 05:51:40 np0005542249 podman[99139]: 2025-12-02 10:51:40.185398016 +0000 UTC m=+0.760973051 container died 5e75966ce536deebaf4174541edfae83ca26218b2e751ae96f97b7343955ca46 (image=quay.io/ceph/ceph:v18, name=serene_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:40 np0005542249 systemd[1]: var-lib-containers-storage-overlay-123681ae8bd89c6e2b1f1fc4ef7f3c1d941db609e38e0fb206154380dc66ff19-merged.mount: Deactivated successfully.
Dec  2 05:51:40 np0005542249 podman[99139]: 2025-12-02 10:51:40.237358878 +0000 UTC m=+0.812933923 container remove 5e75966ce536deebaf4174541edfae83ca26218b2e751ae96f97b7343955ca46 (image=quay.io/ceph/ceph:v18, name=serene_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Dec  2 05:51:40 np0005542249 systemd[1]: libpod-conmon-5e75966ce536deebaf4174541edfae83ca26218b2e751ae96f97b7343955ca46.scope: Deactivated successfully.
Dec  2 05:51:40 np0005542249 podman[99412]: 2025-12-02 10:51:40.324915258 +0000 UTC m=+0.065133711 container exec cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:51:40 np0005542249 podman[99412]: 2025-12-02 10:51:40.425106163 +0000 UTC m=+0.165324626 container exec_died cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:51:40 np0005542249 python3[99533]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:40 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 1e78dd62-4540-44c4-8de6-9bf9b1f9fb9f does not exist
Dec  2 05:51:40 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev e851582d-ec37-4bd6-93d1-44b307f1f59d does not exist
Dec  2 05:51:40 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev ee7ed32f-95f4-4df9-828e-eef80c24689b does not exist
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 05:51:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 05:51:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:51:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:51:41 np0005542249 podman[99560]: 2025-12-02 10:51:41.020572792 +0000 UTC m=+0.047473552 container create 7a8e6cde96de5ebe57edec2c60017f8645c779378d85f36db0ee6a8660f4f7a4 (image=quay.io/ceph/ceph:v18, name=distracted_lehmann, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:51:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:51:41 np0005542249 systemd[1]: Started libpod-conmon-7a8e6cde96de5ebe57edec2c60017f8645c779378d85f36db0ee6a8660f4f7a4.scope.
Dec  2 05:51:41 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/2644220209' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec  2 05:51:41 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/2644220209' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec  2 05:51:41 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:41 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:41 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:51:41 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:41 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 05:51:41 np0005542249 podman[99560]: 2025-12-02 10:51:41.00061904 +0000 UTC m=+0.027519850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:41 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:41 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fc1e9fb6328f54f397fd70ebb624b907a8451edb7c832d708b75fe410f508a3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:41 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fc1e9fb6328f54f397fd70ebb624b907a8451edb7c832d708b75fe410f508a3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:41 np0005542249 podman[99560]: 2025-12-02 10:51:41.133336718 +0000 UTC m=+0.160237558 container init 7a8e6cde96de5ebe57edec2c60017f8645c779378d85f36db0ee6a8660f4f7a4 (image=quay.io/ceph/ceph:v18, name=distracted_lehmann, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:51:41 np0005542249 podman[99560]: 2025-12-02 10:51:41.142107976 +0000 UTC m=+0.169008746 container start 7a8e6cde96de5ebe57edec2c60017f8645c779378d85f36db0ee6a8660f4f7a4 (image=quay.io/ceph/ceph:v18, name=distracted_lehmann, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  2 05:51:41 np0005542249 podman[99560]: 2025-12-02 10:51:41.14591766 +0000 UTC m=+0.172818470 container attach 7a8e6cde96de5ebe57edec2c60017f8645c779378d85f36db0ee6a8660f4f7a4 (image=quay.io/ceph/ceph:v18, name=distracted_lehmann, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:51:41 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.b scrub starts
Dec  2 05:51:41 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.b scrub ok
Dec  2 05:51:41 np0005542249 ceph-mgr[75372]: [progress INFO root] Completed event ed218532-5d4b-4ba7-970c-3ba8ca79adc4 (Global Recovery Event) in 10 seconds
Dec  2 05:51:41 np0005542249 podman[99740]: 2025-12-02 10:51:41.733824084 +0000 UTC m=+0.058389099 container create 88dbba95950b26b749c1244f1aff562e77702d089a9c1da76e425a2e1c0845ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  2 05:51:41 np0005542249 systemd[1]: Started libpod-conmon-88dbba95950b26b749c1244f1aff562e77702d089a9c1da76e425a2e1c0845ce.scope.
Dec  2 05:51:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec  2 05:51:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3720793409' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  2 05:51:41 np0005542249 distracted_lehmann[99599]: 
Dec  2 05:51:41 np0005542249 distracted_lehmann[99599]: {"fsid":"95bc4eaa-1a14-59bf-acf2-4b3da055547d","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":182,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":45,"num_osds":3,"num_up_osds":3,"osd_up_since":1764672643,"num_in_osds":3,"osd_in_since":1764672613,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84205568,"bytes_avail":64327720960,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2025-12-02T10:51:37.048931+0000","services":{"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"ed218532-5d4b-4ba7-970c-3ba8ca79adc4":{"message":"Global Recovery Event (5s)\n      [===================.........] (remaining: 2s)","progress":0.68947368860244751,"add_to_ceph_s":true}}}
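The `HEALTH_ERR` payload above is machine-readable, so the health checks can be pulled out programmatically. A minimal sketch, assuming the `ceph status --format json` output has been captured to a string (the JSON below is a trimmed excerpt of the logged output, not the full document):

```python
import json

# Trimmed excerpt of the `ceph status --format json` output logged above.
status_json = '''
{"fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
 "health": {"status": "HEALTH_ERR",
            "checks": {"MDS_ALL_DOWN": {"severity": "HEALTH_ERR",
                                        "summary": {"message": "1 filesystem is offline", "count": 1}},
                       "MDS_UP_LESS_THAN_MAX": {"severity": "HEALTH_WARN",
                                                "summary": {"message": "1 filesystem is online with fewer MDS than max_mds", "count": 1}}}},
 "osdmap": {"num_osds": 3, "num_up_osds": 3, "num_in_osds": 3}}
'''

status = json.loads(status_json)
print(status["health"]["status"])
# Each health check carries a severity and a human-readable summary.
for name, check in status["health"]["checks"].items():
    print(f'{check["severity"]}: {name}: {check["summary"]["message"]}')
```

Here `MDS_ALL_DOWN` is the check driving `HEALTH_ERR`, consistent with the `fsmap` in the same payload showing `"up":0` MDS daemons.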
Dec  2 05:51:41 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:41 np0005542249 systemd[1]: libpod-7a8e6cde96de5ebe57edec2c60017f8645c779378d85f36db0ee6a8660f4f7a4.scope: Deactivated successfully.
Dec  2 05:51:41 np0005542249 podman[99560]: 2025-12-02 10:51:41.797440853 +0000 UTC m=+0.824341653 container died 7a8e6cde96de5ebe57edec2c60017f8645c779378d85f36db0ee6a8660f4f7a4 (image=quay.io/ceph/ceph:v18, name=distracted_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 05:51:41 np0005542249 podman[99740]: 2025-12-02 10:51:41.714397066 +0000 UTC m=+0.038962101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:51:41 np0005542249 podman[99740]: 2025-12-02 10:51:41.808509904 +0000 UTC m=+0.133074949 container init 88dbba95950b26b749c1244f1aff562e77702d089a9c1da76e425a2e1c0845ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mendeleev, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  2 05:51:41 np0005542249 podman[99740]: 2025-12-02 10:51:41.814021714 +0000 UTC m=+0.138586729 container start 88dbba95950b26b749c1244f1aff562e77702d089a9c1da76e425a2e1c0845ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:51:41 np0005542249 adoring_mendeleev[99756]: 167 167
Dec  2 05:51:41 np0005542249 podman[99740]: 2025-12-02 10:51:41.817379155 +0000 UTC m=+0.141944260 container attach 88dbba95950b26b749c1244f1aff562e77702d089a9c1da76e425a2e1c0845ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mendeleev, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:51:41 np0005542249 systemd[1]: libpod-88dbba95950b26b749c1244f1aff562e77702d089a9c1da76e425a2e1c0845ce.scope: Deactivated successfully.
Dec  2 05:51:41 np0005542249 podman[99740]: 2025-12-02 10:51:41.818561358 +0000 UTC m=+0.143126373 container died 88dbba95950b26b749c1244f1aff562e77702d089a9c1da76e425a2e1c0845ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  2 05:51:41 np0005542249 systemd[1]: var-lib-containers-storage-overlay-1fc1e9fb6328f54f397fd70ebb624b907a8451edb7c832d708b75fe410f508a3-merged.mount: Deactivated successfully.
Dec  2 05:51:41 np0005542249 podman[99560]: 2025-12-02 10:51:41.849591601 +0000 UTC m=+0.876492361 container remove 7a8e6cde96de5ebe57edec2c60017f8645c779378d85f36db0ee6a8660f4f7a4 (image=quay.io/ceph/ceph:v18, name=distracted_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  2 05:51:41 np0005542249 systemd[1]: var-lib-containers-storage-overlay-ae6e62ca9e346a51343bd52b93baa27f8dfb304943ccbb35e872ce7c4244917f-merged.mount: Deactivated successfully.
Dec  2 05:51:41 np0005542249 systemd[1]: libpod-conmon-7a8e6cde96de5ebe57edec2c60017f8645c779378d85f36db0ee6a8660f4f7a4.scope: Deactivated successfully.
Dec  2 05:51:41 np0005542249 podman[99740]: 2025-12-02 10:51:41.870292634 +0000 UTC m=+0.194857649 container remove 88dbba95950b26b749c1244f1aff562e77702d089a9c1da76e425a2e1c0845ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mendeleev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  2 05:51:41 np0005542249 systemd[1]: libpod-conmon-88dbba95950b26b749c1244f1aff562e77702d089a9c1da76e425a2e1c0845ce.scope: Deactivated successfully.
Dec  2 05:51:42 np0005542249 podman[99805]: 2025-12-02 10:51:42.031795915 +0000 UTC m=+0.044627874 container create fdc4297b2179930b4213c672ee0fe7635588f09a4a7bf3aa2e0dc967a230e6df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swanson, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  2 05:51:42 np0005542249 systemd[1]: Started libpod-conmon-fdc4297b2179930b4213c672ee0fe7635588f09a4a7bf3aa2e0dc967a230e6df.scope.
Dec  2 05:51:42 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:42 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8358e2ffdc153fc33bc2691910295db2e754e8483e838f8514c1894517bbd11f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:42 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8358e2ffdc153fc33bc2691910295db2e754e8483e838f8514c1894517bbd11f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:42 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8358e2ffdc153fc33bc2691910295db2e754e8483e838f8514c1894517bbd11f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:42 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8358e2ffdc153fc33bc2691910295db2e754e8483e838f8514c1894517bbd11f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:42 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8358e2ffdc153fc33bc2691910295db2e754e8483e838f8514c1894517bbd11f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:42 np0005542249 podman[99805]: 2025-12-02 10:51:42.010124316 +0000 UTC m=+0.022956295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:51:42 np0005542249 podman[99805]: 2025-12-02 10:51:42.124526747 +0000 UTC m=+0.137358716 container init fdc4297b2179930b4213c672ee0fe7635588f09a4a7bf3aa2e0dc967a230e6df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:51:42 np0005542249 podman[99805]: 2025-12-02 10:51:42.131420074 +0000 UTC m=+0.144252023 container start fdc4297b2179930b4213c672ee0fe7635588f09a4a7bf3aa2e0dc967a230e6df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swanson, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:51:42 np0005542249 podman[99805]: 2025-12-02 10:51:42.134881908 +0000 UTC m=+0.147713867 container attach fdc4297b2179930b4213c672ee0fe7635588f09a4a7bf3aa2e0dc967a230e6df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:51:42 np0005542249 python3[99832]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:42 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 2.e scrub starts
Dec  2 05:51:42 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 2.e scrub ok
Dec  2 05:51:42 np0005542249 podman[99841]: 2025-12-02 10:51:42.226116518 +0000 UTC m=+0.054268407 container create 8804984badb62ec94bf3b28d1f1b38cc0201143ad08944b41733b1f3fe3ba0cd (image=quay.io/ceph/ceph:v18, name=happy_dijkstra, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  2 05:51:42 np0005542249 systemd[1]: Started libpod-conmon-8804984badb62ec94bf3b28d1f1b38cc0201143ad08944b41733b1f3fe3ba0cd.scope.
Dec  2 05:51:42 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:42 np0005542249 podman[99841]: 2025-12-02 10:51:42.199518285 +0000 UTC m=+0.027670254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:42 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e626cbe99f8ac00bcdb4829bb1c7443f898f42cda156133c06cf4e68c138ec/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:42 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e626cbe99f8ac00bcdb4829bb1c7443f898f42cda156133c06cf4e68c138ec/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:42 np0005542249 podman[99841]: 2025-12-02 10:51:42.317257646 +0000 UTC m=+0.145409555 container init 8804984badb62ec94bf3b28d1f1b38cc0201143ad08944b41733b1f3fe3ba0cd (image=quay.io/ceph/ceph:v18, name=happy_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:42 np0005542249 podman[99841]: 2025-12-02 10:51:42.322042416 +0000 UTC m=+0.150194305 container start 8804984badb62ec94bf3b28d1f1b38cc0201143ad08944b41733b1f3fe3ba0cd (image=quay.io/ceph/ceph:v18, name=happy_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:42 np0005542249 podman[99841]: 2025-12-02 10:51:42.325299765 +0000 UTC m=+0.153451654 container attach 8804984badb62ec94bf3b28d1f1b38cc0201143ad08944b41733b1f3fe3ba0cd (image=quay.io/ceph/ceph:v18, name=happy_dijkstra, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  2 05:51:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 05:51:42 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3520222890' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 05:51:42 np0005542249 happy_dijkstra[99857]: 
Dec  2 05:51:42 np0005542249 happy_dijkstra[99857]: {"epoch":1,"fsid":"95bc4eaa-1a14-59bf-acf2-4b3da055547d","modified":"2025-12-02T10:48:33.594680Z","created":"2025-12-02T10:48:33.594680Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Dec  2 05:51:42 np0005542249 happy_dijkstra[99857]: dumped monmap epoch 1
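The `mon dump --format json` output above is what the adoption playbook (the `ansible-ansible.legacy.command` invocation at 05:51:42) consumes to discover monitor endpoints. A minimal sketch of extracting the v2/v1 addresses, assuming the JSON has been captured to a string (trimmed excerpt of the logged monmap, not the full document):

```python
import json

# Trimmed excerpt of the `mon dump --format json` output logged above.
monmap_json = '''
{"epoch": 1, "fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
 "min_mon_release_name": "reef",
 "mons": [{"rank": 0, "name": "compute-0",
           "public_addrs": {"addrvec": [{"type": "v2", "addr": "192.168.122.100:3300", "nonce": 0},
                                        {"type": "v1", "addr": "192.168.122.100:6789", "nonce": 0}]}}],
 "quorum": [0]}
'''

monmap = json.loads(monmap_json)
# Each monitor advertises an address vector: msgr2 (v2, port 3300)
# and legacy msgr1 (v1, port 6789) endpoints.
for mon in monmap["mons"]:
    addrs = [a["addr"] for a in mon["public_addrs"]["addrvec"]]
    print(mon["name"], addrs)
```

With a single monitor in quorum (`"quorum":[0]`), this yields one entry for `compute-0` with both endpoints on 192.168.122.100.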
Dec  2 05:51:42 np0005542249 systemd[1]: libpod-8804984badb62ec94bf3b28d1f1b38cc0201143ad08944b41733b1f3fe3ba0cd.scope: Deactivated successfully.
Dec  2 05:51:42 np0005542249 podman[99890]: 2025-12-02 10:51:42.970987249 +0000 UTC m=+0.026808479 container died 8804984badb62ec94bf3b28d1f1b38cc0201143ad08944b41733b1f3fe3ba0cd (image=quay.io/ceph/ceph:v18, name=happy_dijkstra, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  2 05:51:42 np0005542249 systemd[1]: var-lib-containers-storage-overlay-36e626cbe99f8ac00bcdb4829bb1c7443f898f42cda156133c06cf4e68c138ec-merged.mount: Deactivated successfully.
Dec  2 05:51:43 np0005542249 podman[99890]: 2025-12-02 10:51:43.016083315 +0000 UTC m=+0.071904545 container remove 8804984badb62ec94bf3b28d1f1b38cc0201143ad08944b41733b1f3fe3ba0cd (image=quay.io/ceph/ceph:v18, name=happy_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  2 05:51:43 np0005542249 systemd[1]: libpod-conmon-8804984badb62ec94bf3b28d1f1b38cc0201143ad08944b41733b1f3fe3ba0cd.scope: Deactivated successfully.
Dec  2 05:51:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:51:43 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Dec  2 05:51:43 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Dec  2 05:51:43 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Dec  2 05:51:43 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Dec  2 05:51:43 np0005542249 naughty_swanson[99836]: --> passed data devices: 0 physical, 3 LVM
Dec  2 05:51:43 np0005542249 naughty_swanson[99836]: --> relative data size: 1.0
Dec  2 05:51:43 np0005542249 naughty_swanson[99836]: --> All data devices are unavailable
Dec  2 05:51:43 np0005542249 systemd[1]: libpod-fdc4297b2179930b4213c672ee0fe7635588f09a4a7bf3aa2e0dc967a230e6df.scope: Deactivated successfully.
Dec  2 05:51:43 np0005542249 podman[99805]: 2025-12-02 10:51:43.195837883 +0000 UTC m=+1.208669832 container died fdc4297b2179930b4213c672ee0fe7635588f09a4a7bf3aa2e0dc967a230e6df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swanson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  2 05:51:43 np0005542249 systemd[1]: libpod-fdc4297b2179930b4213c672ee0fe7635588f09a4a7bf3aa2e0dc967a230e6df.scope: Consumed 1.027s CPU time.
Dec  2 05:51:43 np0005542249 systemd[1]: var-lib-containers-storage-overlay-8358e2ffdc153fc33bc2691910295db2e754e8483e838f8514c1894517bbd11f-merged.mount: Deactivated successfully.
Dec  2 05:51:43 np0005542249 podman[99805]: 2025-12-02 10:51:43.247839527 +0000 UTC m=+1.260671476 container remove fdc4297b2179930b4213c672ee0fe7635588f09a4a7bf3aa2e0dc967a230e6df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  2 05:51:43 np0005542249 systemd[1]: libpod-conmon-fdc4297b2179930b4213c672ee0fe7635588f09a4a7bf3aa2e0dc967a230e6df.scope: Deactivated successfully.
Dec  2 05:51:43 np0005542249 python3[100016]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:43 np0005542249 podman[100060]: 2025-12-02 10:51:43.593455783 +0000 UTC m=+0.042977139 container create b0a001d5e1970024763f79ba5581ab8c3a1e419145478d7ec94a69dbac5ddb82 (image=quay.io/ceph/ceph:v18, name=determined_napier, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:43 np0005542249 systemd[1]: Started libpod-conmon-b0a001d5e1970024763f79ba5581ab8c3a1e419145478d7ec94a69dbac5ddb82.scope.
Dec  2 05:51:43 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:43 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc039b8d8197166ed85ddaf99eb336a5e568014cd10882ac057f211d44abb497/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:43 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc039b8d8197166ed85ddaf99eb336a5e568014cd10882ac057f211d44abb497/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:43 np0005542249 podman[100060]: 2025-12-02 10:51:43.577515039 +0000 UTC m=+0.027036445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:43 np0005542249 podman[100060]: 2025-12-02 10:51:43.682638058 +0000 UTC m=+0.132159454 container init b0a001d5e1970024763f79ba5581ab8c3a1e419145478d7ec94a69dbac5ddb82 (image=quay.io/ceph/ceph:v18, name=determined_napier, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 05:51:43 np0005542249 podman[100060]: 2025-12-02 10:51:43.689485894 +0000 UTC m=+0.139007260 container start b0a001d5e1970024763f79ba5581ab8c3a1e419145478d7ec94a69dbac5ddb82 (image=quay.io/ceph/ceph:v18, name=determined_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  2 05:51:43 np0005542249 podman[100060]: 2025-12-02 10:51:43.693287877 +0000 UTC m=+0.142809243 container attach b0a001d5e1970024763f79ba5581ab8c3a1e419145478d7ec94a69dbac5ddb82 (image=quay.io/ceph/ceph:v18, name=determined_napier, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  2 05:51:43 np0005542249 podman[100120]: 2025-12-02 10:51:43.841020474 +0000 UTC m=+0.040871103 container create 32667f8de2a14e9f5980d5606515c9433a63395d92d0a33e7aa71b641be1473b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:43 np0005542249 systemd[1]: Started libpod-conmon-32667f8de2a14e9f5980d5606515c9433a63395d92d0a33e7aa71b641be1473b.scope.
Dec  2 05:51:43 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:43 np0005542249 podman[100120]: 2025-12-02 10:51:43.91257059 +0000 UTC m=+0.112421279 container init 32667f8de2a14e9f5980d5606515c9433a63395d92d0a33e7aa71b641be1473b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  2 05:51:43 np0005542249 podman[100120]: 2025-12-02 10:51:43.823773515 +0000 UTC m=+0.023624164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:51:43 np0005542249 podman[100120]: 2025-12-02 10:51:43.920731702 +0000 UTC m=+0.120582341 container start 32667f8de2a14e9f5980d5606515c9433a63395d92d0a33e7aa71b641be1473b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:43 np0005542249 gifted_proskuriakova[100136]: 167 167
Dec  2 05:51:43 np0005542249 podman[100120]: 2025-12-02 10:51:43.924295818 +0000 UTC m=+0.124146467 container attach 32667f8de2a14e9f5980d5606515c9433a63395d92d0a33e7aa71b641be1473b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:51:43 np0005542249 systemd[1]: libpod-32667f8de2a14e9f5980d5606515c9433a63395d92d0a33e7aa71b641be1473b.scope: Deactivated successfully.
Dec  2 05:51:43 np0005542249 podman[100120]: 2025-12-02 10:51:43.925350267 +0000 UTC m=+0.125200896 container died 32667f8de2a14e9f5980d5606515c9433a63395d92d0a33e7aa71b641be1473b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:51:43 np0005542249 systemd[1]: var-lib-containers-storage-overlay-8810bfcc661ac6e1b26238f51ca886641e96b4c831564c684b68d609ac30ab8c-merged.mount: Deactivated successfully.
Dec  2 05:51:43 np0005542249 podman[100120]: 2025-12-02 10:51:43.964482091 +0000 UTC m=+0.164332720 container remove 32667f8de2a14e9f5980d5606515c9433a63395d92d0a33e7aa71b641be1473b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  2 05:51:43 np0005542249 systemd[1]: libpod-conmon-32667f8de2a14e9f5980d5606515c9433a63395d92d0a33e7aa71b641be1473b.scope: Deactivated successfully.
Dec  2 05:51:44 np0005542249 podman[100181]: 2025-12-02 10:51:44.104768615 +0000 UTC m=+0.040423990 container create c6ee9e7291a819a1f02d10926f63fcc20fd1a45d6ebd815286d1cf6efbc6011a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaplygin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  2 05:51:44 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 4.6 deep-scrub starts
Dec  2 05:51:44 np0005542249 systemd[1]: Started libpod-conmon-c6ee9e7291a819a1f02d10926f63fcc20fd1a45d6ebd815286d1cf6efbc6011a.scope.
Dec  2 05:51:44 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 4.6 deep-scrub ok
Dec  2 05:51:44 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:44 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabc2c51f0f5fe4e48eabdab99b8c30c491cd39ca21275646481f823675a8575/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:44 np0005542249 podman[100181]: 2025-12-02 10:51:44.086315573 +0000 UTC m=+0.021970978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:51:44 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabc2c51f0f5fe4e48eabdab99b8c30c491cd39ca21275646481f823675a8575/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:44 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabc2c51f0f5fe4e48eabdab99b8c30c491cd39ca21275646481f823675a8575/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:44 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabc2c51f0f5fe4e48eabdab99b8c30c491cd39ca21275646481f823675a8575/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:44 np0005542249 podman[100181]: 2025-12-02 10:51:44.196216171 +0000 UTC m=+0.131871566 container init c6ee9e7291a819a1f02d10926f63fcc20fd1a45d6ebd815286d1cf6efbc6011a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  2 05:51:44 np0005542249 podman[100181]: 2025-12-02 10:51:44.206807359 +0000 UTC m=+0.142462774 container start c6ee9e7291a819a1f02d10926f63fcc20fd1a45d6ebd815286d1cf6efbc6011a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  2 05:51:44 np0005542249 podman[100181]: 2025-12-02 10:51:44.214406845 +0000 UTC m=+0.150062240 container attach c6ee9e7291a819a1f02d10926f63fcc20fd1a45d6ebd815286d1cf6efbc6011a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  2 05:51:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Dec  2 05:51:44 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1504350899' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec  2 05:51:44 np0005542249 determined_napier[100076]: [client.openstack]
Dec  2 05:51:44 np0005542249 determined_napier[100076]: #011key = AQDlwy5pAAAAABAAgtssPDtZlvdeFi0aRcVzWw==
Dec  2 05:51:44 np0005542249 determined_napier[100076]: #011caps mgr = "allow *"
Dec  2 05:51:44 np0005542249 determined_napier[100076]: #011caps mon = "profile rbd"
Dec  2 05:51:44 np0005542249 determined_napier[100076]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Dec  2 05:51:44 np0005542249 systemd[1]: libpod-b0a001d5e1970024763f79ba5581ab8c3a1e419145478d7ec94a69dbac5ddb82.scope: Deactivated successfully.
Dec  2 05:51:44 np0005542249 podman[100205]: 2025-12-02 10:51:44.337583044 +0000 UTC m=+0.025153045 container died b0a001d5e1970024763f79ba5581ab8c3a1e419145478d7ec94a69dbac5ddb82 (image=quay.io/ceph/ceph:v18, name=determined_napier, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:44 np0005542249 systemd[1]: var-lib-containers-storage-overlay-bc039b8d8197166ed85ddaf99eb336a5e568014cd10882ac057f211d44abb497-merged.mount: Deactivated successfully.
Dec  2 05:51:44 np0005542249 podman[100205]: 2025-12-02 10:51:44.381780886 +0000 UTC m=+0.069350867 container remove b0a001d5e1970024763f79ba5581ab8c3a1e419145478d7ec94a69dbac5ddb82 (image=quay.io/ceph/ceph:v18, name=determined_napier, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 05:51:44 np0005542249 systemd[1]: libpod-conmon-b0a001d5e1970024763f79ba5581ab8c3a1e419145478d7ec94a69dbac5ddb82.scope: Deactivated successfully.
Dec  2 05:51:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]: {
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:    "0": [
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:        {
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "devices": [
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "/dev/loop3"
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            ],
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "lv_name": "ceph_lv0",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "lv_size": "21470642176",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "name": "ceph_lv0",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "tags": {
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.cluster_name": "ceph",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.crush_device_class": "",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.encrypted": "0",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.osd_id": "0",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.type": "block",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.vdo": "0"
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            },
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "type": "block",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "vg_name": "ceph_vg0"
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:        }
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:    ],
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:    "1": [
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:        {
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "devices": [
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "/dev/loop4"
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            ],
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "lv_name": "ceph_lv1",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "lv_size": "21470642176",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "name": "ceph_lv1",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "tags": {
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.cluster_name": "ceph",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.crush_device_class": "",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.encrypted": "0",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.osd_id": "1",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.type": "block",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.vdo": "0"
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            },
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "type": "block",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "vg_name": "ceph_vg1"
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:        }
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:    ],
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:    "2": [
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:        {
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "devices": [
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "/dev/loop5"
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            ],
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "lv_name": "ceph_lv2",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "lv_size": "21470642176",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "name": "ceph_lv2",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "tags": {
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.cluster_name": "ceph",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.crush_device_class": "",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.encrypted": "0",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.osd_id": "2",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.type": "block",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:                "ceph.vdo": "0"
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            },
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "type": "block",
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:            "vg_name": "ceph_vg2"
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:        }
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]:    ]
Dec  2 05:51:44 np0005542249 sharp_chaplygin[100198]: }
Dec  2 05:51:44 np0005542249 systemd[1]: libpod-c6ee9e7291a819a1f02d10926f63fcc20fd1a45d6ebd815286d1cf6efbc6011a.scope: Deactivated successfully.
Dec  2 05:51:44 np0005542249 podman[100181]: 2025-12-02 10:51:44.971198261 +0000 UTC m=+0.906853636 container died c6ee9e7291a819a1f02d10926f63fcc20fd1a45d6ebd815286d1cf6efbc6011a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  2 05:51:44 np0005542249 systemd[1]: var-lib-containers-storage-overlay-eabc2c51f0f5fe4e48eabdab99b8c30c491cd39ca21275646481f823675a8575-merged.mount: Deactivated successfully.
Dec  2 05:51:45 np0005542249 podman[100181]: 2025-12-02 10:51:45.019967027 +0000 UTC m=+0.955622422 container remove c6ee9e7291a819a1f02d10926f63fcc20fd1a45d6ebd815286d1cf6efbc6011a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:45 np0005542249 systemd[1]: libpod-conmon-c6ee9e7291a819a1f02d10926f63fcc20fd1a45d6ebd815286d1cf6efbc6011a.scope: Deactivated successfully.
Dec  2 05:51:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:51:45 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/1504350899' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec  2 05:51:45 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 4.b scrub starts
Dec  2 05:51:45 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 4.b scrub ok
Dec  2 05:51:45 np0005542249 podman[100435]: 2025-12-02 10:51:45.584721752 +0000 UTC m=+0.022992696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:51:45 np0005542249 podman[100435]: 2025-12-02 10:51:45.71008843 +0000 UTC m=+0.148359394 container create 6561176f333f8e5268456b2c14a67a6ca9cdf44aeb7435c34b8acbbff5830cb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:51:45 np0005542249 systemd[1]: Started libpod-conmon-6561176f333f8e5268456b2c14a67a6ca9cdf44aeb7435c34b8acbbff5830cb0.scope.
Dec  2 05:51:45 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:45 np0005542249 podman[100435]: 2025-12-02 10:51:45.836535308 +0000 UTC m=+0.274806282 container init 6561176f333f8e5268456b2c14a67a6ca9cdf44aeb7435c34b8acbbff5830cb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  2 05:51:45 np0005542249 podman[100435]: 2025-12-02 10:51:45.844917646 +0000 UTC m=+0.283188590 container start 6561176f333f8e5268456b2c14a67a6ca9cdf44aeb7435c34b8acbbff5830cb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_moore, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:51:45 np0005542249 podman[100435]: 2025-12-02 10:51:45.848205625 +0000 UTC m=+0.286476609 container attach 6561176f333f8e5268456b2c14a67a6ca9cdf44aeb7435c34b8acbbff5830cb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_moore, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:51:45 np0005542249 affectionate_moore[100532]: 167 167
Dec  2 05:51:45 np0005542249 systemd[1]: libpod-6561176f333f8e5268456b2c14a67a6ca9cdf44aeb7435c34b8acbbff5830cb0.scope: Deactivated successfully.
Dec  2 05:51:45 np0005542249 podman[100435]: 2025-12-02 10:51:45.851302039 +0000 UTC m=+0.289573013 container died 6561176f333f8e5268456b2c14a67a6ca9cdf44aeb7435c34b8acbbff5830cb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_moore, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  2 05:51:45 np0005542249 systemd[1]: var-lib-containers-storage-overlay-f3207c29046d63ee4f7d7114c7e324e6d3b6554b71fdf6b2fec06f0d39abc4dc-merged.mount: Deactivated successfully.
Dec  2 05:51:45 np0005542249 podman[100435]: 2025-12-02 10:51:45.893667341 +0000 UTC m=+0.331938275 container remove 6561176f333f8e5268456b2c14a67a6ca9cdf44aeb7435c34b8acbbff5830cb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  2 05:51:45 np0005542249 systemd[1]: libpod-conmon-6561176f333f8e5268456b2c14a67a6ca9cdf44aeb7435c34b8acbbff5830cb0.scope: Deactivated successfully.
Dec  2 05:51:45 np0005542249 ansible-async_wrapper.py[100543]: Invoked with j487749912216 30 /home/zuul/.ansible/tmp/ansible-tmp-1764672705.4470956-36834-140412651013367/AnsiballZ_command.py _
Dec  2 05:51:45 np0005542249 ansible-async_wrapper.py[100562]: Starting module and watcher
Dec  2 05:51:45 np0005542249 ansible-async_wrapper.py[100562]: Start watching 100564 (30)
Dec  2 05:51:45 np0005542249 ansible-async_wrapper.py[100564]: Start module (100564)
Dec  2 05:51:45 np0005542249 ansible-async_wrapper.py[100543]: Return async_wrapper task started.
Dec  2 05:51:46 np0005542249 podman[100570]: 2025-12-02 10:51:46.070754936 +0000 UTC m=+0.044216063 container create 25647f4dab15a35faaa08a8e5d0f9e767ecc43a571f3e0abd0042102aefaa0fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sinoussi, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:51:46 np0005542249 systemd[1]: Started libpod-conmon-25647f4dab15a35faaa08a8e5d0f9e767ecc43a571f3e0abd0042102aefaa0fd.scope.
Dec  2 05:51:46 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:46 np0005542249 python3[100565]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:46 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bcaf363ec913726cfc46322183cfcacef6b8e63ca4a9f79a9d3648ab13527b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:46 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bcaf363ec913726cfc46322183cfcacef6b8e63ca4a9f79a9d3648ab13527b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:46 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bcaf363ec913726cfc46322183cfcacef6b8e63ca4a9f79a9d3648ab13527b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:46 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bcaf363ec913726cfc46322183cfcacef6b8e63ca4a9f79a9d3648ab13527b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:46 np0005542249 podman[100570]: 2025-12-02 10:51:46.049687103 +0000 UTC m=+0.023148260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:51:46 np0005542249 podman[100570]: 2025-12-02 10:51:46.1485017 +0000 UTC m=+0.121962827 container init 25647f4dab15a35faaa08a8e5d0f9e767ecc43a571f3e0abd0042102aefaa0fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 05:51:46 np0005542249 podman[100570]: 2025-12-02 10:51:46.159157559 +0000 UTC m=+0.132618676 container start 25647f4dab15a35faaa08a8e5d0f9e767ecc43a571f3e0abd0042102aefaa0fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:51:46 np0005542249 podman[100570]: 2025-12-02 10:51:46.162559762 +0000 UTC m=+0.136020879 container attach 25647f4dab15a35faaa08a8e5d0f9e767ecc43a571f3e0abd0042102aefaa0fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  2 05:51:46 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 4.c scrub starts
Dec  2 05:51:46 np0005542249 podman[100590]: 2025-12-02 10:51:46.190855311 +0000 UTC m=+0.040068471 container create 810ffcd95d19926f05be3dcbd7f0464117826a036f0435f4eb5139e843318a30 (image=quay.io/ceph/ceph:v18, name=dazzling_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  2 05:51:46 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 4.c scrub ok
Dec  2 05:51:46 np0005542249 systemd[1]: Started libpod-conmon-810ffcd95d19926f05be3dcbd7f0464117826a036f0435f4eb5139e843318a30.scope.
Dec  2 05:51:46 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:46 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad5c442bd4ce68f1c916c22409201d347eeb62a1cc1da88dcf1211be7a8ac3fa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:46 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad5c442bd4ce68f1c916c22409201d347eeb62a1cc1da88dcf1211be7a8ac3fa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:46 np0005542249 podman[100590]: 2025-12-02 10:51:46.17425643 +0000 UTC m=+0.023469600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:46 np0005542249 podman[100590]: 2025-12-02 10:51:46.276454878 +0000 UTC m=+0.125668048 container init 810ffcd95d19926f05be3dcbd7f0464117826a036f0435f4eb5139e843318a30 (image=quay.io/ceph/ceph:v18, name=dazzling_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  2 05:51:46 np0005542249 podman[100590]: 2025-12-02 10:51:46.284188139 +0000 UTC m=+0.133401299 container start 810ffcd95d19926f05be3dcbd7f0464117826a036f0435f4eb5139e843318a30 (image=quay.io/ceph/ceph:v18, name=dazzling_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:51:46 np0005542249 podman[100590]: 2025-12-02 10:51:46.291997061 +0000 UTC m=+0.141210211 container attach 810ffcd95d19926f05be3dcbd7f0464117826a036f0435f4eb5139e843318a30 (image=quay.io/ceph/ceph:v18, name=dazzling_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  2 05:51:46 np0005542249 ceph-mgr[75372]: [progress INFO root] Writing back 10 completed events
Dec  2 05:51:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  2 05:51:46 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:46 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  2 05:51:46 np0005542249 dazzling_lamarr[100608]: 
Dec  2 05:51:46 np0005542249 dazzling_lamarr[100608]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  2 05:51:46 np0005542249 systemd[1]: libpod-810ffcd95d19926f05be3dcbd7f0464117826a036f0435f4eb5139e843318a30.scope: Deactivated successfully.
Dec  2 05:51:46 np0005542249 podman[100641]: 2025-12-02 10:51:46.898925392 +0000 UTC m=+0.020580351 container died 810ffcd95d19926f05be3dcbd7f0464117826a036f0435f4eb5139e843318a30 (image=quay.io/ceph/ceph:v18, name=dazzling_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:46 np0005542249 systemd[1]: var-lib-containers-storage-overlay-ad5c442bd4ce68f1c916c22409201d347eeb62a1cc1da88dcf1211be7a8ac3fa-merged.mount: Deactivated successfully.
Dec  2 05:51:46 np0005542249 podman[100641]: 2025-12-02 10:51:46.935968409 +0000 UTC m=+0.057623368 container remove 810ffcd95d19926f05be3dcbd7f0464117826a036f0435f4eb5139e843318a30 (image=quay.io/ceph/ceph:v18, name=dazzling_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  2 05:51:46 np0005542249 systemd[1]: libpod-conmon-810ffcd95d19926f05be3dcbd7f0464117826a036f0435f4eb5139e843318a30.scope: Deactivated successfully.
Dec  2 05:51:46 np0005542249 ansible-async_wrapper.py[100564]: Module complete (100564)
Dec  2 05:51:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:51:47 np0005542249 naughty_sinoussi[100587]: {
Dec  2 05:51:47 np0005542249 naughty_sinoussi[100587]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 05:51:47 np0005542249 naughty_sinoussi[100587]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:51:47 np0005542249 naughty_sinoussi[100587]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 05:51:47 np0005542249 naughty_sinoussi[100587]:        "osd_id": 0,
Dec  2 05:51:47 np0005542249 naughty_sinoussi[100587]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 05:51:47 np0005542249 naughty_sinoussi[100587]:        "type": "bluestore"
Dec  2 05:51:47 np0005542249 naughty_sinoussi[100587]:    },
Dec  2 05:51:47 np0005542249 naughty_sinoussi[100587]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 05:51:47 np0005542249 naughty_sinoussi[100587]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:51:47 np0005542249 naughty_sinoussi[100587]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 05:51:47 np0005542249 naughty_sinoussi[100587]:        "osd_id": 2,
Dec  2 05:51:47 np0005542249 naughty_sinoussi[100587]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 05:51:47 np0005542249 naughty_sinoussi[100587]:        "type": "bluestore"
Dec  2 05:51:47 np0005542249 naughty_sinoussi[100587]:    },
Dec  2 05:51:47 np0005542249 naughty_sinoussi[100587]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 05:51:47 np0005542249 naughty_sinoussi[100587]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:51:47 np0005542249 naughty_sinoussi[100587]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 05:51:47 np0005542249 naughty_sinoussi[100587]:        "osd_id": 1,
Dec  2 05:51:47 np0005542249 naughty_sinoussi[100587]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 05:51:47 np0005542249 naughty_sinoussi[100587]:        "type": "bluestore"
Dec  2 05:51:47 np0005542249 naughty_sinoussi[100587]:    }
Dec  2 05:51:47 np0005542249 naughty_sinoussi[100587]: }
Dec  2 05:51:47 np0005542249 systemd[1]: libpod-25647f4dab15a35faaa08a8e5d0f9e767ecc43a571f3e0abd0042102aefaa0fd.scope: Deactivated successfully.
Dec  2 05:51:47 np0005542249 podman[100570]: 2025-12-02 10:51:47.09966815 +0000 UTC m=+1.073129277 container died 25647f4dab15a35faaa08a8e5d0f9e767ecc43a571f3e0abd0042102aefaa0fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sinoussi, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:51:47 np0005542249 systemd[1]: var-lib-containers-storage-overlay-0bcaf363ec913726cfc46322183cfcacef6b8e63ca4a9f79a9d3648ab13527b3-merged.mount: Deactivated successfully.
Dec  2 05:51:47 np0005542249 podman[100570]: 2025-12-02 10:51:47.159207969 +0000 UTC m=+1.132669086 container remove 25647f4dab15a35faaa08a8e5d0f9e767ecc43a571f3e0abd0042102aefaa0fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sinoussi, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  2 05:51:47 np0005542249 systemd[1]: libpod-conmon-25647f4dab15a35faaa08a8e5d0f9e767ecc43a571f3e0abd0042102aefaa0fd.scope: Deactivated successfully.
Dec  2 05:51:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:51:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:51:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:47 np0005542249 ceph-mgr[75372]: [progress INFO root] update: starting ev 5d9ccafc-bb6e-4743-a5fe-90654a93cfcb (Updating rgw.rgw deployment (+1 -> 1))
Dec  2 05:51:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ssuoka", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Dec  2 05:51:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ssuoka", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  2 05:51:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ssuoka", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  2 05:51:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Dec  2 05:51:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:51:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:51:47 np0005542249 ceph-mgr[75372]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.ssuoka on compute-0
Dec  2 05:51:47 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.ssuoka on compute-0
Dec  2 05:51:47 np0005542249 python3[100735]: ansible-ansible.legacy.async_status Invoked with jid=j487749912216.100543 mode=status _async_dir=/root/.ansible_async
Dec  2 05:51:47 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:47 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:47 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:47 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ssuoka", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  2 05:51:47 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ssuoka", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  2 05:51:47 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:47 np0005542249 python3[100862]: ansible-ansible.legacy.async_status Invoked with jid=j487749912216.100543 mode=cleanup _async_dir=/root/.ansible_async
Dec  2 05:51:47 np0005542249 podman[100925]: 2025-12-02 10:51:47.901997693 +0000 UTC m=+0.061345239 container create 2d2145dbe5356c8b5822673b8354541c37af1cbfaf43189c10029da5994a5150 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_dewdney, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:51:47 np0005542249 systemd[1]: Started libpod-conmon-2d2145dbe5356c8b5822673b8354541c37af1cbfaf43189c10029da5994a5150.scope.
Dec  2 05:51:47 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:47 np0005542249 podman[100925]: 2025-12-02 10:51:47.883837129 +0000 UTC m=+0.043184705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:51:47 np0005542249 podman[100925]: 2025-12-02 10:51:47.996526824 +0000 UTC m=+0.155874370 container init 2d2145dbe5356c8b5822673b8354541c37af1cbfaf43189c10029da5994a5150 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:51:48 np0005542249 podman[100925]: 2025-12-02 10:51:48.008903929 +0000 UTC m=+0.168251515 container start 2d2145dbe5356c8b5822673b8354541c37af1cbfaf43189c10029da5994a5150 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_dewdney, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:51:48 np0005542249 podman[100925]: 2025-12-02 10:51:48.012931889 +0000 UTC m=+0.172279435 container attach 2d2145dbe5356c8b5822673b8354541c37af1cbfaf43189c10029da5994a5150 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_dewdney, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:51:48 np0005542249 blissful_dewdney[100942]: 167 167
Dec  2 05:51:48 np0005542249 systemd[1]: libpod-2d2145dbe5356c8b5822673b8354541c37af1cbfaf43189c10029da5994a5150.scope: Deactivated successfully.
Dec  2 05:51:48 np0005542249 podman[100925]: 2025-12-02 10:51:48.015997742 +0000 UTC m=+0.175345288 container died 2d2145dbe5356c8b5822673b8354541c37af1cbfaf43189c10029da5994a5150 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_dewdney, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  2 05:51:48 np0005542249 systemd[1]: var-lib-containers-storage-overlay-c87cabe0f2dc7fd0c58730478d9c0ca65f5aaba1226a02cb51bca02497053114-merged.mount: Deactivated successfully.
Dec  2 05:51:48 np0005542249 podman[100925]: 2025-12-02 10:51:48.051110948 +0000 UTC m=+0.210458534 container remove 2d2145dbe5356c8b5822673b8354541c37af1cbfaf43189c10029da5994a5150 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_dewdney, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  2 05:51:48 np0005542249 systemd[1]: libpod-conmon-2d2145dbe5356c8b5822673b8354541c37af1cbfaf43189c10029da5994a5150.scope: Deactivated successfully.
Dec  2 05:51:48 np0005542249 systemd[1]: Reloading.
Dec  2 05:51:48 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:51:48 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:51:48 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.d scrub starts
Dec  2 05:51:48 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.d scrub ok
Dec  2 05:51:48 np0005542249 ceph-mon[75081]: Deploying daemon rgw.rgw.compute-0.ssuoka on compute-0
Dec  2 05:51:48 np0005542249 systemd[1]: Reloading.
Dec  2 05:51:48 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:51:48 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:51:48 np0005542249 python3[101023]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:48 np0005542249 podman[101063]: 2025-12-02 10:51:48.63456556 +0000 UTC m=+0.063029675 container create a0c19b266b906e002d308103c742cea492ff678a3e930300181e5c07ed427a3f (image=quay.io/ceph/ceph:v18, name=reverent_yonath, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  2 05:51:48 np0005542249 systemd[1]: Started libpod-conmon-a0c19b266b906e002d308103c742cea492ff678a3e930300181e5c07ed427a3f.scope.
Dec  2 05:51:48 np0005542249 systemd[1]: Starting Ceph rgw.rgw.compute-0.ssuoka for 95bc4eaa-1a14-59bf-acf2-4b3da055547d...
Dec  2 05:51:48 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:48 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6532f07c264f702ba5868701c4b121cb0b19c2d5997949371014106e887f64f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:48 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6532f07c264f702ba5868701c4b121cb0b19c2d5997949371014106e887f64f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:48 np0005542249 podman[101063]: 2025-12-02 10:51:48.614813993 +0000 UTC m=+0.043278138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:48 np0005542249 podman[101063]: 2025-12-02 10:51:48.723619592 +0000 UTC m=+0.152083707 container init a0c19b266b906e002d308103c742cea492ff678a3e930300181e5c07ed427a3f (image=quay.io/ceph/ceph:v18, name=reverent_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  2 05:51:48 np0005542249 podman[101063]: 2025-12-02 10:51:48.732164144 +0000 UTC m=+0.160628259 container start a0c19b266b906e002d308103c742cea492ff678a3e930300181e5c07ed427a3f (image=quay.io/ceph/ceph:v18, name=reverent_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:51:48 np0005542249 podman[101063]: 2025-12-02 10:51:48.736115011 +0000 UTC m=+0.164579126 container attach a0c19b266b906e002d308103c742cea492ff678a3e930300181e5c07ed427a3f (image=quay.io/ceph/ceph:v18, name=reverent_yonath, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:51:48 np0005542249 podman[101132]: 2025-12-02 10:51:48.897820377 +0000 UTC m=+0.048713835 container create ab2d5290b253be90bd5477bcee4a9bca9d7800a28bd9dd88c5ef94245f38483b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-rgw-rgw-compute-0-ssuoka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 05:51:48 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31b9657d89fcfd28213277198afea8a63e49510566fd20ad37c0b0e7e17bf716/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:48 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31b9657d89fcfd28213277198afea8a63e49510566fd20ad37c0b0e7e17bf716/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:48 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31b9657d89fcfd28213277198afea8a63e49510566fd20ad37c0b0e7e17bf716/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:48 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31b9657d89fcfd28213277198afea8a63e49510566fd20ad37c0b0e7e17bf716/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.ssuoka supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:48 np0005542249 podman[101132]: 2025-12-02 10:51:48.963795071 +0000 UTC m=+0.114688509 container init ab2d5290b253be90bd5477bcee4a9bca9d7800a28bd9dd88c5ef94245f38483b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-rgw-rgw-compute-0-ssuoka, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  2 05:51:48 np0005542249 podman[101132]: 2025-12-02 10:51:48.96927275 +0000 UTC m=+0.120166188 container start ab2d5290b253be90bd5477bcee4a9bca9d7800a28bd9dd88c5ef94245f38483b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-rgw-rgw-compute-0-ssuoka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:51:48 np0005542249 podman[101132]: 2025-12-02 10:51:48.875448529 +0000 UTC m=+0.026341977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:51:48 np0005542249 bash[101132]: ab2d5290b253be90bd5477bcee4a9bca9d7800a28bd9dd88c5ef94245f38483b
Dec  2 05:51:48 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Dec  2 05:51:48 np0005542249 systemd[1]: Started Ceph rgw.rgw.compute-0.ssuoka for 95bc4eaa-1a14-59bf-acf2-4b3da055547d.
Dec  2 05:51:48 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Dec  2 05:51:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:51:49 np0005542249 radosgw[101151]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec  2 05:51:49 np0005542249 radosgw[101151]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Dec  2 05:51:49 np0005542249 radosgw[101151]: framework: beast
Dec  2 05:51:49 np0005542249 radosgw[101151]: framework conf key: endpoint, val: 192.168.122.100:8082
Dec  2 05:51:49 np0005542249 radosgw[101151]: init_numa not setting numa affinity
Dec  2 05:51:49 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:51:49 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec  2 05:51:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:51:49 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:49 np0005542249 ceph-mgr[75372]: [progress INFO root] complete: finished ev 5d9ccafc-bb6e-4743-a5fe-90654a93cfcb (Updating rgw.rgw deployment (+1 -> 1))
Dec  2 05:51:49 np0005542249 ceph-mgr[75372]: [progress INFO root] Completed event 5d9ccafc-bb6e-4743-a5fe-90654a93cfcb (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Dec  2 05:51:49 np0005542249 ceph-mgr[75372]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Dec  2 05:51:49 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Dec  2 05:51:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec  2 05:51:49 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec  2 05:51:49 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:49 np0005542249 ceph-mgr[75372]: [progress INFO root] update: starting ev 84e7aa6d-321c-4e5b-be91-e1db9d577cab (Updating mds.cephfs deployment (+1 -> 1))
Dec  2 05:51:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bydekr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Dec  2 05:51:49 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bydekr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  2 05:51:49 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bydekr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  2 05:51:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:51:49 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:51:49 np0005542249 ceph-mgr[75372]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.bydekr on compute-0
Dec  2 05:51:49 np0005542249 ceph-mgr[75372]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.bydekr on compute-0
Dec  2 05:51:49 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Dec  2 05:51:49 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Dec  2 05:51:49 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.14263 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  2 05:51:49 np0005542249 reverent_yonath[101080]: 
Dec  2 05:51:49 np0005542249 reverent_yonath[101080]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  2 05:51:49 np0005542249 systemd[1]: libpod-a0c19b266b906e002d308103c742cea492ff678a3e930300181e5c07ed427a3f.scope: Deactivated successfully.
Dec  2 05:51:49 np0005542249 podman[101063]: 2025-12-02 10:51:49.328301102 +0000 UTC m=+0.756765227 container died a0c19b266b906e002d308103c742cea492ff678a3e930300181e5c07ed427a3f (image=quay.io/ceph/ceph:v18, name=reverent_yonath, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:51:49 np0005542249 systemd[1]: var-lib-containers-storage-overlay-f6532f07c264f702ba5868701c4b121cb0b19c2d5997949371014106e887f64f-merged.mount: Deactivated successfully.
Dec  2 05:51:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:51:49 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec  2 05:51:49 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec  2 05:51:49 np0005542249 podman[101063]: 2025-12-02 10:51:49.951144266 +0000 UTC m=+1.379608391 container remove a0c19b266b906e002d308103c742cea492ff678a3e930300181e5c07ed427a3f (image=quay.io/ceph/ceph:v18, name=reverent_yonath, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:51:49 np0005542249 systemd[1]: libpod-conmon-a0c19b266b906e002d308103c742cea492ff678a3e930300181e5c07ed427a3f.scope: Deactivated successfully.
Dec  2 05:51:50 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:50 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:50 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:50 np0005542249 ceph-mon[75081]: Saving service rgw.rgw spec with placement compute-0
Dec  2 05:51:50 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:50 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:50 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bydekr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  2 05:51:50 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bydekr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  2 05:51:50 np0005542249 ceph-mon[75081]: Deploying daemon mds.cephfs.compute-0.bydekr on compute-0
Dec  2 05:51:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Dec  2 05:51:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Dec  2 05:51:50 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Dec  2 05:51:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Dec  2 05:51:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4085680628' entity='client.rgw.rgw.compute-0.ssuoka' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec  2 05:51:50 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 46 pg[8.0( empty local-lis/les=0/0 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [1] r=0 lpr=46 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:50 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 4.16 deep-scrub starts
Dec  2 05:51:50 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 4.16 deep-scrub ok
Dec  2 05:51:50 np0005542249 podman[101386]: 2025-12-02 10:51:50.288114196 +0000 UTC m=+0.052922030 container create 14a544b9f6d9694082d38f794193ddaaf09d9d51584f685326683839bd989522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:51:50 np0005542249 systemd[1]: Started libpod-conmon-14a544b9f6d9694082d38f794193ddaaf09d9d51584f685326683839bd989522.scope.
Dec  2 05:51:50 np0005542249 podman[101386]: 2025-12-02 10:51:50.25624731 +0000 UTC m=+0.021055124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:51:50 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Dec  2 05:51:50 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:50 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Dec  2 05:51:50 np0005542249 podman[101386]: 2025-12-02 10:51:50.384828006 +0000 UTC m=+0.149635830 container init 14a544b9f6d9694082d38f794193ddaaf09d9d51584f685326683839bd989522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:51:50 np0005542249 podman[101386]: 2025-12-02 10:51:50.397909032 +0000 UTC m=+0.162716836 container start 14a544b9f6d9694082d38f794193ddaaf09d9d51584f685326683839bd989522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ganguly, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Dec  2 05:51:50 np0005542249 podman[101386]: 2025-12-02 10:51:50.402590259 +0000 UTC m=+0.167398073 container attach 14a544b9f6d9694082d38f794193ddaaf09d9d51584f685326683839bd989522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ganguly, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  2 05:51:50 np0005542249 systemd[1]: libpod-14a544b9f6d9694082d38f794193ddaaf09d9d51584f685326683839bd989522.scope: Deactivated successfully.
Dec  2 05:51:50 np0005542249 mystifying_ganguly[101402]: 167 167
Dec  2 05:51:50 np0005542249 conmon[101402]: conmon 14a544b9f6d9694082d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-14a544b9f6d9694082d38f794193ddaaf09d9d51584f685326683839bd989522.scope/container/memory.events
Dec  2 05:51:50 np0005542249 podman[101409]: 2025-12-02 10:51:50.454946012 +0000 UTC m=+0.037273684 container died 14a544b9f6d9694082d38f794193ddaaf09d9d51584f685326683839bd989522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ganguly, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:51:50 np0005542249 systemd[1]: var-lib-containers-storage-overlay-f42e5e676a5cf08232329ba0c6470537c79ec90b3d090aa815708c8fd26a1ac4-merged.mount: Deactivated successfully.
Dec  2 05:51:50 np0005542249 podman[101409]: 2025-12-02 10:51:50.497910821 +0000 UTC m=+0.080238473 container remove 14a544b9f6d9694082d38f794193ddaaf09d9d51584f685326683839bd989522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:50 np0005542249 systemd[1]: libpod-conmon-14a544b9f6d9694082d38f794193ddaaf09d9d51584f685326683839bd989522.scope: Deactivated successfully.
Dec  2 05:51:50 np0005542249 systemd[1]: Reloading.
Dec  2 05:51:50 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:51:50 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:51:50 np0005542249 systemd[1]: Reloading.
Dec  2 05:51:51 np0005542249 ansible-async_wrapper.py[100562]: Done in kid B.
Dec  2 05:51:51 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:51:51 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:51:51 np0005542249 python3[101488]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v109: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:51:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Dec  2 05:51:51 np0005542249 ceph-mon[75081]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  2 05:51:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4085680628' entity='client.rgw.rgw.compute-0.ssuoka' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec  2 05:51:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Dec  2 05:51:51 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Dec  2 05:51:51 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/4085680628' entity='client.rgw.rgw.compute-0.ssuoka' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec  2 05:51:51 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 47 pg[8.0( empty local-lis/les=46/47 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [1] r=0 lpr=46 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:51 np0005542249 podman[101526]: 2025-12-02 10:51:51.101271905 +0000 UTC m=+0.034745506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:51 np0005542249 systemd[1]: Starting Ceph mds.cephfs.compute-0.bydekr for 95bc4eaa-1a14-59bf-acf2-4b3da055547d...
Dec  2 05:51:51 np0005542249 ceph-mgr[75372]: [progress INFO root] Writing back 11 completed events
Dec  2 05:51:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  2 05:51:52 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec  2 05:51:52 np0005542249 podman[101526]: 2025-12-02 10:51:52.581140789 +0000 UTC m=+1.514614350 container create c1dd0405568aef83b40c38b5dfb42d948d364851832cb887f5d4a54730cd9b90 (image=quay.io/ceph/ceph:v18, name=sleepy_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:51:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Dec  2 05:51:52 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec  2 05:51:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:52 np0005542249 ceph-mgr[75372]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Dec  2 05:51:52 np0005542249 systemd[1]: Started libpod-conmon-c1dd0405568aef83b40c38b5dfb42d948d364851832cb887f5d4a54730cd9b90.scope.
Dec  2 05:51:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Dec  2 05:51:52 np0005542249 ceph-mon[75081]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  2 05:51:52 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/4085680628' entity='client.rgw.rgw.compute-0.ssuoka' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec  2 05:51:52 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Dec  2 05:51:52 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:52 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 48 pg[9.0( empty local-lis/les=0/0 n=0 ec=48/48 lis/c=0/0 les/c/f=0/0/0 sis=48) [1] r=0 lpr=48 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Dec  2 05:51:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4085680628' entity='client.rgw.rgw.compute-0.ssuoka' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  2 05:51:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c82b14f28f64a4a34f74b76d5a15b68e4ee5854bd65a2fe009405eee08f9361f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c82b14f28f64a4a34f74b76d5a15b68e4ee5854bd65a2fe009405eee08f9361f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:52 np0005542249 podman[101526]: 2025-12-02 10:51:52.702275952 +0000 UTC m=+1.635749503 container init c1dd0405568aef83b40c38b5dfb42d948d364851832cb887f5d4a54730cd9b90 (image=quay.io/ceph/ceph:v18, name=sleepy_kowalevski, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:51:52 np0005542249 podman[101526]: 2025-12-02 10:51:52.712810609 +0000 UTC m=+1.646284130 container start c1dd0405568aef83b40c38b5dfb42d948d364851832cb887f5d4a54730cd9b90 (image=quay.io/ceph/ceph:v18, name=sleepy_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  2 05:51:52 np0005542249 podman[101526]: 2025-12-02 10:51:52.717135366 +0000 UTC m=+1.650608887 container attach c1dd0405568aef83b40c38b5dfb42d948d364851832cb887f5d4a54730cd9b90 (image=quay.io/ceph/ceph:v18, name=sleepy_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:51:52 np0005542249 podman[101594]: 2025-12-02 10:51:52.797529812 +0000 UTC m=+0.041826768 container create f64fbb716bbb10d4999b599073f025f197d0e4bf9a2dfe0b7e86e89fb078831c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mds-cephfs-compute-0-bydekr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Dec  2 05:51:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9bb8973f1dd8a5b5993a5ce1e212a1446bde079af117b95eabd62e864b7b797/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9bb8973f1dd8a5b5993a5ce1e212a1446bde079af117b95eabd62e864b7b797/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9bb8973f1dd8a5b5993a5ce1e212a1446bde079af117b95eabd62e864b7b797/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9bb8973f1dd8a5b5993a5ce1e212a1446bde079af117b95eabd62e864b7b797/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.bydekr supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:52 np0005542249 podman[101594]: 2025-12-02 10:51:52.862866278 +0000 UTC m=+0.107163254 container init f64fbb716bbb10d4999b599073f025f197d0e4bf9a2dfe0b7e86e89fb078831c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mds-cephfs-compute-0-bydekr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  2 05:51:52 np0005542249 podman[101594]: 2025-12-02 10:51:52.867410792 +0000 UTC m=+0.111707748 container start f64fbb716bbb10d4999b599073f025f197d0e4bf9a2dfe0b7e86e89fb078831c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mds-cephfs-compute-0-bydekr, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:51:52 np0005542249 podman[101594]: 2025-12-02 10:51:52.778902666 +0000 UTC m=+0.023199642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:51:52 np0005542249 bash[101594]: f64fbb716bbb10d4999b599073f025f197d0e4bf9a2dfe0b7e86e89fb078831c
Dec  2 05:51:52 np0005542249 systemd[1]: Started Ceph mds.cephfs.compute-0.bydekr for 95bc4eaa-1a14-59bf-acf2-4b3da055547d.
Dec  2 05:51:52 np0005542249 ceph-mds[101614]: set uid:gid to 167:167 (ceph:ceph)
Dec  2 05:51:52 np0005542249 ceph-mds[101614]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Dec  2 05:51:52 np0005542249 ceph-mds[101614]: main not setting numa affinity
Dec  2 05:51:52 np0005542249 ceph-mds[101614]: pidfile_write: ignore empty --pid-file
Dec  2 05:51:52 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mds-cephfs-compute-0-bydekr[101610]: starting mds.cephfs.compute-0.bydekr at 
Dec  2 05:51:52 np0005542249 ceph-mds[101614]: mds.cephfs.compute-0.bydekr Updating MDS map to version 2 from mon.0
Dec  2 05:51:53 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec  2 05:51:53 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec  2 05:51:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v112: 195 pgs: 1 unknown, 194 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 682 B/s wr, 1 op/s
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:53 np0005542249 ceph-mgr[75372]: [progress INFO root] complete: finished ev 84e7aa6d-321c-4e5b-be91-e1db9d577cab (Updating mds.cephfs deployment (+1 -> 1))
Dec  2 05:51:53 np0005542249 ceph-mgr[75372]: [progress INFO root] Completed event 84e7aa6d-321c-4e5b-be91-e1db9d577cab (Updating mds.cephfs deployment (+1 -> 1)) in 4 seconds
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:53 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 4.17 deep-scrub starts
Dec  2 05:51:53 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 4.17 deep-scrub ok
Dec  2 05:51:53 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  2 05:51:53 np0005542249 sleepy_kowalevski[101565]: 
Dec  2 05:51:53 np0005542249 sleepy_kowalevski[101565]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Dec  2 05:51:53 np0005542249 systemd[1]: libpod-c1dd0405568aef83b40c38b5dfb42d948d364851832cb887f5d4a54730cd9b90.scope: Deactivated successfully.
Dec  2 05:51:53 np0005542249 podman[101526]: 2025-12-02 10:51:53.382876857 +0000 UTC m=+2.316350388 container died c1dd0405568aef83b40c38b5dfb42d948d364851832cb887f5d4a54730cd9b90 (image=quay.io/ceph/ceph:v18, name=sleepy_kowalevski, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  2 05:51:53 np0005542249 systemd[1]: var-lib-containers-storage-overlay-c82b14f28f64a4a34f74b76d5a15b68e4ee5854bd65a2fe009405eee08f9361f-merged.mount: Deactivated successfully.
Dec  2 05:51:53 np0005542249 podman[101526]: 2025-12-02 10:51:53.463617211 +0000 UTC m=+2.397090732 container remove c1dd0405568aef83b40c38b5dfb42d948d364851832cb887f5d4a54730cd9b90 (image=quay.io/ceph/ceph:v18, name=sleepy_kowalevski, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Dec  2 05:51:53 np0005542249 systemd[1]: libpod-conmon-c1dd0405568aef83b40c38b5dfb42d948d364851832cb887f5d4a54730cd9b90.scope: Deactivated successfully.
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).mds e3 new map
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-02T10:51:36.281920+0000#012modified#0112025-12-02T10:51:36.281965+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.bydekr{-1:14265} state up:standby seq 1 addr [v2:192.168.122.100:6814/1316625427,v1:192.168.122.100:6815/1316625427] compat {c=[1],r=[1],i=[7ff]}]
Dec  2 05:51:53 np0005542249 ceph-mds[101614]: mds.cephfs.compute-0.bydekr Updating MDS map to version 3 from mon.0
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1316625427,v1:192.168.122.100:6815/1316625427] up:boot
Dec  2 05:51:53 np0005542249 ceph-mds[101614]: mds.cephfs.compute-0.bydekr Monitors have assigned me to become a standby.
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/1316625427,v1:192.168.122.100:6815/1316625427] as mds.0
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.bydekr assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.bydekr"} v 0) v1
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.bydekr"}]: dispatch
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).mds e3 all = 0
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).mds e4 new map
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-02T10:51:36.281920+0000#012modified#0112025-12-02T10:51:53.682311+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14265}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.bydekr{0:14265} state up:creating seq 1 addr [v2:192.168.122.100:6814/1316625427,v1:192.168.122.100:6815/1316625427] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4085680628' entity='client.rgw.rgw.compute-0.ssuoka' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Dec  2 05:51:53 np0005542249 ceph-mds[101614]: mds.cephfs.compute-0.bydekr Updating MDS map to version 4 from mon.0
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/4085680628' entity='client.rgw.rgw.compute-0.ssuoka' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:53 np0005542249 ceph-mds[101614]: mds.0.4 handle_mds_map i am now mds.0.4
Dec  2 05:51:53 np0005542249 ceph-mds[101614]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Dec  2 05:51:53 np0005542249 ceph-mds[101614]: mds.0.cache creating system inode with ino:0x1
Dec  2 05:51:53 np0005542249 ceph-mds[101614]: mds.0.cache creating system inode with ino:0x100
Dec  2 05:51:53 np0005542249 ceph-mds[101614]: mds.0.cache creating system inode with ino:0x600
Dec  2 05:51:53 np0005542249 ceph-mds[101614]: mds.0.cache creating system inode with ino:0x601
Dec  2 05:51:53 np0005542249 ceph-mds[101614]: mds.0.cache creating system inode with ino:0x602
Dec  2 05:51:53 np0005542249 ceph-mds[101614]: mds.0.cache creating system inode with ino:0x603
Dec  2 05:51:53 np0005542249 ceph-mds[101614]: mds.0.cache creating system inode with ino:0x604
Dec  2 05:51:53 np0005542249 ceph-mds[101614]: mds.0.cache creating system inode with ino:0x605
Dec  2 05:51:53 np0005542249 ceph-mds[101614]: mds.0.cache creating system inode with ino:0x606
Dec  2 05:51:53 np0005542249 ceph-mds[101614]: mds.0.cache creating system inode with ino:0x607
Dec  2 05:51:53 np0005542249 ceph-mds[101614]: mds.0.cache creating system inode with ino:0x608
Dec  2 05:51:53 np0005542249 ceph-mds[101614]: mds.0.cache creating system inode with ino:0x609
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Dec  2 05:51:53 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.bydekr=up:creating}
Dec  2 05:51:53 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 49 pg[9.0( empty local-lis/les=48/49 n=0 ec=48/48 lis/c=0/0 les/c/f=0/0/0 sis=48) [1] r=0 lpr=48 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:54 np0005542249 ceph-mds[101614]: mds.0.4 creating_done
Dec  2 05:51:54 np0005542249 ceph-mon[75081]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.bydekr is now active in filesystem cephfs as rank 0
Dec  2 05:51:54 np0005542249 podman[101900]: 2025-12-02 10:51:54.219308667 +0000 UTC m=+0.094553492 container exec cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 05:51:54 np0005542249 podman[101900]: 2025-12-02 10:51:54.338181549 +0000 UTC m=+0.213426344 container exec_died cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:54 np0005542249 python3[101945]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:54 np0005542249 podman[101965]: 2025-12-02 10:51:54.487337344 +0000 UTC m=+0.050402331 container create 9ed66bf543d99318e42a930f0bcd983866fb6cf1cabc6dd363032fffc65dea47 (image=quay.io/ceph/ceph:v18, name=eloquent_hamilton, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:51:54 np0005542249 podman[101965]: 2025-12-02 10:51:54.46622829 +0000 UTC m=+0.029293327 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:54 np0005542249 systemd[1]: Started libpod-conmon-9ed66bf543d99318e42a930f0bcd983866fb6cf1cabc6dd363032fffc65dea47.scope.
Dec  2 05:51:54 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:54 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21100000823bacb9664aa047d36aca0df7702c85523f491baf37393f5701b5cf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:54 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21100000823bacb9664aa047d36aca0df7702c85523f491baf37393f5701b5cf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:54 np0005542249 podman[101965]: 2025-12-02 10:51:54.671923513 +0000 UTC m=+0.234988560 container init 9ed66bf543d99318e42a930f0bcd983866fb6cf1cabc6dd363032fffc65dea47 (image=quay.io/ceph/ceph:v18, name=eloquent_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:51:54 np0005542249 podman[101965]: 2025-12-02 10:51:54.684742181 +0000 UTC m=+0.247807148 container start 9ed66bf543d99318e42a930f0bcd983866fb6cf1cabc6dd363032fffc65dea47 (image=quay.io/ceph/ceph:v18, name=eloquent_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:51:54 np0005542249 podman[101965]: 2025-12-02 10:51:54.692375789 +0000 UTC m=+0.255440776 container attach 9ed66bf543d99318e42a930f0bcd983866fb6cf1cabc6dd363032fffc65dea47 (image=quay.io/ceph/ceph:v18, name=eloquent_hamilton, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  2 05:51:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:51:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Dec  2 05:51:54 np0005542249 ceph-mon[75081]: daemon mds.cephfs.compute-0.bydekr assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec  2 05:51:54 np0005542249 ceph-mon[75081]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec  2 05:51:54 np0005542249 ceph-mon[75081]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec  2 05:51:54 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/4085680628' entity='client.rgw.rgw.compute-0.ssuoka' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  2 05:51:54 np0005542249 ceph-mon[75081]: daemon mds.cephfs.compute-0.bydekr is now active in filesystem cephfs as rank 0
Dec  2 05:51:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).mds e5 new map
Dec  2 05:51:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-02T10:51:36.281920+0000#012modified#0112025-12-02T10:51:54.935602+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14265}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.bydekr{0:14265} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/1316625427,v1:192.168.122.100:6815/1316625427] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Dec  2 05:51:55 np0005542249 ceph-mds[101614]: mds.cephfs.compute-0.bydekr Updating MDS map to version 5 from mon.0
Dec  2 05:51:55 np0005542249 ceph-mds[101614]: mds.0.4 handle_mds_map i am now mds.0.4
Dec  2 05:51:55 np0005542249 ceph-mds[101614]: mds.0.4 handle_mds_map state change up:creating --> up:active
Dec  2 05:51:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Dec  2 05:51:55 np0005542249 ceph-mds[101614]: mds.0.4 recovery_done -- successful recovery!
Dec  2 05:51:55 np0005542249 ceph-mds[101614]: mds.0.4 active_start
Dec  2 05:51:55 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 5.6 deep-scrub starts
Dec  2 05:51:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v114: 195 pgs: 1 unknown, 194 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 820 B/s rd, 820 B/s wr, 1 op/s
Dec  2 05:51:55 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 5.6 deep-scrub ok
Dec  2 05:51:55 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Dec  2 05:51:55 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1316625427,v1:192.168.122.100:6815/1316625427] up:active
Dec  2 05:51:55 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.bydekr=up:active}
Dec  2 05:51:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Dec  2 05:51:55 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4085680628' entity='client.rgw.rgw.compute-0.ssuoka' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  2 05:51:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:51:55 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.14269 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  2 05:51:55 np0005542249 eloquent_hamilton[102013]: 
Dec  2 05:51:55 np0005542249 eloquent_hamilton[102013]: [{"container_id": "3db284894563", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.36%", "created": "2025-12-02T10:49:54.534763Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-12-02T10:49:54.595943Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-02T10:51:55.208440Z", "memory_usage": 11649679, "ports": [], "service_name": "crash", "started": "2025-12-02T10:49:54.448527Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d@crash.compute-0", "version": "18.2.7"}, {"container_id": "f64fbb716bbb", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "5.29%", "created": "2025-12-02T10:51:52.888920Z", "daemon_id": "cephfs.compute-0.bydekr", "daemon_name": "mds.cephfs.compute-0.bydekr", "daemon_type": "mds", "events": ["2025-12-02T10:51:53.131588Z daemon:mds.cephfs.compute-0.bydekr [INFO] \"Deployed mds.cephfs.compute-0.bydekr on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-02T10:51:55.209290Z", 
"memory_usage": 13484687, "ports": [], "service_name": "mds.cephfs", "started": "2025-12-02T10:51:52.783417Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d@mds.cephfs.compute-0.bydekr", "version": "18.2.7"}, {"container_id": "e03605b236b5", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "25.21%", "created": "2025-12-02T10:48:41.492848Z", "daemon_id": "compute-0.ntxcvs", "daemon_name": "mgr.compute-0.ntxcvs", "daemon_type": "mgr", "events": ["2025-12-02T10:49:59.869501Z daemon:mgr.compute-0.ntxcvs [INFO] \"Reconfigured mgr.compute-0.ntxcvs on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-02T10:51:55.208296Z", "memory_usage": 550502400, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-12-02T10:48:41.384416Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d@mgr.compute-0.ntxcvs", "version": "18.2.7"}, {"container_id": "cfead6f8cdae", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "2.12%", "created": "2025-12-02T10:48:35.876701Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-12-02T10:49:58.926862Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": 
false, "last_refresh": "2025-12-02T10:51:55.208078Z", "memory_request": 2147483648, "memory_usage": 40863006, "ports": [], "service_name": "mon", "started": "2025-12-02T10:48:38.981452Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d@mon.compute-0", "version": "18.2.7"}, {"container_id": "13b62c875bf2", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.57%", "created": "2025-12-02T10:50:23.719543Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-12-02T10:50:23.790320Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-02T10:51:55.208573Z", "memory_request": 4294967296, "memory_usage": 69478645, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-02T10:50:23.587808Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d@osd.0", "version": "18.2.7"}, {"container_id": "2ab4bdfb1336", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.64%", "created": "2025-12-02T10:50:29.072687Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": 
["2025-12-02T10:50:29.564461Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-02T10:51:55.208700Z", "memory_request": 4294967296, "memory_usage": 66710405, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-02T10:50:28.886430Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d@osd.1", "version": "18.2.7"}, {"container_id": "227e7141028a", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.62%", "created": "2025-12-02T10:50:34.816841Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-12-02T10:50:34.926486Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-02T10:51:55.208828Z", "memory_request": 4294967296, "memory_usage": 65494056, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-02T10:50:34.634229Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d@osd.2", "version": "18.2.7"}, {"container_id": "ab2d5290b253", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", 
"cpu_percentage": "1.73%", "created": "2025-12-02T10:51:48.986817Z", "daemon_id": "rgw.compute-0.ssuoka", "daemon_name": "rgw.rgw.compute-0.ssuoka", "daemon_type": "rgw", "events": ["2025-12-02T
Dec  2 05:51:55 np0005542249 systemd[1]: libpod-9ed66bf543d99318e42a930f0bcd983866fb6cf1cabc6dd363032fffc65dea47.scope: Deactivated successfully.
Dec  2 05:51:55 np0005542249 podman[101965]: 2025-12-02 10:51:55.286945594 +0000 UTC m=+0.850010641 container died 9ed66bf543d99318e42a930f0bcd983866fb6cf1cabc6dd363032fffc65dea47 (image=quay.io/ceph/ceph:v18, name=eloquent_hamilton, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  2 05:51:55 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 50 pg[10.0( empty local-lis/les=0/0 n=0 ec=50/50 lis/c=0/0 les/c/f=0/0/0 sis=50) [2] r=0 lpr=50 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:55 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:51:55 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Dec  2 05:51:55 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Dec  2 05:51:55 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:55 np0005542249 systemd[1]: var-lib-containers-storage-overlay-21100000823bacb9664aa047d36aca0df7702c85523f491baf37393f5701b5cf-merged.mount: Deactivated successfully.
Dec  2 05:51:55 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Dec  2 05:51:55 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Dec  2 05:51:55 np0005542249 podman[101965]: 2025-12-02 10:51:55.365110989 +0000 UTC m=+0.928175966 container remove 9ed66bf543d99318e42a930f0bcd983866fb6cf1cabc6dd363032fffc65dea47 (image=quay.io/ceph/ceph:v18, name=eloquent_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:51:55 np0005542249 systemd[1]: libpod-conmon-9ed66bf543d99318e42a930f0bcd983866fb6cf1cabc6dd363032fffc65dea47.scope: Deactivated successfully.
Dec  2 05:51:55 np0005542249 rsyslogd[1005]: message too long (8588) with configured size 8096, begin of message is: [{"container_id": "3db284894563", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  2 05:51:56 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Dec  2 05:51:56 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Dec  2 05:51:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Dec  2 05:51:56 np0005542249 ceph-mon[75081]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  2 05:51:56 np0005542249 ceph-mon[75081]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  2 05:51:56 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/4085680628' entity='client.rgw.rgw.compute-0.ssuoka' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  2 05:51:56 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:56 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4085680628' entity='client.rgw.rgw.compute-0.ssuoka' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  2 05:51:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Dec  2 05:51:56 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Dec  2 05:51:56 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 51 pg[10.0( empty local-lis/les=50/51 n=0 ec=50/50 lis/c=0/0 les/c/f=0/0/0 sis=50) [2] r=0 lpr=50 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:51:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:51:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 05:51:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:51:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 05:51:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:51:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:51:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:51:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:51:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:51:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:51:56 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.14 deep-scrub starts
Dec  2 05:51:56 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.14 deep-scrub ok
Dec  2 05:51:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:56 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 85f6c395-229d-4e61-9fdf-27ef791e44ef does not exist
Dec  2 05:51:56 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev faa9cb91-7ae0-4d6b-8bdd-768b3254aa98 does not exist
Dec  2 05:51:56 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev c8eb82ae-0fca-44a5-a00f-9642a7e0291f does not exist
Dec  2 05:51:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 05:51:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 05:51:56 np0005542249 python3[102294]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 05:51:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 05:51:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:51:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:51:56 np0005542249 podman[102295]: 2025-12-02 10:51:56.467795019 +0000 UTC m=+0.037643334 container create 5ce7a5b0e42514041b13822cce6aff905eac4f716ac80a2cbe309a9636e11e1b (image=quay.io/ceph/ceph:v18, name=compassionate_almeida, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:51:56 np0005542249 systemd[1]: Started libpod-conmon-5ce7a5b0e42514041b13822cce6aff905eac4f716ac80a2cbe309a9636e11e1b.scope.
Dec  2 05:51:56 np0005542249 podman[102295]: 2025-12-02 10:51:56.451329281 +0000 UTC m=+0.021177616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:56 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:56 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0c2072318f3f73a75b7889841f8a10a7a38ed1f6d4128846ed30fc587e32687/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:56 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0c2072318f3f73a75b7889841f8a10a7a38ed1f6d4128846ed30fc587e32687/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:56 np0005542249 podman[102295]: 2025-12-02 10:51:56.717349364 +0000 UTC m=+0.287197709 container init 5ce7a5b0e42514041b13822cce6aff905eac4f716ac80a2cbe309a9636e11e1b (image=quay.io/ceph/ceph:v18, name=compassionate_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  2 05:51:56 np0005542249 podman[102295]: 2025-12-02 10:51:56.726087571 +0000 UTC m=+0.295935896 container start 5ce7a5b0e42514041b13822cce6aff905eac4f716ac80a2cbe309a9636e11e1b (image=quay.io/ceph/ceph:v18, name=compassionate_almeida, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:51:56 np0005542249 podman[102295]: 2025-12-02 10:51:56.731745105 +0000 UTC m=+0.301593430 container attach 5ce7a5b0e42514041b13822cce6aff905eac4f716ac80a2cbe309a9636e11e1b (image=quay.io/ceph/ceph:v18, name=compassionate_almeida, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 05:51:56 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 5.a scrub starts
Dec  2 05:51:57 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 5.a scrub ok
Dec  2 05:51:57 np0005542249 podman[102467]: 2025-12-02 10:51:57.041358313 +0000 UTC m=+0.054815451 container create e0b6ab144f98acdb8ba5c4436b0b385bd60d35e93501e59457d91dd00b8d4964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_colden, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:51:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v117: 196 pgs: 1 unknown, 195 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 233 B/s rd, 700 B/s wr, 3 op/s
Dec  2 05:51:57 np0005542249 podman[102467]: 2025-12-02 10:51:57.006900677 +0000 UTC m=+0.020357845 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:51:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Dec  2 05:51:57 np0005542249 ceph-mon[75081]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  2 05:51:57 np0005542249 ceph-mon[75081]: Cluster is now healthy
Dec  2 05:51:57 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/4085680628' entity='client.rgw.rgw.compute-0.ssuoka' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  2 05:51:57 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:51:57 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:57 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 05:51:57 np0005542249 systemd[1]: Started libpod-conmon-e0b6ab144f98acdb8ba5c4436b0b385bd60d35e93501e59457d91dd00b8d4964.scope.
Dec  2 05:51:57 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Dec  2 05:51:57 np0005542249 podman[102467]: 2025-12-02 10:51:57.293686034 +0000 UTC m=+0.307143202 container init e0b6ab144f98acdb8ba5c4436b0b385bd60d35e93501e59457d91dd00b8d4964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  2 05:51:57 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Dec  2 05:51:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Dec  2 05:51:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2206506047' entity='client.rgw.rgw.compute-0.ssuoka' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  2 05:51:57 np0005542249 podman[102467]: 2025-12-02 10:51:57.300460558 +0000 UTC m=+0.313917706 container start e0b6ab144f98acdb8ba5c4436b0b385bd60d35e93501e59457d91dd00b8d4964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_colden, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:51:57 np0005542249 podman[102467]: 2025-12-02 10:51:57.304127067 +0000 UTC m=+0.317584235 container attach e0b6ab144f98acdb8ba5c4436b0b385bd60d35e93501e59457d91dd00b8d4964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 05:51:57 np0005542249 musing_colden[102503]: 167 167
Dec  2 05:51:57 np0005542249 systemd[1]: libpod-e0b6ab144f98acdb8ba5c4436b0b385bd60d35e93501e59457d91dd00b8d4964.scope: Deactivated successfully.
Dec  2 05:51:57 np0005542249 podman[102467]: 2025-12-02 10:51:57.305851194 +0000 UTC m=+0.319308362 container died e0b6ab144f98acdb8ba5c4436b0b385bd60d35e93501e59457d91dd00b8d4964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_colden, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  2 05:51:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec  2 05:51:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3003793615' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  2 05:51:57 np0005542249 compassionate_almeida[102372]: 
Dec  2 05:51:57 np0005542249 compassionate_almeida[102372]: {"fsid":"95bc4eaa-1a14-59bf-acf2-4b3da055547d","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":198,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":52,"num_osds":3,"num_up_osds":3,"osd_up_since":1764672643,"num_in_osds":3,"osd_in_since":1764672613,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":194},{"state_name":"unknown","count":1}],"num_pgs":195,"num_pools":9,"num_objects":6,"data_bytes":460666,"bytes_used":84365312,"bytes_avail":64327561216,"bytes_total":64411926528,"unknown_pgs_ratio":0.0051282052882015705,"read_bytes_sec":820,"write_bytes_sec":820,"read_op_per_sec":0,"write_op_per_sec":0},"fsmap":{"epoch":5,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.bydekr","status":"up:active","gid":14265}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":5,"modified":"2025-12-02T10:51:45.051095+0000","services":{}},"progress_events":{"16669eb4-d9e0-49fb-8a34-89fe7124d6c8":{"message":"Global Recovery Event (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Dec  2 05:51:57 np0005542249 systemd[1]: libpod-5ce7a5b0e42514041b13822cce6aff905eac4f716ac80a2cbe309a9636e11e1b.scope: Deactivated successfully.
Dec  2 05:51:57 np0005542249 systemd[1]: var-lib-containers-storage-overlay-6aafe86550c9df16fccd3285b8f54dfd3c9ae9a18861a40fde0df4aa04323447-merged.mount: Deactivated successfully.
Dec  2 05:51:57 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 52 pg[11.0( empty local-lis/les=0/0 n=0 ec=52/52 lis/c=0/0 les/c/f=0/0/0 sis=52) [1] r=0 lpr=52 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:51:57 np0005542249 podman[102467]: 2025-12-02 10:51:57.469217006 +0000 UTC m=+0.482674144 container remove e0b6ab144f98acdb8ba5c4436b0b385bd60d35e93501e59457d91dd00b8d4964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_colden, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:51:57 np0005542249 podman[102295]: 2025-12-02 10:51:57.478867268 +0000 UTC m=+1.048715593 container died 5ce7a5b0e42514041b13822cce6aff905eac4f716ac80a2cbe309a9636e11e1b (image=quay.io/ceph/ceph:v18, name=compassionate_almeida, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  2 05:51:57 np0005542249 systemd[1]: var-lib-containers-storage-overlay-d0c2072318f3f73a75b7889841f8a10a7a38ed1f6d4128846ed30fc587e32687-merged.mount: Deactivated successfully.
Dec  2 05:51:57 np0005542249 podman[102295]: 2025-12-02 10:51:57.535825247 +0000 UTC m=+1.105673562 container remove 5ce7a5b0e42514041b13822cce6aff905eac4f716ac80a2cbe309a9636e11e1b (image=quay.io/ceph/ceph:v18, name=compassionate_almeida, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  2 05:51:57 np0005542249 systemd[1]: libpod-conmon-5ce7a5b0e42514041b13822cce6aff905eac4f716ac80a2cbe309a9636e11e1b.scope: Deactivated successfully.
Dec  2 05:51:57 np0005542249 systemd[1]: libpod-conmon-e0b6ab144f98acdb8ba5c4436b0b385bd60d35e93501e59457d91dd00b8d4964.scope: Deactivated successfully.
Dec  2 05:51:57 np0005542249 ceph-mgr[75372]: [progress INFO root] Writing back 12 completed events
Dec  2 05:51:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  2 05:51:57 np0005542249 podman[102544]: 2025-12-02 10:51:57.615177934 +0000 UTC m=+0.021501235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:51:57 np0005542249 podman[102544]: 2025-12-02 10:51:57.77059842 +0000 UTC m=+0.176921721 container create db8b67559a470265208118b57d5bbc412012d1c2f3d7a4092356914ed1448245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  2 05:51:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:57 np0005542249 ceph-mgr[75372]: [progress INFO root] Completed event 16669eb4-d9e0-49fb-8a34-89fe7124d6c8 (Global Recovery Event) in 5 seconds
Dec  2 05:51:57 np0005542249 systemd[1]: Started libpod-conmon-db8b67559a470265208118b57d5bbc412012d1c2f3d7a4092356914ed1448245.scope.
Dec  2 05:51:57 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:57 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15931748cdb893414beb881e5a18b11305659eec4bc9667b1b994a7fb66c439c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:57 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15931748cdb893414beb881e5a18b11305659eec4bc9667b1b994a7fb66c439c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:57 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15931748cdb893414beb881e5a18b11305659eec4bc9667b1b994a7fb66c439c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:57 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15931748cdb893414beb881e5a18b11305659eec4bc9667b1b994a7fb66c439c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:57 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15931748cdb893414beb881e5a18b11305659eec4bc9667b1b994a7fb66c439c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:57 np0005542249 podman[102544]: 2025-12-02 10:51:57.94897992 +0000 UTC m=+0.355303221 container init db8b67559a470265208118b57d5bbc412012d1c2f3d7a4092356914ed1448245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_sinoussi, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:51:57 np0005542249 podman[102544]: 2025-12-02 10:51:57.958787937 +0000 UTC m=+0.365111208 container start db8b67559a470265208118b57d5bbc412012d1c2f3d7a4092356914ed1448245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_sinoussi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:51:57 np0005542249 podman[102544]: 2025-12-02 10:51:57.985296418 +0000 UTC m=+0.391619709 container attach db8b67559a470265208118b57d5bbc412012d1c2f3d7a4092356914ed1448245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_sinoussi, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:51:58 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 5.b scrub starts
Dec  2 05:51:58 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 5.b scrub ok
Dec  2 05:51:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Dec  2 05:51:58 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2206506047' entity='client.rgw.rgw.compute-0.ssuoka' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  2 05:51:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Dec  2 05:51:58 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Dec  2 05:51:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Dec  2 05:51:58 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2206506047' entity='client.rgw.rgw.compute-0.ssuoka' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  2 05:51:58 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/2206506047' entity='client.rgw.rgw.compute-0.ssuoka' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  2 05:51:58 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:51:58 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 53 pg[11.0( empty local-lis/les=52/53 n=0 ec=52/52 lis/c=0/0 les/c/f=0/0/0 sis=52) [1] r=0 lpr=52 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:51:58 np0005542249 python3[102590]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:51:58 np0005542249 podman[102591]: 2025-12-02 10:51:58.691175889 +0000 UTC m=+0.094557962 container create af9709a882748d9204f9cd5e657cfd948b27aa0561afddc5f066fcdbf841dbea (image=quay.io/ceph/ceph:v18, name=festive_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Dec  2 05:51:58 np0005542249 systemd[1]: Started libpod-conmon-af9709a882748d9204f9cd5e657cfd948b27aa0561afddc5f066fcdbf841dbea.scope.
Dec  2 05:51:58 np0005542249 podman[102591]: 2025-12-02 10:51:58.636950804 +0000 UTC m=+0.040332927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:51:58 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:51:58 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b32b01fe013e3c67034bf75c1855cb12e898c7912688821afd703d70929a1df8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:58 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b32b01fe013e3c67034bf75c1855cb12e898c7912688821afd703d70929a1df8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:51:58 np0005542249 ceph-mds[101614]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec  2 05:51:58 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mds-cephfs-compute-0-bydekr[101610]: 2025-12-02T10:51:58.767+0000 7f018cc2a640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec  2 05:51:58 np0005542249 podman[102591]: 2025-12-02 10:51:58.854705235 +0000 UTC m=+0.258087318 container init af9709a882748d9204f9cd5e657cfd948b27aa0561afddc5f066fcdbf841dbea (image=quay.io/ceph/ceph:v18, name=festive_elgamal, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  2 05:51:58 np0005542249 podman[102591]: 2025-12-02 10:51:58.867067901 +0000 UTC m=+0.270449954 container start af9709a882748d9204f9cd5e657cfd948b27aa0561afddc5f066fcdbf841dbea (image=quay.io/ceph/ceph:v18, name=festive_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  2 05:51:58 np0005542249 podman[102591]: 2025-12-02 10:51:58.914865181 +0000 UTC m=+0.318247314 container attach af9709a882748d9204f9cd5e657cfd948b27aa0561afddc5f066fcdbf841dbea (image=quay.io/ceph/ceph:v18, name=festive_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  2 05:51:59 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 5.d scrub starts
Dec  2 05:51:59 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 5.d scrub ok
Dec  2 05:51:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v120: 197 pgs: 1 unknown, 196 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.0 KiB/s wr, 12 op/s
Dec  2 05:51:59 np0005542249 brave_sinoussi[102560]: --> passed data devices: 0 physical, 3 LVM
Dec  2 05:51:59 np0005542249 brave_sinoussi[102560]: --> relative data size: 1.0
Dec  2 05:51:59 np0005542249 brave_sinoussi[102560]: --> All data devices are unavailable
Dec  2 05:51:59 np0005542249 systemd[1]: libpod-db8b67559a470265208118b57d5bbc412012d1c2f3d7a4092356914ed1448245.scope: Deactivated successfully.
Dec  2 05:51:59 np0005542249 podman[102544]: 2025-12-02 10:51:59.109809411 +0000 UTC m=+1.516132702 container died db8b67559a470265208118b57d5bbc412012d1c2f3d7a4092356914ed1448245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_sinoussi, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:51:59 np0005542249 systemd[1]: libpod-db8b67559a470265208118b57d5bbc412012d1c2f3d7a4092356914ed1448245.scope: Consumed 1.091s CPU time.
Dec  2 05:51:59 np0005542249 systemd[1]: var-lib-containers-storage-overlay-15931748cdb893414beb881e5a18b11305659eec4bc9667b1b994a7fb66c439c-merged.mount: Deactivated successfully.
Dec  2 05:51:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Dec  2 05:51:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec  2 05:51:59 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1483805374' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  2 05:51:59 np0005542249 festive_elgamal[102615]: 
Dec  2 05:51:59 np0005542249 podman[102544]: 2025-12-02 10:51:59.450074832 +0000 UTC m=+1.856398133 container remove db8b67559a470265208118b57d5bbc412012d1c2f3d7a4092356914ed1448245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_sinoussi, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:51:59 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2206506047' entity='client.rgw.rgw.compute-0.ssuoka' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  2 05:51:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Dec  2 05:51:59 np0005542249 systemd[1]: libpod-conmon-db8b67559a470265208118b57d5bbc412012d1c2f3d7a4092356914ed1448245.scope: Deactivated successfully.
Dec  2 05:51:59 np0005542249 systemd[1]: libpod-af9709a882748d9204f9cd5e657cfd948b27aa0561afddc5f066fcdbf841dbea.scope: Deactivated successfully.
Dec  2 05:51:59 np0005542249 festive_elgamal[102615]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","
can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.ssuoka","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Dec  2 05:51:59 np0005542249 podman[102591]: 2025-12-02 10:51:59.467342211 +0000 UTC m=+0.870724334 container died af9709a882748d9204f9cd5e657cfd948b27aa0561afddc5f066fcdbf841dbea (image=quay.io/ceph/ceph:v18, name=festive_elgamal, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  2 05:51:59 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Dec  2 05:51:59 np0005542249 systemd[1]: var-lib-containers-storage-overlay-b32b01fe013e3c67034bf75c1855cb12e898c7912688821afd703d70929a1df8-merged.mount: Deactivated successfully.
Dec  2 05:51:59 np0005542249 podman[102591]: 2025-12-02 10:51:59.65046518 +0000 UTC m=+1.053847223 container remove af9709a882748d9204f9cd5e657cfd948b27aa0561afddc5f066fcdbf841dbea (image=quay.io/ceph/ceph:v18, name=festive_elgamal, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  2 05:51:59 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/2206506047' entity='client.rgw.rgw.compute-0.ssuoka' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  2 05:51:59 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/2206506047' entity='client.rgw.rgw.compute-0.ssuoka' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  2 05:51:59 np0005542249 ceph-mon[75081]: from='client.? 192.168.122.100:0/2206506047' entity='client.rgw.rgw.compute-0.ssuoka' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  2 05:51:59 np0005542249 systemd[1]: libpod-conmon-af9709a882748d9204f9cd5e657cfd948b27aa0561afddc5f066fcdbf841dbea.scope: Deactivated successfully.
Dec  2 05:51:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:51:59 np0005542249 radosgw[101151]: LDAP not started since no server URIs were provided in the configuration.
Dec  2 05:51:59 np0005542249 radosgw[101151]: framework: beast
Dec  2 05:51:59 np0005542249 radosgw[101151]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec  2 05:51:59 np0005542249 radosgw[101151]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec  2 05:51:59 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-rgw-rgw-compute-0-ssuoka[101147]: 2025-12-02T10:51:59.987+0000 7fc90f0e0940 -1 LDAP not started since no server URIs were provided in the configuration.
Dec  2 05:52:00 np0005542249 radosgw[101151]: starting handler: beast
Dec  2 05:52:00 np0005542249 radosgw[101151]: set uid:gid to 167:167 (ceph:ceph)
Dec  2 05:52:00 np0005542249 radosgw[101151]: mgrc service_daemon_register rgw.14271 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.ssuoka,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=5cea1f27-8eab-4d02-a414-5e7ca61e7dc2,zone_name=default,zonegroup_id=3b343abf-7d33-41ab-8076-898b1837a7d4,zonegroup_name=default}
Dec  2 05:52:00 np0005542249 podman[102975]: 2025-12-02 10:52:00.069711148 +0000 UTC m=+0.027979002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:52:00 np0005542249 podman[102975]: 2025-12-02 10:52:00.265361019 +0000 UTC m=+0.223628823 container create 791514b7e0fe129bbc4acdc92ea508d7723e8957d8586ce0872ca6e9cc63d394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:52:00 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 4.1d deep-scrub starts
Dec  2 05:52:00 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 4.1d deep-scrub ok
Dec  2 05:52:00 np0005542249 systemd[1]: Started libpod-conmon-791514b7e0fe129bbc4acdc92ea508d7723e8957d8586ce0872ca6e9cc63d394.scope.
Dec  2 05:52:00 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:52:00 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Dec  2 05:52:00 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Dec  2 05:52:00 np0005542249 podman[102975]: 2025-12-02 10:52:00.430448986 +0000 UTC m=+0.388716780 container init 791514b7e0fe129bbc4acdc92ea508d7723e8957d8586ce0872ca6e9cc63d394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_golick, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:52:00 np0005542249 podman[102975]: 2025-12-02 10:52:00.438162605 +0000 UTC m=+0.396430379 container start 791514b7e0fe129bbc4acdc92ea508d7723e8957d8586ce0872ca6e9cc63d394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_golick, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 05:52:00 np0005542249 xenodochial_golick[103382]: 167 167
Dec  2 05:52:00 np0005542249 systemd[1]: libpod-791514b7e0fe129bbc4acdc92ea508d7723e8957d8586ce0872ca6e9cc63d394.scope: Deactivated successfully.
Dec  2 05:52:00 np0005542249 podman[102975]: 2025-12-02 10:52:00.468381307 +0000 UTC m=+0.426649061 container attach 791514b7e0fe129bbc4acdc92ea508d7723e8957d8586ce0872ca6e9cc63d394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_golick, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 05:52:00 np0005542249 podman[102975]: 2025-12-02 10:52:00.469372544 +0000 UTC m=+0.427640308 container died 791514b7e0fe129bbc4acdc92ea508d7723e8957d8586ce0872ca6e9cc63d394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Dec  2 05:52:00 np0005542249 systemd[1]: var-lib-containers-storage-overlay-42e56796f86f2e2d6c125fff0986ccd4635828c8b8bc5cab3c06820246192b01-merged.mount: Deactivated successfully.
Dec  2 05:52:00 np0005542249 podman[102975]: 2025-12-02 10:52:00.516954118 +0000 UTC m=+0.475221882 container remove 791514b7e0fe129bbc4acdc92ea508d7723e8957d8586ce0872ca6e9cc63d394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Dec  2 05:52:00 np0005542249 systemd[1]: libpod-conmon-791514b7e0fe129bbc4acdc92ea508d7723e8957d8586ce0872ca6e9cc63d394.scope: Deactivated successfully.
Dec  2 05:52:00 np0005542249 python3[103420]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:52:00 np0005542249 podman[103436]: 2025-12-02 10:52:00.674156492 +0000 UTC m=+0.041081868 container create 8b2d49012152b41f1fdf93ae9078381e5a8f28dee74a561f6bbd9fb3ce28dc2e (image=quay.io/ceph/ceph:v18, name=unruffled_hawking, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Dec  2 05:52:00 np0005542249 podman[103434]: 2025-12-02 10:52:00.691485582 +0000 UTC m=+0.059878788 container create 9662899e6851d70ac21121aaeeb4fff374a6953c5058460bc9e82dda95fb25d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hertz, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  2 05:52:00 np0005542249 systemd[1]: Started libpod-conmon-8b2d49012152b41f1fdf93ae9078381e5a8f28dee74a561f6bbd9fb3ce28dc2e.scope.
Dec  2 05:52:00 np0005542249 systemd[1]: Started libpod-conmon-9662899e6851d70ac21121aaeeb4fff374a6953c5058460bc9e82dda95fb25d7.scope.
Dec  2 05:52:00 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:52:00 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b940b6e146179b4827c65d9809aabd3fbd1e376879ca63e40cf185ce39d6a16/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:00 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b940b6e146179b4827c65d9809aabd3fbd1e376879ca63e40cf185ce39d6a16/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:00 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:52:00 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d2a08bffabd672f68dc64253e9ae4e03da5e985f6aa6df3b122652400abd3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:00 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d2a08bffabd672f68dc64253e9ae4e03da5e985f6aa6df3b122652400abd3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:00 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d2a08bffabd672f68dc64253e9ae4e03da5e985f6aa6df3b122652400abd3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:00 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d2a08bffabd672f68dc64253e9ae4e03da5e985f6aa6df3b122652400abd3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:00 np0005542249 podman[103436]: 2025-12-02 10:52:00.742463489 +0000 UTC m=+0.109388895 container init 8b2d49012152b41f1fdf93ae9078381e5a8f28dee74a561f6bbd9fb3ce28dc2e (image=quay.io/ceph/ceph:v18, name=unruffled_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:52:00 np0005542249 podman[103434]: 2025-12-02 10:52:00.747884576 +0000 UTC m=+0.116277792 container init 9662899e6851d70ac21121aaeeb4fff374a6953c5058460bc9e82dda95fb25d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hertz, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:52:00 np0005542249 podman[103436]: 2025-12-02 10:52:00.654244241 +0000 UTC m=+0.021169637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:52:00 np0005542249 podman[103436]: 2025-12-02 10:52:00.751199976 +0000 UTC m=+0.118125352 container start 8b2d49012152b41f1fdf93ae9078381e5a8f28dee74a561f6bbd9fb3ce28dc2e (image=quay.io/ceph/ceph:v18, name=unruffled_hawking, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:52:00 np0005542249 podman[103436]: 2025-12-02 10:52:00.755291988 +0000 UTC m=+0.122217364 container attach 8b2d49012152b41f1fdf93ae9078381e5a8f28dee74a561f6bbd9fb3ce28dc2e (image=quay.io/ceph/ceph:v18, name=unruffled_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:52:00 np0005542249 podman[103434]: 2025-12-02 10:52:00.756273315 +0000 UTC m=+0.124666531 container start 9662899e6851d70ac21121aaeeb4fff374a6953c5058460bc9e82dda95fb25d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  2 05:52:00 np0005542249 podman[103434]: 2025-12-02 10:52:00.664755526 +0000 UTC m=+0.033148762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:52:00 np0005542249 podman[103434]: 2025-12-02 10:52:00.760244513 +0000 UTC m=+0.128637719 container attach 9662899e6851d70ac21121aaeeb4fff374a6953c5058460bc9e82dda95fb25d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hertz, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:52:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v122: 197 pgs: 1 unknown, 196 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 208 B/s rd, 3.3 KiB/s wr, 10 op/s
Dec  2 05:52:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Dec  2 05:52:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1909913017' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Dec  2 05:52:01 np0005542249 unruffled_hawking[103464]: mimic
Dec  2 05:52:01 np0005542249 systemd[1]: libpod-8b2d49012152b41f1fdf93ae9078381e5a8f28dee74a561f6bbd9fb3ce28dc2e.scope: Deactivated successfully.
Dec  2 05:52:01 np0005542249 podman[103436]: 2025-12-02 10:52:01.350378917 +0000 UTC m=+0.717304333 container died 8b2d49012152b41f1fdf93ae9078381e5a8f28dee74a561f6bbd9fb3ce28dc2e (image=quay.io/ceph/ceph:v18, name=unruffled_hawking, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:52:01 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Dec  2 05:52:01 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Dec  2 05:52:01 np0005542249 systemd[1]: var-lib-containers-storage-overlay-0b940b6e146179b4827c65d9809aabd3fbd1e376879ca63e40cf185ce39d6a16-merged.mount: Deactivated successfully.
Dec  2 05:52:01 np0005542249 podman[103436]: 2025-12-02 10:52:01.41595734 +0000 UTC m=+0.782882746 container remove 8b2d49012152b41f1fdf93ae9078381e5a8f28dee74a561f6bbd9fb3ce28dc2e (image=quay.io/ceph/ceph:v18, name=unruffled_hawking, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  2 05:52:01 np0005542249 systemd[1]: libpod-conmon-8b2d49012152b41f1fdf93ae9078381e5a8f28dee74a561f6bbd9fb3ce28dc2e.scope: Deactivated successfully.
Dec  2 05:52:01 np0005542249 practical_hertz[103468]: {
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:    "0": [
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:        {
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "devices": [
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "/dev/loop3"
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            ],
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "lv_name": "ceph_lv0",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "lv_size": "21470642176",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "name": "ceph_lv0",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "tags": {
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.cluster_name": "ceph",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.crush_device_class": "",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.encrypted": "0",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.osd_id": "0",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.type": "block",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.vdo": "0"
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            },
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "type": "block",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "vg_name": "ceph_vg0"
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:        }
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:    ],
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:    "1": [
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:        {
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "devices": [
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "/dev/loop4"
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            ],
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "lv_name": "ceph_lv1",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "lv_size": "21470642176",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "name": "ceph_lv1",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "tags": {
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.cluster_name": "ceph",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.crush_device_class": "",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.encrypted": "0",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.osd_id": "1",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.type": "block",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.vdo": "0"
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            },
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "type": "block",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "vg_name": "ceph_vg1"
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:        }
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:    ],
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:    "2": [
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:        {
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "devices": [
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "/dev/loop5"
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            ],
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "lv_name": "ceph_lv2",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "lv_size": "21470642176",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "name": "ceph_lv2",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "tags": {
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.cluster_name": "ceph",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.crush_device_class": "",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.encrypted": "0",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.osd_id": "2",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.type": "block",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:                "ceph.vdo": "0"
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            },
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "type": "block",
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:            "vg_name": "ceph_vg2"
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:        }
Dec  2 05:52:01 np0005542249 practical_hertz[103468]:    ]
Dec  2 05:52:01 np0005542249 practical_hertz[103468]: }
Dec  2 05:52:01 np0005542249 systemd[1]: libpod-9662899e6851d70ac21121aaeeb4fff374a6953c5058460bc9e82dda95fb25d7.scope: Deactivated successfully.
Dec  2 05:52:01 np0005542249 conmon[103468]: conmon 9662899e6851d70ac211 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9662899e6851d70ac21121aaeeb4fff374a6953c5058460bc9e82dda95fb25d7.scope/container/memory.events
Dec  2 05:52:01 np0005542249 podman[103514]: 2025-12-02 10:52:01.592027577 +0000 UTC m=+0.024643471 container died 9662899e6851d70ac21121aaeeb4fff374a6953c5058460bc9e82dda95fb25d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hertz, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Dec  2 05:52:01 np0005542249 systemd[1]: var-lib-containers-storage-overlay-b4d2a08bffabd672f68dc64253e9ae4e03da5e985f6aa6df3b122652400abd3b-merged.mount: Deactivated successfully.
Dec  2 05:52:01 np0005542249 podman[103514]: 2025-12-02 10:52:01.649438158 +0000 UTC m=+0.082054042 container remove 9662899e6851d70ac21121aaeeb4fff374a6953c5058460bc9e82dda95fb25d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hertz, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:52:01 np0005542249 systemd[1]: libpod-conmon-9662899e6851d70ac21121aaeeb4fff374a6953c5058460bc9e82dda95fb25d7.scope: Deactivated successfully.
Dec  2 05:52:02 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.1c deep-scrub starts
Dec  2 05:52:02 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 3.1c deep-scrub ok
Dec  2 05:52:02 np0005542249 podman[103693]: 2025-12-02 10:52:02.386898767 +0000 UTC m=+0.059053866 container create 6ded804704040e50ab23f870c52f23e6347cb4f99cbe6b6937a79c71731b1390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_heyrovsky, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 05:52:02 np0005542249 python3[103685]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:52:02 np0005542249 systemd[1]: Started libpod-conmon-6ded804704040e50ab23f870c52f23e6347cb4f99cbe6b6937a79c71731b1390.scope.
Dec  2 05:52:02 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:52:02 np0005542249 podman[103693]: 2025-12-02 10:52:02.36310916 +0000 UTC m=+0.035264339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:52:02 np0005542249 podman[103707]: 2025-12-02 10:52:02.462161954 +0000 UTC m=+0.040783890 container create 9988d7c415b6615532256d7afbd620a238c4519d40f9561ce49dfd747fae60b3 (image=quay.io/ceph/ceph:v18, name=heuristic_lalande, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  2 05:52:02 np0005542249 podman[103693]: 2025-12-02 10:52:02.472591538 +0000 UTC m=+0.144746637 container init 6ded804704040e50ab23f870c52f23e6347cb4f99cbe6b6937a79c71731b1390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  2 05:52:02 np0005542249 podman[103693]: 2025-12-02 10:52:02.478655702 +0000 UTC m=+0.150810801 container start 6ded804704040e50ab23f870c52f23e6347cb4f99cbe6b6937a79c71731b1390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_heyrovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  2 05:52:02 np0005542249 quizzical_heyrovsky[103721]: 167 167
Dec  2 05:52:02 np0005542249 podman[103693]: 2025-12-02 10:52:02.482955089 +0000 UTC m=+0.155110188 container attach 6ded804704040e50ab23f870c52f23e6347cb4f99cbe6b6937a79c71731b1390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 05:52:02 np0005542249 systemd[1]: libpod-6ded804704040e50ab23f870c52f23e6347cb4f99cbe6b6937a79c71731b1390.scope: Deactivated successfully.
Dec  2 05:52:02 np0005542249 podman[103693]: 2025-12-02 10:52:02.483888575 +0000 UTC m=+0.156043694 container died 6ded804704040e50ab23f870c52f23e6347cb4f99cbe6b6937a79c71731b1390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_heyrovsky, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  2 05:52:02 np0005542249 systemd[1]: Started libpod-conmon-9988d7c415b6615532256d7afbd620a238c4519d40f9561ce49dfd747fae60b3.scope.
Dec  2 05:52:02 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:52:02 np0005542249 systemd[1]: var-lib-containers-storage-overlay-d59598a3ca96e89214801380b8144b0476e449459510ab0f1b749a44bfdf5127-merged.mount: Deactivated successfully.
Dec  2 05:52:02 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7832fcafd0a64b97b259bc02c57a797f070fdcd589a47ab9e9e0f0541ba0c4f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:02 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7832fcafd0a64b97b259bc02c57a797f070fdcd589a47ab9e9e0f0541ba0c4f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:02 np0005542249 podman[103693]: 2025-12-02 10:52:02.533577585 +0000 UTC m=+0.205732674 container remove 6ded804704040e50ab23f870c52f23e6347cb4f99cbe6b6937a79c71731b1390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  2 05:52:02 np0005542249 podman[103707]: 2025-12-02 10:52:02.538312814 +0000 UTC m=+0.116934750 container init 9988d7c415b6615532256d7afbd620a238c4519d40f9561ce49dfd747fae60b3 (image=quay.io/ceph/ceph:v18, name=heuristic_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  2 05:52:02 np0005542249 podman[103707]: 2025-12-02 10:52:02.442312014 +0000 UTC m=+0.020933960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:52:02 np0005542249 systemd[1]: libpod-conmon-6ded804704040e50ab23f870c52f23e6347cb4f99cbe6b6937a79c71731b1390.scope: Deactivated successfully.
Dec  2 05:52:02 np0005542249 podman[103707]: 2025-12-02 10:52:02.547131604 +0000 UTC m=+0.125753530 container start 9988d7c415b6615532256d7afbd620a238c4519d40f9561ce49dfd747fae60b3 (image=quay.io/ceph/ceph:v18, name=heuristic_lalande, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  2 05:52:02 np0005542249 podman[103707]: 2025-12-02 10:52:02.550405033 +0000 UTC m=+0.129026959 container attach 9988d7c415b6615532256d7afbd620a238c4519d40f9561ce49dfd747fae60b3 (image=quay.io/ceph/ceph:v18, name=heuristic_lalande, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  2 05:52:02 np0005542249 podman[103754]: 2025-12-02 10:52:02.769900411 +0000 UTC m=+0.076162523 container create fe4a1905d5855284ee900ce5736a2dfbe1e9491c255adbfa2d3aedbd9837cbac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  2 05:52:02 np0005542249 ceph-mgr[75372]: [progress INFO root] Writing back 13 completed events
Dec  2 05:52:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  2 05:52:02 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:52:02 np0005542249 podman[103754]: 2025-12-02 10:52:02.739156825 +0000 UTC m=+0.045418987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:52:02 np0005542249 systemd[1]: Started libpod-conmon-fe4a1905d5855284ee900ce5736a2dfbe1e9491c255adbfa2d3aedbd9837cbac.scope.
Dec  2 05:52:02 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:52:02 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7b16583a5f3199aa6c7cf0b1472c40906559e842bdf134ebea7f80c577b7ae0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:02 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7b16583a5f3199aa6c7cf0b1472c40906559e842bdf134ebea7f80c577b7ae0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:02 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7b16583a5f3199aa6c7cf0b1472c40906559e842bdf134ebea7f80c577b7ae0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:02 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7b16583a5f3199aa6c7cf0b1472c40906559e842bdf134ebea7f80c577b7ae0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:02 np0005542249 podman[103754]: 2025-12-02 10:52:02.880408105 +0000 UTC m=+0.186670257 container init fe4a1905d5855284ee900ce5736a2dfbe1e9491c255adbfa2d3aedbd9837cbac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  2 05:52:02 np0005542249 podman[103754]: 2025-12-02 10:52:02.89787902 +0000 UTC m=+0.204141132 container start fe4a1905d5855284ee900ce5736a2dfbe1e9491c255adbfa2d3aedbd9837cbac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:52:02 np0005542249 podman[103754]: 2025-12-02 10:52:02.90192648 +0000 UTC m=+0.208188602 container attach fe4a1905d5855284ee900ce5736a2dfbe1e9491c255adbfa2d3aedbd9837cbac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  2 05:52:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v123: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 7.5 KiB/s wr, 188 op/s
Dec  2 05:52:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Dec  2 05:52:03 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/236459845' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Dec  2 05:52:03 np0005542249 heuristic_lalande[103741]: 
Dec  2 05:52:03 np0005542249 heuristic_lalande[103741]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"rgw":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":7}}
Dec  2 05:52:03 np0005542249 systemd[1]: libpod-9988d7c415b6615532256d7afbd620a238c4519d40f9561ce49dfd747fae60b3.scope: Deactivated successfully.
Dec  2 05:52:03 np0005542249 podman[103707]: 2025-12-02 10:52:03.176956008 +0000 UTC m=+0.755577944 container died 9988d7c415b6615532256d7afbd620a238c4519d40f9561ce49dfd747fae60b3 (image=quay.io/ceph/ceph:v18, name=heuristic_lalande, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  2 05:52:03 np0005542249 systemd[1]: var-lib-containers-storage-overlay-a7832fcafd0a64b97b259bc02c57a797f070fdcd589a47ab9e9e0f0541ba0c4f-merged.mount: Deactivated successfully.
Dec  2 05:52:03 np0005542249 podman[103707]: 2025-12-02 10:52:03.218763934 +0000 UTC m=+0.797385860 container remove 9988d7c415b6615532256d7afbd620a238c4519d40f9561ce49dfd747fae60b3 (image=quay.io/ceph/ceph:v18, name=heuristic_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  2 05:52:03 np0005542249 systemd[1]: libpod-conmon-9988d7c415b6615532256d7afbd620a238c4519d40f9561ce49dfd747fae60b3.scope: Deactivated successfully.
Dec  2 05:52:03 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Dec  2 05:52:03 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Dec  2 05:52:03 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec  2 05:52:03 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec  2 05:52:03 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:52:03 np0005542249 pedantic_zhukovsky[103788]: {
Dec  2 05:52:03 np0005542249 pedantic_zhukovsky[103788]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 05:52:03 np0005542249 pedantic_zhukovsky[103788]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:52:03 np0005542249 pedantic_zhukovsky[103788]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 05:52:03 np0005542249 pedantic_zhukovsky[103788]:        "osd_id": 0,
Dec  2 05:52:03 np0005542249 pedantic_zhukovsky[103788]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 05:52:03 np0005542249 pedantic_zhukovsky[103788]:        "type": "bluestore"
Dec  2 05:52:03 np0005542249 pedantic_zhukovsky[103788]:    },
Dec  2 05:52:03 np0005542249 pedantic_zhukovsky[103788]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 05:52:03 np0005542249 pedantic_zhukovsky[103788]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:52:03 np0005542249 pedantic_zhukovsky[103788]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 05:52:03 np0005542249 pedantic_zhukovsky[103788]:        "osd_id": 2,
Dec  2 05:52:03 np0005542249 pedantic_zhukovsky[103788]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 05:52:03 np0005542249 pedantic_zhukovsky[103788]:        "type": "bluestore"
Dec  2 05:52:03 np0005542249 pedantic_zhukovsky[103788]:    },
Dec  2 05:52:03 np0005542249 pedantic_zhukovsky[103788]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 05:52:03 np0005542249 pedantic_zhukovsky[103788]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:52:03 np0005542249 pedantic_zhukovsky[103788]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 05:52:03 np0005542249 pedantic_zhukovsky[103788]:        "osd_id": 1,
Dec  2 05:52:03 np0005542249 pedantic_zhukovsky[103788]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 05:52:03 np0005542249 pedantic_zhukovsky[103788]:        "type": "bluestore"
Dec  2 05:52:03 np0005542249 pedantic_zhukovsky[103788]:    }
Dec  2 05:52:03 np0005542249 pedantic_zhukovsky[103788]: }
Dec  2 05:52:04 np0005542249 systemd[1]: libpod-fe4a1905d5855284ee900ce5736a2dfbe1e9491c255adbfa2d3aedbd9837cbac.scope: Deactivated successfully.
Dec  2 05:52:04 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 5.e scrub starts
Dec  2 05:52:04 np0005542249 systemd[1]: libpod-fe4a1905d5855284ee900ce5736a2dfbe1e9491c255adbfa2d3aedbd9837cbac.scope: Consumed 1.136s CPU time.
Dec  2 05:52:04 np0005542249 podman[103754]: 2025-12-02 10:52:04.035542811 +0000 UTC m=+1.341804973 container died fe4a1905d5855284ee900ce5736a2dfbe1e9491c255adbfa2d3aedbd9837cbac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:52:04 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 5.e scrub ok
Dec  2 05:52:04 np0005542249 systemd[1]: var-lib-containers-storage-overlay-f7b16583a5f3199aa6c7cf0b1472c40906559e842bdf134ebea7f80c577b7ae0-merged.mount: Deactivated successfully.
Dec  2 05:52:04 np0005542249 podman[103754]: 2025-12-02 10:52:04.090052433 +0000 UTC m=+1.396314515 container remove fe4a1905d5855284ee900ce5736a2dfbe1e9491c255adbfa2d3aedbd9837cbac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_zhukovsky, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:52:04 np0005542249 systemd[1]: libpod-conmon-fe4a1905d5855284ee900ce5736a2dfbe1e9491c255adbfa2d3aedbd9837cbac.scope: Deactivated successfully.
Dec  2 05:52:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:52:04 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:52:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:52:04 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:52:04 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev c80ca8f0-06ef-412c-a447-23d1e569e355 does not exist
Dec  2 05:52:04 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 94c8e06c-a898-432e-b3bf-9710fba19ec1 does not exist
Dec  2 05:52:04 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:52:04 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:52:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:52:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v124: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 5.8 KiB/s wr, 145 op/s
Dec  2 05:52:05 np0005542249 podman[104071]: 2025-12-02 10:52:05.268956745 +0000 UTC m=+0.069173763 container exec cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:52:05 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 7.b scrub starts
Dec  2 05:52:05 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 7.b scrub ok
Dec  2 05:52:05 np0005542249 podman[104071]: 2025-12-02 10:52:05.374262678 +0000 UTC m=+0.174479686 container exec_died cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:52:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:52:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:52:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:52:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:52:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:52:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:52:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 05:52:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:52:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 05:52:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:52:06 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev f14aff0e-2d4e-4132-9d2f-c1ce6ae16e22 does not exist
Dec  2 05:52:06 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 4f42ddcd-8acc-4912-8c8f-4afd856be749 does not exist
Dec  2 05:52:06 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev b82ae5c6-c6f0-4b2f-a9c5-815bfddb03a4 does not exist
Dec  2 05:52:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 05:52:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 05:52:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 05:52:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 05:52:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:52:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:52:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v125: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 5.2 KiB/s wr, 129 op/s
Dec  2 05:52:07 np0005542249 podman[104369]: 2025-12-02 10:52:07.16189576 +0000 UTC m=+0.060482946 container create 8b5f6644c1ef4d39e8a2b7531251ec9f7533c16f0ebdf53b51755f44a0469e7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_satoshi, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  2 05:52:07 np0005542249 systemd[1]: Started libpod-conmon-8b5f6644c1ef4d39e8a2b7531251ec9f7533c16f0ebdf53b51755f44a0469e7f.scope.
Dec  2 05:52:07 np0005542249 podman[104369]: 2025-12-02 10:52:07.139737897 +0000 UTC m=+0.038325083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:52:07 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:52:07 np0005542249 podman[104369]: 2025-12-02 10:52:07.25792688 +0000 UTC m=+0.156514106 container init 8b5f6644c1ef4d39e8a2b7531251ec9f7533c16f0ebdf53b51755f44a0469e7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Dec  2 05:52:07 np0005542249 podman[104369]: 2025-12-02 10:52:07.263509422 +0000 UTC m=+0.162096608 container start 8b5f6644c1ef4d39e8a2b7531251ec9f7533c16f0ebdf53b51755f44a0469e7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:52:07 np0005542249 podman[104369]: 2025-12-02 10:52:07.267071819 +0000 UTC m=+0.165658995 container attach 8b5f6644c1ef4d39e8a2b7531251ec9f7533c16f0ebdf53b51755f44a0469e7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_satoshi, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  2 05:52:07 np0005542249 objective_satoshi[104385]: 167 167
Dec  2 05:52:07 np0005542249 systemd[1]: libpod-8b5f6644c1ef4d39e8a2b7531251ec9f7533c16f0ebdf53b51755f44a0469e7f.scope: Deactivated successfully.
Dec  2 05:52:07 np0005542249 podman[104369]: 2025-12-02 10:52:07.270510942 +0000 UTC m=+0.169098118 container died 8b5f6644c1ef4d39e8a2b7531251ec9f7533c16f0ebdf53b51755f44a0469e7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_satoshi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:52:07 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Dec  2 05:52:07 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Dec  2 05:52:07 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:52:07 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:52:07 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:52:07 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:52:07 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 05:52:07 np0005542249 systemd[1]: var-lib-containers-storage-overlay-2c081b83aa690882b33784500187a1af267c7045d1b7a7c00498fc098c4b2f39-merged.mount: Deactivated successfully.
Dec  2 05:52:07 np0005542249 podman[104369]: 2025-12-02 10:52:07.314246951 +0000 UTC m=+0.212834107 container remove 8b5f6644c1ef4d39e8a2b7531251ec9f7533c16f0ebdf53b51755f44a0469e7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_satoshi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  2 05:52:07 np0005542249 systemd[1]: libpod-conmon-8b5f6644c1ef4d39e8a2b7531251ec9f7533c16f0ebdf53b51755f44a0469e7f.scope: Deactivated successfully.
Dec  2 05:52:07 np0005542249 podman[104409]: 2025-12-02 10:52:07.502858969 +0000 UTC m=+0.056496016 container create eed79705ba28f1c5aeb6d5bd5757e9c98a600e7cb612928b6a88950836cb636e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:52:07 np0005542249 systemd[1]: Started libpod-conmon-eed79705ba28f1c5aeb6d5bd5757e9c98a600e7cb612928b6a88950836cb636e.scope.
Dec  2 05:52:07 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:52:07 np0005542249 podman[104409]: 2025-12-02 10:52:07.479119504 +0000 UTC m=+0.032756641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:52:07 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5d75a11bebf418787656115c6447b55765a7a961f52a692aedfc14f67603ca0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:07 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5d75a11bebf418787656115c6447b55765a7a961f52a692aedfc14f67603ca0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:07 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5d75a11bebf418787656115c6447b55765a7a961f52a692aedfc14f67603ca0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:07 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5d75a11bebf418787656115c6447b55765a7a961f52a692aedfc14f67603ca0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:07 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5d75a11bebf418787656115c6447b55765a7a961f52a692aedfc14f67603ca0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:07 np0005542249 podman[104409]: 2025-12-02 10:52:07.588866378 +0000 UTC m=+0.142503475 container init eed79705ba28f1c5aeb6d5bd5757e9c98a600e7cb612928b6a88950836cb636e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_liskov, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  2 05:52:07 np0005542249 podman[104409]: 2025-12-02 10:52:07.602660152 +0000 UTC m=+0.156297209 container start eed79705ba28f1c5aeb6d5bd5757e9c98a600e7cb612928b6a88950836cb636e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_liskov, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:52:07 np0005542249 podman[104409]: 2025-12-02 10:52:07.605981333 +0000 UTC m=+0.159618440 container attach eed79705ba28f1c5aeb6d5bd5757e9c98a600e7cb612928b6a88950836cb636e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_liskov, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Dec  2 05:52:08 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Dec  2 05:52:08 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Dec  2 05:52:08 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec  2 05:52:08 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec  2 05:52:08 np0005542249 nervous_liskov[104425]: --> passed data devices: 0 physical, 3 LVM
Dec  2 05:52:08 np0005542249 nervous_liskov[104425]: --> relative data size: 1.0
Dec  2 05:52:08 np0005542249 nervous_liskov[104425]: --> All data devices are unavailable
Dec  2 05:52:08 np0005542249 systemd[1]: libpod-eed79705ba28f1c5aeb6d5bd5757e9c98a600e7cb612928b6a88950836cb636e.scope: Deactivated successfully.
Dec  2 05:52:08 np0005542249 systemd[1]: libpod-eed79705ba28f1c5aeb6d5bd5757e9c98a600e7cb612928b6a88950836cb636e.scope: Consumed 1.033s CPU time.
Dec  2 05:52:08 np0005542249 podman[104409]: 2025-12-02 10:52:08.691194084 +0000 UTC m=+1.244831221 container died eed79705ba28f1c5aeb6d5bd5757e9c98a600e7cb612928b6a88950836cb636e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_liskov, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:52:08 np0005542249 systemd[1]: var-lib-containers-storage-overlay-e5d75a11bebf418787656115c6447b55765a7a961f52a692aedfc14f67603ca0-merged.mount: Deactivated successfully.
Dec  2 05:52:08 np0005542249 podman[104409]: 2025-12-02 10:52:08.760881373 +0000 UTC m=+1.314518460 container remove eed79705ba28f1c5aeb6d5bd5757e9c98a600e7cb612928b6a88950836cb636e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_liskov, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:52:08 np0005542249 systemd[1]: libpod-conmon-eed79705ba28f1c5aeb6d5bd5757e9c98a600e7cb612928b6a88950836cb636e.scope: Deactivated successfully.
Dec  2 05:52:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v126: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.2 KiB/s wr, 109 op/s
Dec  2 05:52:09 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Dec  2 05:52:09 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Dec  2 05:52:09 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Dec  2 05:52:09 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Dec  2 05:52:09 np0005542249 podman[104607]: 2025-12-02 10:52:09.530317197 +0000 UTC m=+0.040773404 container create d0220c1640deb1731b9c027fbe44c63626b4f8241447b82a15ef67c571ed6c63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_nightingale, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  2 05:52:09 np0005542249 systemd[1]: Started libpod-conmon-d0220c1640deb1731b9c027fbe44c63626b4f8241447b82a15ef67c571ed6c63.scope.
Dec  2 05:52:09 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:52:09 np0005542249 podman[104607]: 2025-12-02 10:52:09.512278143 +0000 UTC m=+0.022734330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:52:09 np0005542249 podman[104607]: 2025-12-02 10:52:09.611642308 +0000 UTC m=+0.122098565 container init d0220c1640deb1731b9c027fbe44c63626b4f8241447b82a15ef67c571ed6c63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_nightingale, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  2 05:52:09 np0005542249 podman[104607]: 2025-12-02 10:52:09.619029467 +0000 UTC m=+0.129485674 container start d0220c1640deb1731b9c027fbe44c63626b4f8241447b82a15ef67c571ed6c63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:52:09 np0005542249 podman[104607]: 2025-12-02 10:52:09.622850918 +0000 UTC m=+0.133307115 container attach d0220c1640deb1731b9c027fbe44c63626b4f8241447b82a15ef67c571ed6c63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:52:09 np0005542249 happy_nightingale[104623]: 167 167
Dec  2 05:52:09 np0005542249 systemd[1]: libpod-d0220c1640deb1731b9c027fbe44c63626b4f8241447b82a15ef67c571ed6c63.scope: Deactivated successfully.
Dec  2 05:52:09 np0005542249 podman[104607]: 2025-12-02 10:52:09.624795681 +0000 UTC m=+0.135251848 container died d0220c1640deb1731b9c027fbe44c63626b4f8241447b82a15ef67c571ed6c63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  2 05:52:09 np0005542249 systemd[1]: var-lib-containers-storage-overlay-fb1196a9bcda6786abb2dcb20bbb448f6d518d7286555f4c2a36e5f8f91a20d3-merged.mount: Deactivated successfully.
Dec  2 05:52:09 np0005542249 podman[104607]: 2025-12-02 10:52:09.65868917 +0000 UTC m=+0.169145337 container remove d0220c1640deb1731b9c027fbe44c63626b4f8241447b82a15ef67c571ed6c63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_nightingale, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Dec  2 05:52:09 np0005542249 systemd[1]: libpod-conmon-d0220c1640deb1731b9c027fbe44c63626b4f8241447b82a15ef67c571ed6c63.scope: Deactivated successfully.
Dec  2 05:52:09 np0005542249 podman[104646]: 2025-12-02 10:52:09.822493063 +0000 UTC m=+0.045080880 container create 787393baad624563aa2ee5d3584373bce7d165bf23fc52df16cf2185df1de7d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jones, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  2 05:52:09 np0005542249 systemd[1]: Started libpod-conmon-787393baad624563aa2ee5d3584373bce7d165bf23fc52df16cf2185df1de7d3.scope.
Dec  2 05:52:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:52:09 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:52:09 np0005542249 podman[104646]: 2025-12-02 10:52:09.799904567 +0000 UTC m=+0.022492414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:52:09 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30b2c9a757860609e04049515a44f7c22208e746e4010dc17d76cfce62fd979/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:09 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30b2c9a757860609e04049515a44f7c22208e746e4010dc17d76cfce62fd979/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:09 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30b2c9a757860609e04049515a44f7c22208e746e4010dc17d76cfce62fd979/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:09 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30b2c9a757860609e04049515a44f7c22208e746e4010dc17d76cfce62fd979/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:09 np0005542249 podman[104646]: 2025-12-02 10:52:09.916167825 +0000 UTC m=+0.138755652 container init 787393baad624563aa2ee5d3584373bce7d165bf23fc52df16cf2185df1de7d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jones, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:52:09 np0005542249 podman[104646]: 2025-12-02 10:52:09.928081814 +0000 UTC m=+0.150669611 container start 787393baad624563aa2ee5d3584373bce7d165bf23fc52df16cf2185df1de7d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jones, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  2 05:52:09 np0005542249 podman[104646]: 2025-12-02 10:52:09.931554528 +0000 UTC m=+0.154142325 container attach 787393baad624563aa2ee5d3584373bce7d165bf23fc52df16cf2185df1de7d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  2 05:52:10 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Dec  2 05:52:10 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Dec  2 05:52:10 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Dec  2 05:52:10 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]: {
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:    "0": [
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:        {
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "devices": [
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "/dev/loop3"
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            ],
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "lv_name": "ceph_lv0",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "lv_size": "21470642176",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "name": "ceph_lv0",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "tags": {
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.cluster_name": "ceph",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.crush_device_class": "",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.encrypted": "0",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.osd_id": "0",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.type": "block",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.vdo": "0"
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            },
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "type": "block",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "vg_name": "ceph_vg0"
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:        }
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:    ],
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:    "1": [
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:        {
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "devices": [
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "/dev/loop4"
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            ],
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "lv_name": "ceph_lv1",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "lv_size": "21470642176",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "name": "ceph_lv1",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "tags": {
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.cluster_name": "ceph",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.crush_device_class": "",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.encrypted": "0",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.osd_id": "1",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.type": "block",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.vdo": "0"
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            },
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "type": "block",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "vg_name": "ceph_vg1"
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:        }
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:    ],
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:    "2": [
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:        {
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "devices": [
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "/dev/loop5"
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            ],
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "lv_name": "ceph_lv2",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "lv_size": "21470642176",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "name": "ceph_lv2",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "tags": {
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.cluster_name": "ceph",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.crush_device_class": "",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.encrypted": "0",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.osd_id": "2",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.type": "block",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:                "ceph.vdo": "0"
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            },
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "type": "block",
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:            "vg_name": "ceph_vg2"
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:        }
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]:    ]
Dec  2 05:52:10 np0005542249 vibrant_jones[104662]: }
Dec  2 05:52:10 np0005542249 systemd[1]: libpod-787393baad624563aa2ee5d3584373bce7d165bf23fc52df16cf2185df1de7d3.scope: Deactivated successfully.
Dec  2 05:52:10 np0005542249 podman[104646]: 2025-12-02 10:52:10.754852147 +0000 UTC m=+0.977439944 container died 787393baad624563aa2ee5d3584373bce7d165bf23fc52df16cf2185df1de7d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jones, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:52:10 np0005542249 systemd[1]: var-lib-containers-storage-overlay-d30b2c9a757860609e04049515a44f7c22208e746e4010dc17d76cfce62fd979-merged.mount: Deactivated successfully.
Dec  2 05:52:10 np0005542249 podman[104646]: 2025-12-02 10:52:10.814313441 +0000 UTC m=+1.036901238 container remove 787393baad624563aa2ee5d3584373bce7d165bf23fc52df16cf2185df1de7d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jones, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:52:10 np0005542249 systemd[1]: libpod-conmon-787393baad624563aa2ee5d3584373bce7d165bf23fc52df16cf2185df1de7d3.scope: Deactivated successfully.
Dec  2 05:52:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v127: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.8 KiB/s wr, 95 op/s
Dec  2 05:52:11 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Dec  2 05:52:11 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Dec  2 05:52:11 np0005542249 podman[104825]: 2025-12-02 10:52:11.456481094 +0000 UTC m=+0.047403123 container create e0588a811b9a3b07b18f9114edd7dd49e55045039655c7234067cc1e55ce5352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:52:11 np0005542249 systemd[1]: Started libpod-conmon-e0588a811b9a3b07b18f9114edd7dd49e55045039655c7234067cc1e55ce5352.scope.
Dec  2 05:52:11 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:52:11 np0005542249 podman[104825]: 2025-12-02 10:52:11.43919254 +0000 UTC m=+0.030114569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:52:11 np0005542249 podman[104825]: 2025-12-02 10:52:11.540800825 +0000 UTC m=+0.131722884 container init e0588a811b9a3b07b18f9114edd7dd49e55045039655c7234067cc1e55ce5352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chebyshev, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  2 05:52:11 np0005542249 podman[104825]: 2025-12-02 10:52:11.547173036 +0000 UTC m=+0.138095055 container start e0588a811b9a3b07b18f9114edd7dd49e55045039655c7234067cc1e55ce5352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  2 05:52:11 np0005542249 podman[104825]: 2025-12-02 10:52:11.551043399 +0000 UTC m=+0.141965418 container attach e0588a811b9a3b07b18f9114edd7dd49e55045039655c7234067cc1e55ce5352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chebyshev, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Dec  2 05:52:11 np0005542249 systemd[1]: libpod-e0588a811b9a3b07b18f9114edd7dd49e55045039655c7234067cc1e55ce5352.scope: Deactivated successfully.
Dec  2 05:52:11 np0005542249 elegant_chebyshev[104840]: 167 167
Dec  2 05:52:11 np0005542249 podman[104825]: 2025-12-02 10:52:11.554704468 +0000 UTC m=+0.145626487 container died e0588a811b9a3b07b18f9114edd7dd49e55045039655c7234067cc1e55ce5352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chebyshev, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:52:11 np0005542249 conmon[104840]: conmon e0588a811b9a3b07b18f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e0588a811b9a3b07b18f9114edd7dd49e55045039655c7234067cc1e55ce5352.scope/container/memory.events
Dec  2 05:52:11 np0005542249 systemd[1]: var-lib-containers-storage-overlay-71ac049ffd493d780c2931791345f46c4187512c8ccbde2ed39f8591e824ba60-merged.mount: Deactivated successfully.
Dec  2 05:52:11 np0005542249 podman[104825]: 2025-12-02 10:52:11.610771211 +0000 UTC m=+0.201693260 container remove e0588a811b9a3b07b18f9114edd7dd49e55045039655c7234067cc1e55ce5352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  2 05:52:11 np0005542249 systemd[1]: libpod-conmon-e0588a811b9a3b07b18f9114edd7dd49e55045039655c7234067cc1e55ce5352.scope: Deactivated successfully.
Dec  2 05:52:11 np0005542249 podman[104863]: 2025-12-02 10:52:11.852660478 +0000 UTC m=+0.062216840 container create 90210b71aa0a6c7748ad85fa13586d83cfa653aec4e2b1a84e62b58bb688cb97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  2 05:52:11 np0005542249 systemd[1]: Started libpod-conmon-90210b71aa0a6c7748ad85fa13586d83cfa653aec4e2b1a84e62b58bb688cb97.scope.
Dec  2 05:52:11 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:52:11 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac8a37d9392a7308c7bca653eeedd434bbaf7dcd9b9e0c46512b7d6b28832a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:11 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac8a37d9392a7308c7bca653eeedd434bbaf7dcd9b9e0c46512b7d6b28832a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:11 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac8a37d9392a7308c7bca653eeedd434bbaf7dcd9b9e0c46512b7d6b28832a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:11 np0005542249 podman[104863]: 2025-12-02 10:52:11.830520904 +0000 UTC m=+0.040077296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:52:11 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac8a37d9392a7308c7bca653eeedd434bbaf7dcd9b9e0c46512b7d6b28832a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:12 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec  2 05:52:12 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec  2 05:52:12 np0005542249 podman[104863]: 2025-12-02 10:52:12.190163079 +0000 UTC m=+0.399719451 container init 90210b71aa0a6c7748ad85fa13586d83cfa653aec4e2b1a84e62b58bb688cb97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcclintock, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:52:12 np0005542249 podman[104863]: 2025-12-02 10:52:12.202648304 +0000 UTC m=+0.412204656 container start 90210b71aa0a6c7748ad85fa13586d83cfa653aec4e2b1a84e62b58bb688cb97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:52:12 np0005542249 podman[104863]: 2025-12-02 10:52:12.206527009 +0000 UTC m=+0.416083421 container attach 90210b71aa0a6c7748ad85fa13586d83cfa653aec4e2b1a84e62b58bb688cb97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcclintock, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  2 05:52:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v128: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 KiB/s wr, 91 op/s
Dec  2 05:52:13 np0005542249 upbeat_mcclintock[104880]: {
Dec  2 05:52:13 np0005542249 upbeat_mcclintock[104880]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 05:52:13 np0005542249 upbeat_mcclintock[104880]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:52:13 np0005542249 upbeat_mcclintock[104880]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 05:52:13 np0005542249 upbeat_mcclintock[104880]:        "osd_id": 0,
Dec  2 05:52:13 np0005542249 upbeat_mcclintock[104880]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 05:52:13 np0005542249 upbeat_mcclintock[104880]:        "type": "bluestore"
Dec  2 05:52:13 np0005542249 upbeat_mcclintock[104880]:    },
Dec  2 05:52:13 np0005542249 upbeat_mcclintock[104880]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 05:52:13 np0005542249 upbeat_mcclintock[104880]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:52:13 np0005542249 upbeat_mcclintock[104880]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 05:52:13 np0005542249 upbeat_mcclintock[104880]:        "osd_id": 2,
Dec  2 05:52:13 np0005542249 upbeat_mcclintock[104880]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 05:52:13 np0005542249 upbeat_mcclintock[104880]:        "type": "bluestore"
Dec  2 05:52:13 np0005542249 upbeat_mcclintock[104880]:    },
Dec  2 05:52:13 np0005542249 upbeat_mcclintock[104880]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 05:52:13 np0005542249 upbeat_mcclintock[104880]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:52:13 np0005542249 upbeat_mcclintock[104880]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 05:52:13 np0005542249 upbeat_mcclintock[104880]:        "osd_id": 1,
Dec  2 05:52:13 np0005542249 upbeat_mcclintock[104880]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 05:52:13 np0005542249 upbeat_mcclintock[104880]:        "type": "bluestore"
Dec  2 05:52:13 np0005542249 upbeat_mcclintock[104880]:    }
Dec  2 05:52:13 np0005542249 upbeat_mcclintock[104880]: }
Dec  2 05:52:13 np0005542249 systemd[1]: libpod-90210b71aa0a6c7748ad85fa13586d83cfa653aec4e2b1a84e62b58bb688cb97.scope: Deactivated successfully.
Dec  2 05:52:13 np0005542249 podman[104863]: 2025-12-02 10:52:13.330578723 +0000 UTC m=+1.540135115 container died 90210b71aa0a6c7748ad85fa13586d83cfa653aec4e2b1a84e62b58bb688cb97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:52:13 np0005542249 systemd[1]: libpod-90210b71aa0a6c7748ad85fa13586d83cfa653aec4e2b1a84e62b58bb688cb97.scope: Consumed 1.135s CPU time.
Dec  2 05:52:13 np0005542249 systemd[1]: var-lib-containers-storage-overlay-1ac8a37d9392a7308c7bca653eeedd434bbaf7dcd9b9e0c46512b7d6b28832a9-merged.mount: Deactivated successfully.
Dec  2 05:52:13 np0005542249 podman[104863]: 2025-12-02 10:52:13.395585607 +0000 UTC m=+1.605141999 container remove 90210b71aa0a6c7748ad85fa13586d83cfa653aec4e2b1a84e62b58bb688cb97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 05:52:13 np0005542249 systemd[1]: libpod-conmon-90210b71aa0a6c7748ad85fa13586d83cfa653aec4e2b1a84e62b58bb688cb97.scope: Deactivated successfully.
Dec  2 05:52:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:52:13 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:52:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:52:13 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:52:13 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 1e763db6-9e22-4c46-bb00-135b9ce16e03 does not exist
Dec  2 05:52:13 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev f0cd5f30-9dbb-486a-b86e-e0e49eb5e7d1 does not exist
Dec  2 05:52:13 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Dec  2 05:52:14 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Dec  2 05:52:14 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:52:14 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:52:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:52:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v129: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:52:15 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Dec  2 05:52:15 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Dec  2 05:52:16 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Dec  2 05:52:16 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Dec  2 05:52:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v130: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:52:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v131: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:52:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:52:20 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Dec  2 05:52:20 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Dec  2 05:52:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v132: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:52:22 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Dec  2 05:52:22 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Dec  2 05:52:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v133: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:52:23 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Dec  2 05:52:23 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Dec  2 05:52:23 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Dec  2 05:52:23 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Dec  2 05:52:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:52:24 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec  2 05:52:24 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec  2 05:52:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v134: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:52:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_10:52:26
Dec  2 05:52:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 05:52:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 05:52:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['images', 'volumes', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', '.mgr']
Dec  2 05:52:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 05:52:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:52:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:52:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:52:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:52:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:52:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:52:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 05:52:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 05:52:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 05:52:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 05:52:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 05:52:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 05:52:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 05:52:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 05:52:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 05:52:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 05:52:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v135: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:52:27 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Dec  2 05:52:27 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Dec  2 05:52:28 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Dec  2 05:52:28 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Dec  2 05:52:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v136: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:52:29 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 6.a scrub starts
Dec  2 05:52:29 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 6.a scrub ok
Dec  2 05:52:29 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Dec  2 05:52:29 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Dec  2 05:52:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:52:30 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Dec  2 05:52:30 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Dec  2 05:52:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v137: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:52:31 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Dec  2 05:52:31 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Dec  2 05:52:31 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 7.11 deep-scrub starts
Dec  2 05:52:31 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 7.11 deep-scrub ok
Dec  2 05:52:32 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 05:52:32 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:52:32 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 05:52:32 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:52:32 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:52:32 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:52:32 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:52:32 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:52:32 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:52:32 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:52:32 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:52:32 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:52:32 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 05:52:32 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:52:32 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:52:32 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:52:32 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Dec  2 05:52:32 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:52:32 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Dec  2 05:52:32 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:52:32 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  2 05:52:32 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:52:32 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Dec  2 05:52:32 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Dec  2 05:52:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Dec  2 05:52:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec  2 05:52:32 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Dec  2 05:52:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Dec  2 05:52:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec  2 05:52:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Dec  2 05:52:32 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Dec  2 05:52:32 np0005542249 ceph-mgr[75372]: [progress INFO root] update: starting ev ec1da064-9fe5-40e5-8652-9fd08c216479 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec  2 05:52:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Dec  2 05:52:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec  2 05:52:32 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Dec  2 05:52:32 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Dec  2 05:52:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v139: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:52:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  2 05:52:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  2 05:52:33 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec  2 05:52:33 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec  2 05:52:33 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec  2 05:52:33 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  2 05:52:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Dec  2 05:52:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec  2 05:52:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec  2 05:52:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Dec  2 05:52:33 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Dec  2 05:52:33 np0005542249 ceph-mgr[75372]: [progress INFO root] update: starting ev 42d758d3-d053-4e08-88b7-a98f681f1e61 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec  2 05:52:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Dec  2 05:52:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec  2 05:52:34 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec  2 05:52:34 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec  2 05:52:34 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec  2 05:52:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Dec  2 05:52:34 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec  2 05:52:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Dec  2 05:52:34 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Dec  2 05:52:34 np0005542249 ceph-mgr[75372]: [progress INFO root] update: starting ev 47860657-0d97-4fb2-8870-2fab8cfab875 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec  2 05:52:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Dec  2 05:52:34 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  2 05:52:34 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Dec  2 05:52:34 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Dec  2 05:52:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:52:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  2 05:52:35 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  2 05:52:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  2 05:52:35 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  2 05:52:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Dec  2 05:52:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v142: 228 pgs: 31 unknown, 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 56 pg[8.0( v 47'4 (0'0,47'4] local-lis/les=46/47 n=4 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=11.588355064s) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 47'3 mlcod 47'3 active pruub 136.742828369s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:35 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec  2 05:52:35 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec  2 05:52:35 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec  2 05:52:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.0( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=11.588355064s) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 47'3 mlcod 0'0 unknown pruub 136.742828369s@ mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.b( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.a( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.9( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.16( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.3( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=1 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.8( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.18( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.1d( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.14( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.12( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.11( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.1a( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.13( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.1c( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.d( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.1f( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.e( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.10( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.1e( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.f( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.1( v 47'4 (0'0,47'4] local-lis/les=46/47 n=1 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.4( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=1 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.1b( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.19( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.5( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.17( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.2( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=1 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.6( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.15( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.7( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 57 pg[8.c( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-mgr[75372]: [progress INFO root] update: starting ev 68d7c5f4-94eb-4ead-9850-3dc2fea9b72d (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec  2 05:52:35 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 58 pg[10.0( v 51'16 (0'0,51'16] local-lis/les=50/51 n=8 ec=50/50 lis/c=50/50 les/c/f=51/51/0 sis=58 pruub=8.650691032s) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 51'15 mlcod 51'15 active pruub 127.868927002s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:35 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec  2 05:52:35 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  2 05:52:35 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  2 05:52:35 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  2 05:52:35 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 58 pg[10.0( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=50/50 lis/c=50/50 les/c/f=51/51/0 sis=58 pruub=8.650691032s) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 51'15 mlcod 0'0 unknown pruub 127.868927002s@ mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:35 np0005542249 ceph-mgr[75372]: [progress INFO root] complete: finished ev ec1da064-9fe5-40e5-8652-9fd08c216479 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec  2 05:52:35 np0005542249 ceph-mgr[75372]: [progress INFO root] Completed event ec1da064-9fe5-40e5-8652-9fd08c216479 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Dec  2 05:52:35 np0005542249 ceph-mgr[75372]: [progress INFO root] complete: finished ev 42d758d3-d053-4e08-88b7-a98f681f1e61 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec  2 05:52:35 np0005542249 ceph-mgr[75372]: [progress INFO root] Completed event 42d758d3-d053-4e08-88b7-a98f681f1e61 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Dec  2 05:52:35 np0005542249 ceph-mgr[75372]: [progress INFO root] complete: finished ev 47860657-0d97-4fb2-8870-2fab8cfab875 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec  2 05:52:35 np0005542249 ceph-mgr[75372]: [progress INFO root] Completed event 47860657-0d97-4fb2-8870-2fab8cfab875 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Dec  2 05:52:35 np0005542249 ceph-mgr[75372]: [progress INFO root] complete: finished ev 68d7c5f4-94eb-4ead-9850-3dc2fea9b72d (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec  2 05:52:35 np0005542249 ceph-mgr[75372]: [progress INFO root] Completed event 68d7c5f4-94eb-4ead-9850-3dc2fea9b72d (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Dec  2 05:52:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Dec  2 05:52:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Dec  2 05:52:36 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.d( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.1b( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.1e( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.b( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.12( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.a( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.11( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.10( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 58 pg[9.0( v 54'385 (0'0,54'385] local-lis/les=48/49 n=177 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=58 pruub=13.478665352s) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 54'384 mlcod 54'384 active pruub 139.661895752s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.1f( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.1d( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.1c( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.13( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.19( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.18( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.1a( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.7( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=1 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.6( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=1 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.5( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=1 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.4( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=1 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.8( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=1 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.f( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.9( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.c( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.1( v 51'16 (0'0,51'16] local-lis/les=50/51 n=1 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.e( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.2( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=1 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.3( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=1 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.14( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.16( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec  2 05:52:36 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.15( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.17( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.d( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.14( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.15( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.16( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.17( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.1( v 47'4 (0'0,47'4] local-lis/les=56/59 n=1 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.10( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.2( v 47'4 (0'0,47'4] local-lis/les=56/59 n=1 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.3( v 47'4 (0'0,47'4] local-lis/les=56/59 n=1 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.c( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.d( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.e( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.8( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.a( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.b( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.f( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.9( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.0( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 47'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.7( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.6( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.5( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.1e( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.1b( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.12( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.b( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.10( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.a( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.11( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.1d( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.1f( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.1c( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.13( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.19( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.6( v 51'16 (0'0,51'16] local-lis/les=58/59 n=1 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.7( v 51'16 (0'0,51'16] local-lis/les=58/59 n=1 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.18( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.5( v 51'16 (0'0,51'16] local-lis/les=58/59 n=1 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.1a( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.4( v 51'16 (0'0,51'16] local-lis/les=58/59 n=1 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.8( v 51'16 (0'0,51'16] local-lis/les=58/59 n=1 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.9( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.f( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.0( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=50/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 51'15 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.1( v 51'16 (0'0,51'16] local-lis/les=58/59 n=1 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.c( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.2( v 51'16 (0'0,51'16] local-lis/les=58/59 n=1 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.3( v 51'16 (0'0,51'16] local-lis/les=58/59 n=1 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.15( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.16( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.17( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.e( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.4( v 47'4 (0'0,47'4] local-lis/les=56/59 n=1 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.1b( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.19( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.18( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.1d( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.1f( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.13( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.12( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.1c( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.1a( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.1e( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[8.11( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 59 pg[10.14( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=50/50 les/c/f=51/51/0 sis=58) [2] r=0 lpr=58 pi=[50,58)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.0( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=58 pruub=13.478665352s) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 54'384 mlcod 0'0 unknown pruub 139.661895752s@ mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.2( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.1( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.3( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.4( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.5( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.6( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.7( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.8( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.9( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.a( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.b( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.c( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.d( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.e( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.f( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.10( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.11( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.12( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.13( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.14( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.15( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.16( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.17( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.18( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.19( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.1a( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.1b( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.1c( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.1d( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.1e( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:36 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 59 pg[9.1f( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v145: 290 pgs: 32 peering, 31 unknown, 227 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:52:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  2 05:52:37 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  2 05:52:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Dec  2 05:52:37 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  2 05:52:37 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  2 05:52:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Dec  2 05:52:37 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[11.0( empty local-lis/les=52/53 n=0 ec=52/52 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=9.026363373s) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active pruub 136.238876343s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[11.0( empty local-lis/les=52/53 n=0 ec=52/52 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=9.026363373s) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown pruub 136.238876343s@ mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.14( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.0( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 54'384 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.2( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.a( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.4( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.1a( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.12( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.10( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 60 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [1] r=0 lpr=58 pi=[48,58)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:37 np0005542249 ceph-mgr[75372]: [progress INFO root] Writing back 17 completed events
Dec  2 05:52:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  2 05:52:37 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:52:38 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 6.16 deep-scrub starts
Dec  2 05:52:38 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 6.16 deep-scrub ok
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 5.18 deep-scrub starts
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 5.18 deep-scrub ok
Dec  2 05:52:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Dec  2 05:52:38 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  2 05:52:38 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:52:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Dec  2 05:52:38 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.17( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.16( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.15( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.13( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.2( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.1( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.14( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.f( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.e( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.d( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.b( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.9( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.c( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.8( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.a( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.3( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.4( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.5( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.6( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.7( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.18( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.1a( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.1b( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.1c( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.1d( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.1e( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.1f( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.10( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.11( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.12( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.19( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.17( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.15( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.13( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.0( empty local-lis/les=60/61 n=0 ec=52/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.16( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.14( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.f( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.e( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.d( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.9( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.b( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.c( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.2( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.1( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.a( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.8( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.3( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.4( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.5( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.6( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.7( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.18( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.1a( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.1d( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.10( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.1f( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.1b( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.11( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.1c( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.12( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.19( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:38 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 61 pg[11.1e( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [1] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v148: 321 pgs: 32 peering, 31 unknown, 258 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:52:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:52:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v149: 321 pgs: 32 peering, 31 unknown, 258 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:52:41 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Dec  2 05:52:41 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Dec  2 05:52:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v150: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:52:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  2 05:52:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  2 05:52:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  2 05:52:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  2 05:52:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Dec  2 05:52:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  2 05:52:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  2 05:52:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Dec  2 05:52:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Dec  2 05:52:43 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  2 05:52:43 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  2 05:52:43 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  2 05:52:43 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  2 05:52:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  2 05:52:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  2 05:52:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  2 05:52:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  2 05:52:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Dec  2 05:52:43 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.17( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.947601318s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 144.232040405s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.14( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.904088020s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 142.188598633s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.17( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.947521210s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.232040405s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.14( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.903973579s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.188598633s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.15( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.903791428s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 142.188598633s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.15( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.903764725s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.188598633s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.15( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.951070786s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 144.236007690s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.14( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.951086998s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 144.236114502s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.937936783s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 143.222946167s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.14( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.951061249s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.236114502s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.15( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.950970650s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.236007690s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.937857628s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.222946167s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.10( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.909417152s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 142.194732666s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.930899620s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 143.216735840s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.2( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.950417519s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 144.236297607s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.2( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.950385094s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.236297607s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.930826187s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.216735840s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.10( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.908746719s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.194732666s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.1( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.950201035s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 144.236328125s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.1( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.950174332s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.236328125s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.2( v 47'4 (0'0,47'4] local-lis/les=56/59 n=1 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.908555031s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 142.194854736s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.2( v 47'4 (0'0,47'4] local-lis/les=56/59 n=1 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.908533096s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.194854736s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.936691284s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 143.223083496s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.936639786s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.223083496s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.c( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.908342361s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 142.195037842s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.c( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.908313751s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.195037842s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.936370850s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 143.223175049s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.d( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.908151627s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 142.195053101s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.e( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.949226379s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 144.236145020s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.d( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.908122063s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.195053101s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.936317444s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.223175049s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.e( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.949166298s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.236145020s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.d( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.949131966s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 144.236206055s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.d( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.949111938s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.236206055s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.f( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.948904991s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 144.236114502s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.e( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.907860756s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 142.195159912s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.935816765s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 143.223144531s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.935751915s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.223144531s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.f( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.948836327s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.236114502s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.e( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.907778740s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.195159912s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.b( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.948790550s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 144.236251831s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.b( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.948760986s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.236251831s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.935743332s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 143.223327637s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.935722351s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.223327637s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.9( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.948484421s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 144.236206055s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.935514450s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 143.223281860s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.9( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.948439598s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.236206055s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.935485840s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.223281860s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.8( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.948396683s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 144.236312866s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.8( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.948373795s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.236312866s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.f( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.907336235s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 142.195297241s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.b( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.907226562s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 142.195251465s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.b( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.907182693s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.195251465s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.f( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.907258034s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.195297241s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.9( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.907116890s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 142.195327759s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.9( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.907089233s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.195327759s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.934701920s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 143.222961426s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.4( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.948027611s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 144.236389160s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.935097694s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 143.223495483s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.4( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.948005676s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.236389160s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.3( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.947966576s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 144.236373901s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.935056686s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.223495483s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.3( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.947927475s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.236373901s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.934514046s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.222961426s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.6( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.906811714s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 142.195404053s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.934864044s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 143.223541260s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.6( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.906787872s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.195404053s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.934804916s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.223541260s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.6( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.947653770s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 144.236434937s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.6( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.947613716s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.236434937s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.934666634s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 143.223587036s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.934639931s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.223587036s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.18( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.947409630s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 144.236480713s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.1a( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.947448730s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 144.236526489s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.1b( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.910440445s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 142.199523926s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.18( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.947376251s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.236480713s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.1a( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.947422028s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.236526489s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.1b( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.910401344s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.199523926s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.934420586s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 143.223663330s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.1b( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.947273254s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 144.236511230s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.934402466s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.223663330s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.18( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.910430908s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 142.199768066s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.1b( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.947210312s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.236511230s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.18( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.910401344s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.199768066s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.4( v 47'4 (0'0,47'4] local-lis/les=56/59 n=1 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.910034180s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 142.199401855s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.1c( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.947055817s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 144.236511230s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.1c( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.947015762s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.236511230s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.1f( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.910302162s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 142.199813843s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.1f( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.910222054s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.199813843s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.934105873s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 143.223785400s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.934081078s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.223785400s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.4( v 47'4 (0'0,47'4] local-lis/les=56/59 n=1 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.909924507s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.199401855s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.1e( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.954379082s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 144.244216919s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.1e( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.954357147s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.244216919s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.1d( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.909758568s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 142.199783325s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.1f( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.946586609s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 144.236633301s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.1c( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.909775734s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 142.199829102s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.1c( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.909744263s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.199829102s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.1d( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.909693718s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.199783325s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.1f( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.946433067s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.236633301s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.10( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.946233749s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 144.236587524s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.10( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.946211815s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.236587524s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.933382034s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 143.223861694s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.11( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.946283340s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 144.236724854s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.11( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.946191788s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.236724854s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.933338165s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.223861694s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.12( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.909459114s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 142.200042725s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.12( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.909432411s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.200042725s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.12( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.953325272s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 144.244094849s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.12( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.953293800s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.244094849s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.11( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.909196854s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 142.200042725s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.11( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.909174919s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.200042725s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.19( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.953152657s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 144.244110107s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[11.19( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.953130722s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.244110107s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.939017296s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 143.230026245s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.938986778s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.230026245s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.1a( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.908880234s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 142.200057983s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[8.1a( v 47'4 (0'0,47'4] local-lis/les=56/59 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62 pruub=8.908823967s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.200057983s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.938537598s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 143.230041504s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62 pruub=9.937902451s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.230041504s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[8.15( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[11.15( empty local-lis/les=0/0 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[11.10( empty local-lis/les=0/0 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[11.2( empty local-lis/les=0/0 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[9.13( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[11.3( empty local-lis/les=0/0 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[8.2( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[8.10( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[11.d( empty local-lis/les=0/0 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[11.8( empty local-lis/les=0/0 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[8.d( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[9.11( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[11.9( empty local-lis/les=0/0 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[8.4( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[9.5( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[11.18( empty local-lis/les=0/0 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[8.b( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[8.1b( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[11.4( empty local-lis/les=0/0 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[9.b( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[11.1b( empty local-lis/les=0/0 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[11.1c( empty local-lis/les=0/0 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[11.1e( empty local-lis/les=0/0 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[11.11( empty local-lis/les=0/0 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[11.14( empty local-lis/les=0/0 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[8.12( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[11.12( empty local-lis/les=0/0 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[8.11( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[11.b( empty local-lis/les=0/0 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[8.6( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[11.1a( empty local-lis/les=0/0 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[11.1f( empty local-lis/les=0/0 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[8.9( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[8.1c( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[11.6( empty local-lis/les=0/0 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.1e( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.891104698s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 136.245941162s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.b( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.891272545s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 136.246170044s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.1e( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.891054153s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.245941162s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.b( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.891243935s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.246170044s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[9.9( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.d( v 61'17 (0'0,61'17] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.880913734s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 51'16 mlcod 51'16 active pruub 136.235931396s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.13( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.891354561s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 136.246444702s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.13( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.891288757s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.246444702s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.12( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.890852928s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 136.246047974s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.12( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.890832901s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.246047974s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.d( v 61'17 (0'0,61'17] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.880632401s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 51'16 mlcod 0'0 unknown NOTIFY pruub 136.235931396s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[8.f( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.11( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.890903473s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 136.246307373s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.11( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.890807152s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.246307373s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[11.e( empty local-lis/les=0/0 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.10( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.890642166s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 136.246185303s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.10( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.890501976s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.246185303s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.1a( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.890852928s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 136.246627808s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.19( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.890593529s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 136.246459961s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[8.e( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.19( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.890568733s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.246459961s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[8.c( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.7( v 51'16 (0'0,51'16] local-lis/les=58/59 n=1 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.890335083s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 136.246582031s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.1a( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.890817642s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.246627808s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.7( v 51'16 (0'0,51'16] local-lis/les=58/59 n=1 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.890307426s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.246582031s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.6( v 51'16 (0'0,51'16] local-lis/les=58/59 n=1 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.890070915s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 136.246536255s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.4( v 51'16 (0'0,51'16] local-lis/les=58/59 n=1 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.890204430s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 136.246704102s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.4( v 51'16 (0'0,51'16] local-lis/les=58/59 n=1 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.890151024s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.246704102s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.6( v 51'16 (0'0,51'16] local-lis/les=58/59 n=1 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.889957428s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.246536255s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[9.d( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.8( v 51'16 (0'0,51'16] local-lis/les=58/59 n=1 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.889814377s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 136.246658325s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.8( v 51'16 (0'0,51'16] local-lis/les=58/59 n=1 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.889795303s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.246658325s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.f( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.889754295s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 136.246734619s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.f( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.889739037s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.246734619s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[11.f( empty local-lis/les=0/0 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.9( v 61'17 (0'0,61'17] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.888958931s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 51'16 mlcod 51'16 active pruub 136.246719360s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.9( v 61'17 (0'0,61'17] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.888925552s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 51'16 mlcod 0'0 unknown NOTIFY pruub 136.246719360s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[9.1( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.e( v 61'17 (0'0,61'17] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.888979912s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 51'16 mlcod 51'16 active pruub 136.246978760s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.e( v 61'17 (0'0,61'17] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.888953209s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 51'16 mlcod 0'0 unknown NOTIFY pruub 136.246978760s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.1( v 51'16 (0'0,51'16] local-lis/les=58/59 n=1 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.888639450s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 136.246826172s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.1( v 51'16 (0'0,51'16] local-lis/les=58/59 n=1 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.888607025s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.246826172s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[11.1( empty local-lis/les=0/0 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[10.b( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[10.13( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[10.12( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[10.11( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[9.3( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.2( v 51'16 (0'0,51'16] local-lis/les=58/59 n=1 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.888536453s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 136.246917725s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.2( v 51'16 (0'0,51'16] local-lis/les=58/59 n=1 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.888517380s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.246917725s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.15( v 61'17 (0'0,61'17] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.888252258s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 51'16 mlcod 51'16 active pruub 136.246978760s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.15( v 61'17 (0'0,61'17] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.888207436s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 51'16 mlcod 0'0 unknown NOTIFY pruub 136.246978760s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.14( v 61'17 (0'0,61'17] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.891582489s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 51'16 mlcod 51'16 active pruub 136.250503540s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.14( v 61'17 (0'0,61'17] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.891549110s) [1] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 51'16 mlcod 0'0 unknown NOTIFY pruub 136.250503540s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.16( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.888010025s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 136.246978760s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.17( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.887994766s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 136.246994019s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.17( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.887962341s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.246994019s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 62 pg[10.16( v 51'16 (0'0,51'16] local-lis/les=58/59 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=8.887945175s) [0] r=-1 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.246978760s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[9.1d( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[9.19( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[10.10( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[10.19( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[10.1a( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[10.2( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[8.18( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[11.19( empty local-lis/les=0/0 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[10.f( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[9.1b( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[10.6( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[8.1a( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 62 pg[10.14( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[11.17( empty local-lis/les=0/0 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[8.14( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[9.15( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[8.1f( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[8.1d( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[10.1e( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[10.d( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[10.7( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[10.4( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[10.8( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[10.9( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[10.e( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[10.1( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[10.15( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[10.17( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 62 pg[10.16( empty local-lis/les=0/0 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Dec  2 05:52:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Dec  2 05:52:44 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  2 05:52:44 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  2 05:52:44 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  2 05:52:44 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  2 05:52:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Dec  2 05:52:44 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Dec  2 05:52:44 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 63 pg[8.1c( v 47'4 (0'0,47'4] local-lis/les=62/63 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.11( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.11( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.15( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.15( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.d( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.3( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.3( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.d( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.9( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.9( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.b( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.b( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.1b( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.1b( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.19( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.19( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.1d( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.1d( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.1( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.1( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.5( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.5( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.13( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 63 pg[11.1f( empty local-lis/les=62/63 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[9.13( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[58,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:44 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 63 pg[11.1a( empty local-lis/les=62/63 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 63 pg[11.b( empty local-lis/les=62/63 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[10.10( v 51'16 (0'0,51'16] local-lis/les=62/63 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 63 pg[8.11( v 47'4 (0'0,47'4] local-lis/les=62/63 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[10.1a( v 51'16 (0'0,51'16] local-lis/les=62/63 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 63 pg[11.12( empty local-lis/les=62/63 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[10.11( v 51'16 (0'0,51'16] local-lis/les=62/63 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 63 pg[11.11( empty local-lis/les=62/63 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 63 pg[11.1e( empty local-lis/les=62/63 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 63 pg[11.1c( empty local-lis/les=62/63 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 63 pg[11.1b( empty local-lis/les=62/63 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 63 pg[11.18( empty local-lis/les=62/63 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 63 pg[8.1b( v 47'4 (0'0,47'4] local-lis/les=62/63 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 63 pg[8.4( v 47'4 (0'0,47'4] local-lis/les=62/63 n=1 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 63 pg[11.8( empty local-lis/les=62/63 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 63 pg[11.9( empty local-lis/les=62/63 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 63 pg[11.d( empty local-lis/les=62/63 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 63 pg[8.12( v 47'4 (0'0,47'4] local-lis/les=62/63 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 63 pg[8.d( v 47'4 (0'0,47'4] local-lis/les=62/63 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 63 pg[8.2( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=62/63 n=1 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[56,62)/1 crt=47'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 63 pg[11.3( empty local-lis/les=62/63 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 63 pg[11.15( empty local-lis/les=62/63 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 63 pg[8.15( v 47'4 (0'0,47'4] local-lis/les=62/63 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 63 pg[11.2( empty local-lis/les=62/63 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[10.7( v 51'16 (0'0,51'16] local-lis/les=62/63 n=1 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[10.8( v 51'16 (0'0,51'16] local-lis/les=62/63 n=1 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[10.4( v 51'16 (0'0,51'16] local-lis/les=62/63 n=1 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[10.17( v 51'16 (0'0,51'16] local-lis/les=62/63 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[10.9( v 61'17 lc 51'15 (0'0,61'17] local-lis/les=62/63 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=61'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[10.15( v 61'17 lc 51'5 (0'0,61'17] local-lis/les=62/63 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=61'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[10.e( v 61'17 lc 51'7 (0'0,61'17] local-lis/les=62/63 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=61'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[10.1( v 51'16 (0'0,51'16] local-lis/les=62/63 n=1 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[10.1e( v 51'16 (0'0,51'16] local-lis/les=62/63 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[10.16( v 51'16 (0'0,51'16] local-lis/les=62/63 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[10.d( v 61'17 lc 51'9 (0'0,61'17] local-lis/les=62/63 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=61'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[10.19( v 51'16 (0'0,51'16] local-lis/les=62/63 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[10.6( v 51'16 (0'0,51'16] local-lis/les=62/63 n=1 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[10.2( v 51'16 (0'0,51'16] local-lis/les=62/63 n=1 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[10.b( v 51'16 (0'0,51'16] local-lis/les=62/63 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[10.13( v 51'16 (0'0,51'16] local-lis/les=62/63 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[10.f( v 51'16 (0'0,51'16] local-lis/les=62/63 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[10.12( v 51'16 (0'0,51'16] local-lis/les=62/63 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 63 pg[10.14( v 61'17 lc 51'13 (0'0,61'17] local-lis/les=62/63 n=0 ec=58/50 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=61'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[11.17( empty local-lis/les=62/63 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[11.14( empty local-lis/les=62/63 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[8.10( v 47'4 (0'0,47'4] local-lis/les=62/63 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[11.f( empty local-lis/les=62/63 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[8.e( v 47'4 (0'0,47'4] local-lis/les=62/63 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[11.e( empty local-lis/les=62/63 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[8.14( v 47'4 (0'0,47'4] local-lis/les=62/63 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[8.c( v 47'4 (0'0,47'4] local-lis/les=62/63 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[8.1d( v 47'4 (0'0,47'4] local-lis/les=62/63 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[8.1f( v 47'4 (0'0,47'4] local-lis/les=62/63 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[8.1a( v 47'4 (0'0,47'4] local-lis/les=62/63 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[8.b( v 47'4 (0'0,47'4] local-lis/les=62/63 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[11.19( empty local-lis/les=62/63 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[8.f( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=62/63 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=47'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[8.18( v 47'4 (0'0,47'4] local-lis/les=62/63 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[8.9( v 47'4 (0'0,47'4] local-lis/les=62/63 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[11.6( empty local-lis/les=62/63 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[11.1( empty local-lis/les=62/63 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[8.6( v 47'4 (0'0,47'4] local-lis/les=62/63 n=0 ec=56/46 lis/c=56/56 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[11.10( empty local-lis/les=62/63 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 63 pg[11.4( empty local-lis/les=62/63 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:52:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v153: 321 pgs: 23 peering, 298 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:52:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Dec  2 05:52:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Dec  2 05:52:45 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Dec  2 05:52:46 np0005542249 python3[105000]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:52:46 np0005542249 podman[105001]: 2025-12-02 10:52:46.313298241 +0000 UTC m=+0.084410734 container create 606336402edf52926ed9f2675bc4541cc12d9b95f3dab214c208792d0bbbaefe (image=quay.io/ceph/ceph:v18, name=clever_mahavira, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  2 05:52:46 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Dec  2 05:52:46 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Dec  2 05:52:46 np0005542249 systemd[1]: Started libpod-conmon-606336402edf52926ed9f2675bc4541cc12d9b95f3dab214c208792d0bbbaefe.scope.
Dec  2 05:52:46 np0005542249 podman[105001]: 2025-12-02 10:52:46.280967894 +0000 UTC m=+0.052080447 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:52:46 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:52:46 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc2c185443c3f80d38df45a85a3f963d9fbba4247ac2d45c2a14cbd721e1aab6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:46 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc2c185443c3f80d38df45a85a3f963d9fbba4247ac2d45c2a14cbd721e1aab6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:46 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Dec  2 05:52:46 np0005542249 podman[105001]: 2025-12-02 10:52:46.411260508 +0000 UTC m=+0.182373061 container init 606336402edf52926ed9f2675bc4541cc12d9b95f3dab214c208792d0bbbaefe (image=quay.io/ceph/ceph:v18, name=clever_mahavira, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:52:46 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Dec  2 05:52:46 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 64 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:46 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 64 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=63/64 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:46 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 64 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:46 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 64 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=63/64 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:46 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 64 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:46 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 64 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:46 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 64 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:46 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 64 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:46 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 64 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:46 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 64 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:46 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 64 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=63/64 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:46 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 64 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=63/64 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:46 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 64 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:46 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 64 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=63/64 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:46 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 64 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=63/64 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:46 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 64 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=63/64 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[58,63)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:46 np0005542249 podman[105001]: 2025-12-02 10:52:46.42590411 +0000 UTC m=+0.197016583 container start 606336402edf52926ed9f2675bc4541cc12d9b95f3dab214c208792d0bbbaefe (image=quay.io/ceph/ceph:v18, name=clever_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  2 05:52:46 np0005542249 podman[105001]: 2025-12-02 10:52:46.429567549 +0000 UTC m=+0.200680052 container attach 606336402edf52926ed9f2675bc4541cc12d9b95f3dab214c208792d0bbbaefe (image=quay.io/ceph/ceph:v18, name=clever_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  2 05:52:46 np0005542249 clever_mahavira[105017]: could not fetch user info: no user info saved
Dec  2 05:52:46 np0005542249 systemd[1]: libpod-606336402edf52926ed9f2675bc4541cc12d9b95f3dab214c208792d0bbbaefe.scope: Deactivated successfully.
Dec  2 05:52:46 np0005542249 podman[105001]: 2025-12-02 10:52:46.628769071 +0000 UTC m=+0.399881564 container died 606336402edf52926ed9f2675bc4541cc12d9b95f3dab214c208792d0bbbaefe (image=quay.io/ceph/ceph:v18, name=clever_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  2 05:52:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Dec  2 05:52:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Dec  2 05:52:46 np0005542249 systemd[1]: var-lib-containers-storage-overlay-cc2c185443c3f80d38df45a85a3f963d9fbba4247ac2d45c2a14cbd721e1aab6-merged.mount: Deactivated successfully.
Dec  2 05:52:46 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Dec  2 05:52:46 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 65 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=65) [0] r=0 lpr=65 pi=[58,65)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:46 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 65 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=65) [0] r=0 lpr=65 pi=[58,65)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:46 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 65 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=65) [0] r=0 lpr=65 pi=[58,65)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:46 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 65 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=65) [0] r=0 lpr=65 pi=[58,65)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:46 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 65 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=65) [0] r=0 lpr=65 pi=[58,65)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:46 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 65 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=65) [0] r=0 lpr=65 pi=[58,65)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:46 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 65 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=65) [0] r=0 lpr=65 pi=[58,65)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:46 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 65 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=65) [0] r=0 lpr=65 pi=[58,65)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:46 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 65 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=65 pruub=15.742933273s) [0] async=[0] r=-1 lpr=65 pi=[58,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 152.085235596s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:46 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 65 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=65 pruub=15.742671967s) [0] async=[0] r=-1 lpr=65 pi=[58,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 152.085006714s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:46 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 65 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=65 pruub=15.742864609s) [0] async=[0] r=-1 lpr=65 pi=[58,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 152.085250854s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:46 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 65 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=65 pruub=15.742822647s) [0] r=-1 lpr=65 pi=[58,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.085235596s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:46 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 65 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=65 pruub=15.742588043s) [0] r=-1 lpr=65 pi=[58,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.085006714s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:46 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 65 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=65 pruub=15.742795944s) [0] r=-1 lpr=65 pi=[58,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.085250854s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:46 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 65 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=63/64 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=65 pruub=15.741870880s) [0] async=[0] r=-1 lpr=65 pi=[58,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 152.085037231s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:46 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 65 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=63/64 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=65 pruub=15.741831779s) [0] r=-1 lpr=65 pi=[58,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.085037231s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:46 np0005542249 podman[105001]: 2025-12-02 10:52:46.685885053 +0000 UTC m=+0.456997516 container remove 606336402edf52926ed9f2675bc4541cc12d9b95f3dab214c208792d0bbbaefe (image=quay.io/ceph/ceph:v18, name=clever_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:52:46 np0005542249 systemd[1]: libpod-conmon-606336402edf52926ed9f2675bc4541cc12d9b95f3dab214c208792d0bbbaefe.scope: Deactivated successfully.
Dec  2 05:52:47 np0005542249 python3[105140]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 95bc4eaa-1a14-59bf-acf2-4b3da055547d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:52:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v156: 321 pgs: 1 active+recovering+remapped, 11 active+recovery_wait+remapped, 23 peering, 286 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s; 61/213 objects misplaced (28.638%); 27 B/s, 1 objects/s recovering
Dec  2 05:52:47 np0005542249 podman[105141]: 2025-12-02 10:52:47.106599756 +0000 UTC m=+0.058337646 container create 47b11125a421a95a416992c296f960eacbf35708c53d6d8d3497a8ca078e8a03 (image=quay.io/ceph/ceph:v18, name=vibrant_bouman, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:52:47 np0005542249 systemd[1]: Started libpod-conmon-47b11125a421a95a416992c296f960eacbf35708c53d6d8d3497a8ca078e8a03.scope.
Dec  2 05:52:47 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:52:47 np0005542249 podman[105141]: 2025-12-02 10:52:47.087139254 +0000 UTC m=+0.038877154 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  2 05:52:47 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bda2682114c5db08cb0dbaeb0fcaac0cbd38aadb7c74b886b0f3436b59faccb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:47 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bda2682114c5db08cb0dbaeb0fcaac0cbd38aadb7c74b886b0f3436b59faccb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:52:47 np0005542249 podman[105141]: 2025-12-02 10:52:47.196505307 +0000 UTC m=+0.148243187 container init 47b11125a421a95a416992c296f960eacbf35708c53d6d8d3497a8ca078e8a03 (image=quay.io/ceph/ceph:v18, name=vibrant_bouman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  2 05:52:47 np0005542249 podman[105141]: 2025-12-02 10:52:47.2044119 +0000 UTC m=+0.156149780 container start 47b11125a421a95a416992c296f960eacbf35708c53d6d8d3497a8ca078e8a03 (image=quay.io/ceph/ceph:v18, name=vibrant_bouman, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:52:47 np0005542249 podman[105141]: 2025-12-02 10:52:47.207493102 +0000 UTC m=+0.159230982 container attach 47b11125a421a95a416992c296f960eacbf35708c53d6d8d3497a8ca078e8a03 (image=quay.io/ceph/ceph:v18, name=vibrant_bouman, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]: {
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:    "user_id": "openstack",
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:    "display_name": "openstack",
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:    "email": "",
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:    "suspended": 0,
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:    "max_buckets": 1000,
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:    "subusers": [],
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:    "keys": [
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:        {
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:            "user": "openstack",
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:            "access_key": "6R923J33B3KKQPNC2QU0",
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:            "secret_key": "BSp63BkMsTysM5qLFDWDYLlBqNCgGefAoOBdntIG"
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:        }
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:    ],
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:    "swift_keys": [],
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:    "caps": [],
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:    "op_mask": "read, write, delete",
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:    "default_placement": "",
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:    "default_storage_class": "",
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:    "placement_tags": [],
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:    "bucket_quota": {
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:        "enabled": false,
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:        "check_on_raw": false,
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:        "max_size": -1,
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:        "max_size_kb": 0,
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:        "max_objects": -1
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:    },
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:    "user_quota": {
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:        "enabled": false,
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:        "check_on_raw": false,
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:        "max_size": -1,
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:        "max_size_kb": 0,
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:        "max_objects": -1
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:    },
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:    "temp_url_keys": [],
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:    "type": "rgw",
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]:    "mfa_ids": []
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]: }
Dec  2 05:52:47 np0005542249 vibrant_bouman[105157]: 
Dec  2 05:52:47 np0005542249 systemd[1]: libpod-47b11125a421a95a416992c296f960eacbf35708c53d6d8d3497a8ca078e8a03.scope: Deactivated successfully.
Dec  2 05:52:47 np0005542249 conmon[105157]: conmon 47b11125a421a95a4169 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-47b11125a421a95a416992c296f960eacbf35708c53d6d8d3497a8ca078e8a03.scope/container/memory.events
Dec  2 05:52:47 np0005542249 podman[105141]: 2025-12-02 10:52:47.420818133 +0000 UTC m=+0.372556053 container died 47b11125a421a95a416992c296f960eacbf35708c53d6d8d3497a8ca078e8a03 (image=quay.io/ceph/ceph:v18, name=vibrant_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:52:47 np0005542249 systemd[1]: var-lib-containers-storage-overlay-1bda2682114c5db08cb0dbaeb0fcaac0cbd38aadb7c74b886b0f3436b59faccb-merged.mount: Deactivated successfully.
Dec  2 05:52:47 np0005542249 podman[105141]: 2025-12-02 10:52:47.466242761 +0000 UTC m=+0.417980651 container remove 47b11125a421a95a416992c296f960eacbf35708c53d6d8d3497a8ca078e8a03 (image=quay.io/ceph/ceph:v18, name=vibrant_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 05:52:47 np0005542249 systemd[1]: libpod-conmon-47b11125a421a95a416992c296f960eacbf35708c53d6d8d3497a8ca078e8a03.scope: Deactivated successfully.
Dec  2 05:52:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Dec  2 05:52:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Dec  2 05:52:47 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:47 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 66 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=63/64 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66 pruub=14.730584145s) [0] async=[0] r=-1 lpr=66 pi=[58,66)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 152.086166382s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:47 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 66 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=63/64 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66 pruub=14.730197906s) [0] async=[0] r=-1 lpr=66 pi=[58,66)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 152.086273193s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:47 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 66 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=63/64 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66 pruub=14.730476379s) [0] async=[0] r=-1 lpr=66 pi=[58,66)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 152.086639404s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:47 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 66 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=63/64 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66 pruub=14.730073929s) [0] r=-1 lpr=66 pi=[58,66)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.086273193s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:47 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 66 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=63/64 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66 pruub=14.730364799s) [0] r=-1 lpr=66 pi=[58,66)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.086639404s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:47 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 66 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=63/64 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66 pruub=14.730348587s) [0] async=[0] r=-1 lpr=66 pi=[58,66)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 152.086608887s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:47 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 66 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66 pruub=14.728539467s) [0] async=[0] r=-1 lpr=66 pi=[58,66)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 152.085006714s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:47 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 66 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66 pruub=14.728454590s) [0] r=-1 lpr=66 pi=[58,66)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.085006714s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:47 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 66 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66 pruub=14.729692459s) [0] async=[0] r=-1 lpr=66 pi=[58,66)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 152.086349487s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:52:47 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 66 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=63/64 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66 pruub=14.730126381s) [0] r=-1 lpr=66 pi=[58,66)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.086608887s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:47 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 66 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66 pruub=14.729575157s) [0] r=-1 lpr=66 pi=[58,66)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.086349487s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:47 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 66 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66 pruub=14.728921890s) [0] async=[0] r=-1 lpr=66 pi=[58,66)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 152.085845947s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:47 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 66 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66 pruub=14.728643417s) [0] async=[0] r=-1 lpr=66 pi=[58,66)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 152.085739136s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:47 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 66 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66 pruub=14.728260994s) [0] async=[0] r=-1 lpr=66 pi=[58,66)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 152.085342407s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:47 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 66 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66 pruub=14.728719711s) [0] r=-1 lpr=66 pi=[58,66)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.085845947s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:47 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 66 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66 pruub=14.728582382s) [0] r=-1 lpr=66 pi=[58,66)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.085739136s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:47 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 66 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66 pruub=14.728142738s) [0] r=-1 lpr=66 pi=[58,66)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.085342407s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:47 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 66 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66 pruub=14.728549957s) [0] async=[0] r=-1 lpr=66 pi=[58,66)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 152.085937500s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:47 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 66 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=63/64 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66 pruub=14.730334282s) [0] r=-1 lpr=66 pi=[58,66)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.086166382s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:47 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 66 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=63/64 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66 pruub=14.728485107s) [0] r=-1 lpr=66 pi=[58,66)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.085937500s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:47 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 66 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=63/64 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66 pruub=14.728026390s) [0] async=[0] r=-1 lpr=66 pi=[58,66)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 152.086547852s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:47 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 66 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=63/64 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66 pruub=14.727974892s) [0] r=-1 lpr=66 pi=[58,66)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.086547852s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:47 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 66 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=63/64 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66 pruub=14.726248741s) [0] async=[0] r=-1 lpr=66 pi=[58,66)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 152.085205078s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=65/66 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=65) [0] r=0 lpr=65 pi=[58,65)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:47 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 66 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=63/64 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66 pruub=14.725562096s) [0] r=-1 lpr=66 pi=[58,66)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.085205078s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=65) [0] r=0 lpr=65 pi=[58,65)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=65/66 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=65) [0] r=0 lpr=65 pi=[58,65)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 66 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=65/66 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=65) [0] r=0 lpr=65 pi=[58,65)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:48 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Dec  2 05:52:48 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Dec  2 05:52:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Dec  2 05:52:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Dec  2 05:52:48 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Dec  2 05:52:48 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 67 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=66/67 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:48 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 67 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=66/67 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:48 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 67 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=66/67 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:48 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 67 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=66/67 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:48 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 67 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=66/67 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:48 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 67 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=66/67 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:48 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 67 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=66/67 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:48 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 67 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=66/67 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:48 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 67 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=66/67 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:48 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 67 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=66/67 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:48 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 67 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=66/67 n=6 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:48 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 67 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=66/67 n=5 ec=58/48 lis/c=63/58 les/c/f=64/60/0 sis=66) [0] r=0 lpr=66 pi=[58,66)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:52:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v159: 321 pgs: 1 active+recovering+remapped, 11 active+recovery_wait+remapped, 4 peering, 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 511 B/s wr, 3 op/s; 61/215 objects misplaced (28.372%); 389 B/s, 8 objects/s recovering
Dec  2 05:52:49 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Dec  2 05:52:49 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Dec  2 05:52:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:52:50 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 5.c scrub starts
Dec  2 05:52:50 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 5.c scrub ok
Dec  2 05:52:50 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 3.e scrub starts
Dec  2 05:52:50 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 3.e scrub ok
Dec  2 05:52:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v160: 321 pgs: 1 active+recovering+remapped, 11 active+recovery_wait+remapped, 4 peering, 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 376 B/s wr, 2 op/s; 61/215 objects misplaced (28.372%); 286 B/s, 6 objects/s recovering
Dec  2 05:52:51 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec  2 05:52:51 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec  2 05:52:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v161: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 319 B/s wr, 2 op/s; 500 B/s, 14 objects/s recovering
Dec  2 05:52:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Dec  2 05:52:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec  2 05:52:53 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Dec  2 05:52:53 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Dec  2 05:52:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Dec  2 05:52:53 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec  2 05:52:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  2 05:52:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Dec  2 05:52:53 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 7.a scrub starts
Dec  2 05:52:53 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Dec  2 05:52:53 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 7.a scrub ok
Dec  2 05:52:54 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec  2 05:52:54 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec  2 05:52:54 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec  2 05:52:54 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  2 05:52:54 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec  2 05:52:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:52:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v163: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 829 B/s rd, 138 B/s wr, 1 op/s; 340 B/s, 8 objects/s recovering
Dec  2 05:52:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Dec  2 05:52:55 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec  2 05:52:55 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec  2 05:52:55 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec  2 05:52:55 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 3.17 deep-scrub starts
Dec  2 05:52:55 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 3.17 deep-scrub ok
Dec  2 05:52:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Dec  2 05:52:55 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec  2 05:52:55 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  2 05:52:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Dec  2 05:52:55 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Dec  2 05:52:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:52:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:52:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:52:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:52:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:52:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:52:56 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  2 05:52:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v165: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 206 B/s, 7 objects/s recovering
Dec  2 05:52:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Dec  2 05:52:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec  2 05:52:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Dec  2 05:52:57 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec  2 05:52:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  2 05:52:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Dec  2 05:52:57 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Dec  2 05:52:58 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  2 05:52:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v167: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:52:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Dec  2 05:52:59 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  2 05:52:59 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 2.a scrub starts
Dec  2 05:52:59 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 2.a scrub ok
Dec  2 05:52:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Dec  2 05:52:59 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  2 05:52:59 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  2 05:52:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Dec  2 05:52:59 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Dec  2 05:52:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:53:00 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec  2 05:53:00 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec  2 05:53:00 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  2 05:53:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v169: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:53:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Dec  2 05:53:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  2 05:53:01 np0005542249 systemd-logind[787]: New session 34 of user zuul.
Dec  2 05:53:01 np0005542249 systemd[1]: Started Session 34 of User zuul.
Dec  2 05:53:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Dec  2 05:53:02 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  2 05:53:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Dec  2 05:53:02 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Dec  2 05:53:02 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  2 05:53:02 np0005542249 python3.9[105405]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:53:02 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Dec  2 05:53:02 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Dec  2 05:53:03 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  2 05:53:03 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 72 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=72 pruub=14.476796150s) [2] r=-1 lpr=72 pi=[58,72)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 167.223571777s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:03 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 72 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=72 pruub=14.476552010s) [2] r=-1 lpr=72 pi=[58,72)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 167.223571777s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:03 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 72 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=72 pruub=14.476655006s) [2] r=-1 lpr=72 pi=[58,72)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 167.224029541s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:03 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 72 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=72 pruub=14.476539612s) [2] r=-1 lpr=72 pi=[58,72)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 167.224029541s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:03 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 72 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=72 pruub=14.476516724s) [2] r=-1 lpr=72 pi=[58,72)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 167.224166870s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:03 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 72 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=72 pruub=14.476460457s) [2] r=-1 lpr=72 pi=[58,72)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 167.224166870s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:03 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 72 pg[9.16( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=72) [2] r=0 lpr=72 pi=[58,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:03 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 72 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=72 pruub=14.475990295s) [2] r=-1 lpr=72 pi=[58,72)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 167.224197388s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:03 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 72 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=72 pruub=14.475942612s) [2] r=-1 lpr=72 pi=[58,72)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 167.224197388s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:03 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 72 pg[9.e( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=72) [2] r=0 lpr=72 pi=[58,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:03 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 72 pg[9.6( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=72) [2] r=0 lpr=72 pi=[58,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:03 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 72 pg[9.1e( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=72) [2] r=0 lpr=72 pi=[58,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v171: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:53:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Dec  2 05:53:03 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec  2 05:53:03 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Dec  2 05:53:03 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Dec  2 05:53:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Dec  2 05:53:04 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec  2 05:53:04 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  2 05:53:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Dec  2 05:53:04 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Dec  2 05:53:04 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 73 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=66/67 n=5 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=73 pruub=8.619029045s) [2] r=-1 lpr=73 pi=[66,73)/1 crt=54'385 mlcod 0'0 active pruub 167.786010742s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:04 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 73 pg[9.e( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[58,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:04 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 73 pg[9.e( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[58,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:04 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 73 pg[9.1e( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[58,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:04 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 73 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=66/67 n=5 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=73 pruub=8.618949890s) [2] r=-1 lpr=73 pi=[66,73)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 167.786010742s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:04 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 73 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=66/67 n=6 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=73 pruub=8.618577957s) [2] r=-1 lpr=73 pi=[66,73)/1 crt=54'385 mlcod 0'0 active pruub 167.785949707s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:04 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 73 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=66/67 n=6 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=73 pruub=8.618553162s) [2] r=-1 lpr=73 pi=[66,73)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 167.785949707s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:04 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 73 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=58/48 lis/c=65/65 les/c/f=66/66/0 sis=73 pruub=15.597302437s) [2] r=-1 lpr=73 pi=[65,73)/1 crt=54'385 mlcod 0'0 active pruub 174.765213013s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:04 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 73 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=58/48 lis/c=65/65 les/c/f=66/66/0 sis=73 pruub=15.597281456s) [2] r=-1 lpr=73 pi=[65,73)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 174.765213013s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:04 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 73 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=65/66 n=6 ec=58/48 lis/c=65/65 les/c/f=66/66/0 sis=73 pruub=15.597190857s) [2] r=-1 lpr=73 pi=[65,73)/1 crt=54'385 mlcod 0'0 active pruub 174.765197754s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:04 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 73 pg[9.1e( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[58,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:04 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 73 pg[9.6( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[58,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:04 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 73 pg[9.6( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[58,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:04 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 73 pg[9.16( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[58,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:04 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 73 pg[9.16( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[58,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:04 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 73 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=73) [2] r=0 lpr=73 pi=[66,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:04 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 73 pg[9.f( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=73) [2] r=0 lpr=73 pi=[66,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:04 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 73 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=65/66 n=6 ec=58/48 lis/c=65/65 les/c/f=66/66/0 sis=73 pruub=15.597145081s) [2] r=-1 lpr=73 pi=[65,73)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 174.765197754s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:04 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 73 pg[9.17( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=65/65 les/c/f=66/66/0 sis=73) [2] r=0 lpr=73 pi=[65,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:04 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 73 pg[9.7( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=65/65 les/c/f=66/66/0 sis=73) [2] r=0 lpr=73 pi=[65,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:04 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 73 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[58,73)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:04 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 73 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[58,73)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:04 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 73 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[58,73)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:04 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 73 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[58,73)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:04 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 73 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[58,73)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:04 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 73 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[58,73)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:04 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 73 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[58,73)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:04 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 73 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[58,73)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:04 np0005542249 python3.9[105623]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:53:04 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Dec  2 05:53:04 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Dec  2 05:53:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:53:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Dec  2 05:53:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Dec  2 05:53:04 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Dec  2 05:53:04 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 74 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[66,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:04 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 74 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[66,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:04 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 74 pg[9.7( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=65/65 les/c/f=66/66/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[65,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:04 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 74 pg[9.7( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=65/65 les/c/f=66/66/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[65,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:04 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 74 pg[9.f( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[66,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:04 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 74 pg[9.f( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[66,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:04 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 74 pg[9.17( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=65/65 les/c/f=66/66/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[65,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:04 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 74 pg[9.17( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=65/65 les/c/f=66/66/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[65,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:04 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 74 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=65/66 n=6 ec=58/48 lis/c=65/65 les/c/f=66/66/0 sis=74) [2]/[0] r=0 lpr=74 pi=[65,74)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:04 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 74 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=66/67 n=6 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=0 lpr=74 pi=[66,74)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:04 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 74 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=58/48 lis/c=65/65 les/c/f=66/66/0 sis=74) [2]/[0] r=0 lpr=74 pi=[65,74)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:04 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 74 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=65/66 n=6 ec=58/48 lis/c=65/65 les/c/f=66/66/0 sis=74) [2]/[0] r=0 lpr=74 pi=[65,74)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:04 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 74 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=58/48 lis/c=65/65 les/c/f=66/66/0 sis=74) [2]/[0] r=0 lpr=74 pi=[65,74)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:04 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 74 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=66/67 n=6 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=0 lpr=74 pi=[66,74)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:04 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 74 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=66/67 n=5 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=0 lpr=74 pi=[66,74)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:04 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 74 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=66/67 n=5 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=0 lpr=74 pi=[66,74)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:04 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 74 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=73/74 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[58,73)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:04 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 74 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[58,73)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:04 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 74 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[58,73)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:04 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 74 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=73/74 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[58,73)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v174: 321 pgs: 4 unknown, 317 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:53:05 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  2 05:53:05 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec  2 05:53:05 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec  2 05:53:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Dec  2 05:53:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Dec  2 05:53:05 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Dec  2 05:53:05 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 75 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=73/58 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[58,75)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:05 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 75 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=73/58 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[58,75)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:05 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 75 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=73/58 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[58,75)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:05 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 75 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=73/58 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[58,75)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:05 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 75 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=73/58 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[58,75)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:05 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 75 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=73/58 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[58,75)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:05 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 75 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=73/58 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[58,75)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:05 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 75 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=73/58 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[58,75)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:05 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 75 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=58/48 lis/c=73/58 les/c/f=74/60/0 sis=75 pruub=14.994608879s) [2] async=[2] r=-1 lpr=75 pi=[58,75)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 170.602935791s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:05 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 75 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=73/74 n=6 ec=58/48 lis/c=73/58 les/c/f=74/60/0 sis=75 pruub=14.994585991s) [2] async=[2] r=-1 lpr=75 pi=[58,75)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 170.602935791s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:05 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 75 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=58/48 lis/c=73/58 les/c/f=74/60/0 sis=75 pruub=14.994515419s) [2] r=-1 lpr=75 pi=[58,75)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 170.602935791s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:05 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 75 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=73/74 n=6 ec=58/48 lis/c=73/58 les/c/f=74/60/0 sis=75 pruub=14.994511604s) [2] r=-1 lpr=75 pi=[58,75)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 170.602935791s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:05 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 75 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=58/48 lis/c=73/58 les/c/f=74/60/0 sis=75 pruub=14.994180679s) [2] async=[2] r=-1 lpr=75 pi=[58,75)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 170.602935791s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:05 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 75 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=73/74 n=6 ec=58/48 lis/c=73/58 les/c/f=74/60/0 sis=75 pruub=14.994441986s) [2] async=[2] r=-1 lpr=75 pi=[58,75)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 170.603164673s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:05 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 75 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=58/48 lis/c=73/58 les/c/f=74/60/0 sis=75 pruub=14.993809700s) [2] r=-1 lpr=75 pi=[58,75)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 170.602935791s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:05 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 75 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=73/74 n=6 ec=58/48 lis/c=73/58 les/c/f=74/60/0 sis=75 pruub=14.994077682s) [2] r=-1 lpr=75 pi=[58,75)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 170.603164673s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:05 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 75 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=74/75 n=5 ec=58/48 lis/c=65/65 les/c/f=66/66/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[65,74)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:05 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 75 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=74/75 n=6 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[66,74)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:05 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 75 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=74/75 n=6 ec=58/48 lis/c=65/65 les/c/f=66/66/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[65,74)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:05 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 75 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=74/75 n=5 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[66,74)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:06 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Dec  2 05:53:06 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Dec  2 05:53:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Dec  2 05:53:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Dec  2 05:53:06 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Dec  2 05:53:06 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 76 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=0 lpr=76 pi=[66,76)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:06 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 76 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=0 lpr=76 pi=[66,76)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:06 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 76 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=0 lpr=76 pi=[66,76)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:06 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 76 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=0 lpr=76 pi=[66,76)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:06 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 76 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=74/65 les/c/f=75/66/0 sis=76) [2] r=0 lpr=76 pi=[65,76)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:06 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 76 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=74/65 les/c/f=75/66/0 sis=76) [2] r=0 lpr=76 pi=[65,76)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:06 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 76 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=74/65 les/c/f=75/66/0 sis=76) [2] r=0 lpr=76 pi=[65,76)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:06 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 76 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=74/75 n=5 ec=58/48 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.998142242s) [2] async=[2] r=-1 lpr=76 pi=[66,76)/1 crt=54'385 mlcod 54'385 active pruub 177.020889282s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:06 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 76 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=74/75 n=5 ec=58/48 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.997988701s) [2] r=-1 lpr=76 pi=[66,76)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 177.020889282s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:06 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 76 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=74/65 les/c/f=75/66/0 sis=76) [2] r=0 lpr=76 pi=[65,76)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:06 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 76 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=74/75 n=6 ec=58/48 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.996839523s) [2] async=[2] r=-1 lpr=76 pi=[66,76)/1 crt=54'385 mlcod 54'385 active pruub 177.020538330s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:06 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 76 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=74/75 n=6 ec=58/48 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.996675491s) [2] r=-1 lpr=76 pi=[66,76)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 177.020538330s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:06 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 76 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=74/75 n=5 ec=58/48 lis/c=74/65 les/c/f=75/66/0 sis=76 pruub=14.996624947s) [2] async=[2] r=-1 lpr=76 pi=[65,76)/1 crt=54'385 mlcod 54'385 active pruub 177.020507812s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:06 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 76 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=74/75 n=5 ec=58/48 lis/c=74/65 les/c/f=75/66/0 sis=76 pruub=14.996529579s) [2] r=-1 lpr=76 pi=[65,76)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 177.020507812s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:06 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 76 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=74/75 n=6 ec=58/48 lis/c=74/65 les/c/f=75/66/0 sis=76 pruub=14.996127129s) [2] async=[2] r=-1 lpr=76 pi=[65,76)/1 crt=54'385 mlcod 54'385 active pruub 177.020797729s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:06 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 76 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=74/75 n=6 ec=58/48 lis/c=74/65 les/c/f=75/66/0 sis=76 pruub=14.996071815s) [2] r=-1 lpr=76 pi=[65,76)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 177.020797729s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:06 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 76 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=75/76 n=5 ec=58/48 lis/c=73/58 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[58,75)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:06 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 76 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=75/76 n=5 ec=58/48 lis/c=73/58 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[58,75)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:06 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 76 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=75/76 n=6 ec=58/48 lis/c=73/58 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[58,75)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:06 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 76 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=75/76 n=6 ec=58/48 lis/c=73/58 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[58,75)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v177: 321 pgs: 4 unknown, 317 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:53:07 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 3.8 deep-scrub starts
Dec  2 05:53:07 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 3.8 deep-scrub ok
Dec  2 05:53:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Dec  2 05:53:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Dec  2 05:53:07 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Dec  2 05:53:07 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 77 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=76/77 n=6 ec=58/48 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=0 lpr=76 pi=[66,76)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:07 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 77 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=76/77 n=5 ec=58/48 lis/c=74/65 les/c/f=75/66/0 sis=76) [2] r=0 lpr=76 pi=[65,76)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:07 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 77 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=76/77 n=5 ec=58/48 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=0 lpr=76 pi=[66,76)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:07 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 77 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=76/77 n=6 ec=58/48 lis/c=74/65 les/c/f=75/66/0 sis=76) [2] r=0 lpr=76 pi=[65,76)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v179: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 980 B/s wr, 33 op/s; 184 B/s, 9 objects/s recovering
Dec  2 05:53:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Dec  2 05:53:09 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec  2 05:53:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Dec  2 05:53:09 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  2 05:53:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Dec  2 05:53:09 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Dec  2 05:53:09 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 78 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=78 pruub=8.434226990s) [2] r=-1 lpr=78 pi=[58,78)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 167.224197388s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:09 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 78 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=78 pruub=8.434157372s) [2] r=-1 lpr=78 pi=[58,78)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 167.224197388s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:09 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 78 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=78 pruub=8.433225632s) [2] r=-1 lpr=78 pi=[58,78)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 167.224105835s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:09 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 78 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=78 pruub=8.432848930s) [2] r=-1 lpr=78 pi=[58,78)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 167.224105835s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:09 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec  2 05:53:09 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 78 pg[9.18( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=78) [2] r=0 lpr=78 pi=[58,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:09 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 78 pg[9.8( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=78) [2] r=0 lpr=78 pi=[58,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:09 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Dec  2 05:53:09 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Dec  2 05:53:09 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec  2 05:53:09 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec  2 05:53:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:53:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Dec  2 05:53:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Dec  2 05:53:09 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Dec  2 05:53:09 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 79 pg[9.8( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=79) [2]/[1] r=-1 lpr=79 pi=[58,79)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:09 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 79 pg[9.8( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=79) [2]/[1] r=-1 lpr=79 pi=[58,79)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:09 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 79 pg[9.18( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=79) [2]/[1] r=-1 lpr=79 pi=[58,79)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:09 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 79 pg[9.18( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=79) [2]/[1] r=-1 lpr=79 pi=[58,79)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:09 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 79 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=79) [2]/[1] r=0 lpr=79 pi=[58,79)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:09 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 79 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=79) [2]/[1] r=0 lpr=79 pi=[58,79)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:09 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 79 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=79) [2]/[1] r=0 lpr=79 pi=[58,79)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:09 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 79 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=79) [2]/[1] r=0 lpr=79 pi=[58,79)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:10 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  2 05:53:10 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Dec  2 05:53:10 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Dec  2 05:53:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Dec  2 05:53:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Dec  2 05:53:10 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Dec  2 05:53:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v183: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1023 B/s wr, 34 op/s; 192 B/s, 9 objects/s recovering
Dec  2 05:53:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Dec  2 05:53:11 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec  2 05:53:11 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec  2 05:53:11 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 80 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=79/80 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=79) [2]/[1] async=[2] r=0 lpr=79 pi=[58,79)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:11 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 80 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=79/80 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=79) [2]/[1] async=[2] r=0 lpr=79 pi=[58,79)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:11 np0005542249 systemd[1]: session-34.scope: Deactivated successfully.
Dec  2 05:53:11 np0005542249 systemd[1]: session-34.scope: Consumed 8.561s CPU time.
Dec  2 05:53:11 np0005542249 systemd-logind[787]: Session 34 logged out. Waiting for processes to exit.
Dec  2 05:53:11 np0005542249 systemd-logind[787]: Removed session 34.
Dec  2 05:53:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Dec  2 05:53:11 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  2 05:53:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Dec  2 05:53:11 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Dec  2 05:53:11 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 81 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=79/80 n=5 ec=58/48 lis/c=79/58 les/c/f=80/60/0 sis=81 pruub=15.790933609s) [2] async=[2] r=-1 lpr=81 pi=[58,81)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 177.401840210s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:11 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 81 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=79/80 n=5 ec=58/48 lis/c=79/58 les/c/f=80/60/0 sis=81 pruub=15.790834427s) [2] r=-1 lpr=81 pi=[58,81)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 177.401840210s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:11 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 81 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=79/80 n=6 ec=58/48 lis/c=79/58 les/c/f=80/60/0 sis=81 pruub=15.787396431s) [2] async=[2] r=-1 lpr=81 pi=[58,81)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 177.398544312s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:11 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 81 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=79/80 n=6 ec=58/48 lis/c=79/58 les/c/f=80/60/0 sis=81 pruub=15.787179947s) [2] r=-1 lpr=81 pi=[58,81)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 177.398544312s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:11 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 81 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=79/58 les/c/f=80/60/0 sis=81) [2] r=0 lpr=81 pi=[58,81)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:11 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 81 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=79/58 les/c/f=80/60/0 sis=81) [2] r=0 lpr=81 pi=[58,81)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:11 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 81 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=79/58 les/c/f=80/60/0 sis=81) [2] r=0 lpr=81 pi=[58,81)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:11 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 81 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=79/58 les/c/f=80/60/0 sis=81) [2] r=0 lpr=81 pi=[58,81)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:12 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  2 05:53:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Dec  2 05:53:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Dec  2 05:53:12 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Dec  2 05:53:12 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 82 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=81/82 n=6 ec=58/48 lis/c=79/58 les/c/f=80/60/0 sis=81) [2] r=0 lpr=81 pi=[58,81)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:12 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 82 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=81/82 n=5 ec=58/48 lis/c=79/58 les/c/f=80/60/0 sis=81) [2] r=0 lpr=81 pi=[58,81)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v186: 321 pgs: 2 activating+remapped, 319 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 11/215 objects misplaced (5.116%)
Dec  2 05:53:13 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec  2 05:53:13 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec  2 05:53:14 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 5.12 deep-scrub starts
Dec  2 05:53:14 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 5.12 deep-scrub ok
Dec  2 05:53:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:53:14 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:53:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 05:53:14 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:53:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 05:53:14 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:53:14 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 601187e0-b679-4d62-b3b5-2ab8839f9243 does not exist
Dec  2 05:53:14 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev ceb6887b-aea8-4d8a-99f3-386594f27e80 does not exist
Dec  2 05:53:14 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev d7290694-bd0a-4863-92e6-4fa2ada07014 does not exist
Dec  2 05:53:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 05:53:14 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 05:53:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 05:53:14 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 05:53:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:53:14 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:53:14 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 7.e scrub starts
Dec  2 05:53:14 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 7.e scrub ok
Dec  2 05:53:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:53:14 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:53:14 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:53:14 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 05:53:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v187: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 42 B/s, 2 objects/s recovering
Dec  2 05:53:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Dec  2 05:53:15 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec  2 05:53:15 np0005542249 podman[105949]: 2025-12-02 10:53:15.242670331 +0000 UTC m=+0.041714079 container create 1bd7c1a01ffe77599794fb15bf590792bf7b36d33e2a7a4f45fa450629a577a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  2 05:53:15 np0005542249 systemd[1]: Started libpod-conmon-1bd7c1a01ffe77599794fb15bf590792bf7b36d33e2a7a4f45fa450629a577a3.scope.
Dec  2 05:53:15 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:53:15 np0005542249 podman[105949]: 2025-12-02 10:53:15.221027991 +0000 UTC m=+0.020071759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:53:15 np0005542249 podman[105949]: 2025-12-02 10:53:15.329671471 +0000 UTC m=+0.128715239 container init 1bd7c1a01ffe77599794fb15bf590792bf7b36d33e2a7a4f45fa450629a577a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bardeen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:53:15 np0005542249 podman[105949]: 2025-12-02 10:53:15.337804379 +0000 UTC m=+0.136848127 container start 1bd7c1a01ffe77599794fb15bf590792bf7b36d33e2a7a4f45fa450629a577a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bardeen, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  2 05:53:15 np0005542249 podman[105949]: 2025-12-02 10:53:15.341332803 +0000 UTC m=+0.140376551 container attach 1bd7c1a01ffe77599794fb15bf590792bf7b36d33e2a7a4f45fa450629a577a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bardeen, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  2 05:53:15 np0005542249 sad_bardeen[105965]: 167 167
Dec  2 05:53:15 np0005542249 systemd[1]: libpod-1bd7c1a01ffe77599794fb15bf590792bf7b36d33e2a7a4f45fa450629a577a3.scope: Deactivated successfully.
Dec  2 05:53:15 np0005542249 podman[105949]: 2025-12-02 10:53:15.34569066 +0000 UTC m=+0.144734468 container died 1bd7c1a01ffe77599794fb15bf590792bf7b36d33e2a7a4f45fa450629a577a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bardeen, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:53:15 np0005542249 systemd[1]: var-lib-containers-storage-overlay-5ddc3f967648da4393447c1b4e4da962f1dde4872e981d9ec7cbf60cf4c4e5e6-merged.mount: Deactivated successfully.
Dec  2 05:53:15 np0005542249 podman[105949]: 2025-12-02 10:53:15.395056733 +0000 UTC m=+0.194100481 container remove 1bd7c1a01ffe77599794fb15bf590792bf7b36d33e2a7a4f45fa450629a577a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bardeen, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:53:15 np0005542249 systemd[1]: libpod-conmon-1bd7c1a01ffe77599794fb15bf590792bf7b36d33e2a7a4f45fa450629a577a3.scope: Deactivated successfully.
Dec  2 05:53:15 np0005542249 podman[105990]: 2025-12-02 10:53:15.584152819 +0000 UTC m=+0.050349440 container create 0f1323e64943560df4473e73cda32ca6bd4a78f0a839198f44b0bf55095c691b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jackson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:53:15 np0005542249 systemd[1]: Started libpod-conmon-0f1323e64943560df4473e73cda32ca6bd4a78f0a839198f44b0bf55095c691b.scope.
Dec  2 05:53:15 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:53:15 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a777ef7919503fc4c6e62392a7149763f7ad3583d703b5ae248681e45bd22867/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:53:15 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a777ef7919503fc4c6e62392a7149763f7ad3583d703b5ae248681e45bd22867/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:53:15 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a777ef7919503fc4c6e62392a7149763f7ad3583d703b5ae248681e45bd22867/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:53:15 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a777ef7919503fc4c6e62392a7149763f7ad3583d703b5ae248681e45bd22867/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:53:15 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a777ef7919503fc4c6e62392a7149763f7ad3583d703b5ae248681e45bd22867/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:53:15 np0005542249 podman[105990]: 2025-12-02 10:53:15.562225821 +0000 UTC m=+0.028422492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:53:15 np0005542249 podman[105990]: 2025-12-02 10:53:15.662378065 +0000 UTC m=+0.128574766 container init 0f1323e64943560df4473e73cda32ca6bd4a78f0a839198f44b0bf55095c691b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:53:15 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec  2 05:53:15 np0005542249 podman[105990]: 2025-12-02 10:53:15.679425191 +0000 UTC m=+0.145621812 container start 0f1323e64943560df4473e73cda32ca6bd4a78f0a839198f44b0bf55095c691b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  2 05:53:15 np0005542249 podman[105990]: 2025-12-02 10:53:15.684034074 +0000 UTC m=+0.150230715 container attach 0f1323e64943560df4473e73cda32ca6bd4a78f0a839198f44b0bf55095c691b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jackson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  2 05:53:15 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec  2 05:53:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Dec  2 05:53:15 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec  2 05:53:15 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec  2 05:53:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Dec  2 05:53:15 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Dec  2 05:53:16 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec  2 05:53:16 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec  2 05:53:16 np0005542249 flamboyant_jackson[106006]: --> passed data devices: 0 physical, 3 LVM
Dec  2 05:53:16 np0005542249 flamboyant_jackson[106006]: --> relative data size: 1.0
Dec  2 05:53:16 np0005542249 flamboyant_jackson[106006]: --> All data devices are unavailable
Dec  2 05:53:16 np0005542249 systemd[1]: libpod-0f1323e64943560df4473e73cda32ca6bd4a78f0a839198f44b0bf55095c691b.scope: Deactivated successfully.
Dec  2 05:53:16 np0005542249 podman[105990]: 2025-12-02 10:53:16.908806015 +0000 UTC m=+1.375002646 container died 0f1323e64943560df4473e73cda32ca6bd4a78f0a839198f44b0bf55095c691b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jackson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:53:16 np0005542249 systemd[1]: libpod-0f1323e64943560df4473e73cda32ca6bd4a78f0a839198f44b0bf55095c691b.scope: Consumed 1.152s CPU time.
Dec  2 05:53:16 np0005542249 systemd[1]: var-lib-containers-storage-overlay-a777ef7919503fc4c6e62392a7149763f7ad3583d703b5ae248681e45bd22867-merged.mount: Deactivated successfully.
Dec  2 05:53:16 np0005542249 podman[105990]: 2025-12-02 10:53:16.96838108 +0000 UTC m=+1.434577701 container remove 0f1323e64943560df4473e73cda32ca6bd4a78f0a839198f44b0bf55095c691b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jackson, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:53:16 np0005542249 systemd[1]: libpod-conmon-0f1323e64943560df4473e73cda32ca6bd4a78f0a839198f44b0bf55095c691b.scope: Deactivated successfully.
Dec  2 05:53:16 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec  2 05:53:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v189: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Dec  2 05:53:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Dec  2 05:53:17 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  2 05:53:17 np0005542249 podman[106186]: 2025-12-02 10:53:17.734244307 +0000 UTC m=+0.043116636 container create 0435dc3c202e5de98b47383c500070159fea19ccd8caa15f6a65d54da256f14a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shtern, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:53:17 np0005542249 systemd[1]: Started libpod-conmon-0435dc3c202e5de98b47383c500070159fea19ccd8caa15f6a65d54da256f14a.scope.
Dec  2 05:53:17 np0005542249 podman[106186]: 2025-12-02 10:53:17.713494231 +0000 UTC m=+0.022366540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:53:17 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:53:17 np0005542249 podman[106186]: 2025-12-02 10:53:17.839461485 +0000 UTC m=+0.148333774 container init 0435dc3c202e5de98b47383c500070159fea19ccd8caa15f6a65d54da256f14a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shtern, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  2 05:53:17 np0005542249 podman[106186]: 2025-12-02 10:53:17.850759869 +0000 UTC m=+0.159632148 container start 0435dc3c202e5de98b47383c500070159fea19ccd8caa15f6a65d54da256f14a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:53:17 np0005542249 podman[106186]: 2025-12-02 10:53:17.854643692 +0000 UTC m=+0.163516001 container attach 0435dc3c202e5de98b47383c500070159fea19ccd8caa15f6a65d54da256f14a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shtern, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:53:17 np0005542249 romantic_shtern[106202]: 167 167
Dec  2 05:53:17 np0005542249 systemd[1]: libpod-0435dc3c202e5de98b47383c500070159fea19ccd8caa15f6a65d54da256f14a.scope: Deactivated successfully.
Dec  2 05:53:17 np0005542249 podman[106186]: 2025-12-02 10:53:17.857450047 +0000 UTC m=+0.166322336 container died 0435dc3c202e5de98b47383c500070159fea19ccd8caa15f6a65d54da256f14a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shtern, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:53:17 np0005542249 systemd[1]: var-lib-containers-storage-overlay-b3b6d4baead276f3319c7ef42085affb80d5d81f881edec662cb367be8ed2a08-merged.mount: Deactivated successfully.
Dec  2 05:53:17 np0005542249 podman[106186]: 2025-12-02 10:53:17.902749301 +0000 UTC m=+0.211621630 container remove 0435dc3c202e5de98b47383c500070159fea19ccd8caa15f6a65d54da256f14a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:53:17 np0005542249 systemd[1]: libpod-conmon-0435dc3c202e5de98b47383c500070159fea19ccd8caa15f6a65d54da256f14a.scope: Deactivated successfully.
Dec  2 05:53:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Dec  2 05:53:17 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  2 05:53:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Dec  2 05:53:18 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Dec  2 05:53:18 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  2 05:53:18 np0005542249 podman[106226]: 2025-12-02 10:53:18.126842985 +0000 UTC m=+0.056071903 container create 49af7492069a9312d375f611d66ea527c9025ee3c6542e5b7916b5c20694d679 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_banach, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 05:53:18 np0005542249 systemd[1]: Started libpod-conmon-49af7492069a9312d375f611d66ea527c9025ee3c6542e5b7916b5c20694d679.scope.
Dec  2 05:53:18 np0005542249 podman[106226]: 2025-12-02 10:53:18.095768281 +0000 UTC m=+0.024997250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:53:18 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:53:18 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e85193fccc15edb0333da4b89df4f017f9fa576e28fc9daeb2a9536eb2f0d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:53:18 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e85193fccc15edb0333da4b89df4f017f9fa576e28fc9daeb2a9536eb2f0d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:53:18 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e85193fccc15edb0333da4b89df4f017f9fa576e28fc9daeb2a9536eb2f0d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:53:18 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e85193fccc15edb0333da4b89df4f017f9fa576e28fc9daeb2a9536eb2f0d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:53:18 np0005542249 podman[106226]: 2025-12-02 10:53:18.228081506 +0000 UTC m=+0.157310414 container init 49af7492069a9312d375f611d66ea527c9025ee3c6542e5b7916b5c20694d679 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  2 05:53:18 np0005542249 podman[106226]: 2025-12-02 10:53:18.252552761 +0000 UTC m=+0.181781669 container start 49af7492069a9312d375f611d66ea527c9025ee3c6542e5b7916b5c20694d679 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  2 05:53:18 np0005542249 podman[106226]: 2025-12-02 10:53:18.257280619 +0000 UTC m=+0.186509497 container attach 49af7492069a9312d375f611d66ea527c9025ee3c6542e5b7916b5c20694d679 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_banach, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 05:53:18 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Dec  2 05:53:18 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]: {
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:    "0": [
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:        {
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "devices": [
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "/dev/loop3"
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            ],
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "lv_name": "ceph_lv0",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "lv_size": "21470642176",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "name": "ceph_lv0",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "tags": {
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.cluster_name": "ceph",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.crush_device_class": "",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.encrypted": "0",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.osd_id": "0",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.type": "block",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.vdo": "0"
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            },
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "type": "block",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "vg_name": "ceph_vg0"
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:        }
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:    ],
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:    "1": [
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:        {
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "devices": [
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "/dev/loop4"
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            ],
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "lv_name": "ceph_lv1",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "lv_size": "21470642176",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "name": "ceph_lv1",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "tags": {
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.cluster_name": "ceph",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.crush_device_class": "",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.encrypted": "0",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.osd_id": "1",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.type": "block",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.vdo": "0"
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            },
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "type": "block",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "vg_name": "ceph_vg1"
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:        }
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:    ],
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:    "2": [
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:        {
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "devices": [
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "/dev/loop5"
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            ],
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "lv_name": "ceph_lv2",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "lv_size": "21470642176",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "name": "ceph_lv2",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "tags": {
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.cluster_name": "ceph",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.crush_device_class": "",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.encrypted": "0",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.osd_id": "2",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.type": "block",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:                "ceph.vdo": "0"
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            },
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "type": "block",
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:            "vg_name": "ceph_vg2"
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:        }
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]:    ]
Dec  2 05:53:18 np0005542249 intelligent_banach[106243]: }
Dec  2 05:53:19 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  2 05:53:19 np0005542249 systemd[1]: libpod-49af7492069a9312d375f611d66ea527c9025ee3c6542e5b7916b5c20694d679.scope: Deactivated successfully.
Dec  2 05:53:19 np0005542249 podman[106226]: 2025-12-02 10:53:19.031410717 +0000 UTC m=+0.960639625 container died 49af7492069a9312d375f611d66ea527c9025ee3c6542e5b7916b5c20694d679 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  2 05:53:19 np0005542249 systemd[1]: var-lib-containers-storage-overlay-01e85193fccc15edb0333da4b89df4f017f9fa576e28fc9daeb2a9536eb2f0d7-merged.mount: Deactivated successfully.
Dec  2 05:53:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v191: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 35 B/s, 1 objects/s recovering
Dec  2 05:53:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Dec  2 05:53:19 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec  2 05:53:19 np0005542249 podman[106226]: 2025-12-02 10:53:19.094433705 +0000 UTC m=+1.023662583 container remove 49af7492069a9312d375f611d66ea527c9025ee3c6542e5b7916b5c20694d679 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:53:19 np0005542249 systemd[1]: libpod-conmon-49af7492069a9312d375f611d66ea527c9025ee3c6542e5b7916b5c20694d679.scope: Deactivated successfully.
Dec  2 05:53:19 np0005542249 podman[106400]: 2025-12-02 10:53:19.836109063 +0000 UTC m=+0.048800858 container create 6038350e0dab4f3e16a5033b03ff7df8c8157e892ebdbcfd63f7e9cf678ce3f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galileo, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:53:19 np0005542249 systemd[1]: Started libpod-conmon-6038350e0dab4f3e16a5033b03ff7df8c8157e892ebdbcfd63f7e9cf678ce3f1.scope.
Dec  2 05:53:19 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:53:19 np0005542249 podman[106400]: 2025-12-02 10:53:19.816683243 +0000 UTC m=+0.029375048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:53:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:53:19 np0005542249 podman[106400]: 2025-12-02 10:53:19.925649222 +0000 UTC m=+0.138341087 container init 6038350e0dab4f3e16a5033b03ff7df8c8157e892ebdbcfd63f7e9cf678ce3f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galileo, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  2 05:53:19 np0005542249 podman[106400]: 2025-12-02 10:53:19.932508945 +0000 UTC m=+0.145200770 container start 6038350e0dab4f3e16a5033b03ff7df8c8157e892ebdbcfd63f7e9cf678ce3f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:53:19 np0005542249 podman[106400]: 2025-12-02 10:53:19.936244106 +0000 UTC m=+0.148935981 container attach 6038350e0dab4f3e16a5033b03ff7df8c8157e892ebdbcfd63f7e9cf678ce3f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galileo, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  2 05:53:19 np0005542249 silly_galileo[106416]: 167 167
Dec  2 05:53:19 np0005542249 systemd[1]: libpod-6038350e0dab4f3e16a5033b03ff7df8c8157e892ebdbcfd63f7e9cf678ce3f1.scope: Deactivated successfully.
Dec  2 05:53:19 np0005542249 podman[106400]: 2025-12-02 10:53:19.940082649 +0000 UTC m=+0.152774464 container died 6038350e0dab4f3e16a5033b03ff7df8c8157e892ebdbcfd63f7e9cf678ce3f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  2 05:53:19 np0005542249 systemd[1]: var-lib-containers-storage-overlay-799356cbcc6429785e2ae19008737c800a56b6de1449f12d9b82386766314251-merged.mount: Deactivated successfully.
Dec  2 05:53:19 np0005542249 podman[106400]: 2025-12-02 10:53:19.986715878 +0000 UTC m=+0.199407703 container remove 6038350e0dab4f3e16a5033b03ff7df8c8157e892ebdbcfd63f7e9cf678ce3f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galileo, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:53:19 np0005542249 systemd[1]: libpod-conmon-6038350e0dab4f3e16a5033b03ff7df8c8157e892ebdbcfd63f7e9cf678ce3f1.scope: Deactivated successfully.
Dec  2 05:53:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Dec  2 05:53:20 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec  2 05:53:20 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec  2 05:53:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Dec  2 05:53:20 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Dec  2 05:53:20 np0005542249 podman[106440]: 2025-12-02 10:53:20.158830209 +0000 UTC m=+0.058285503 container create dd6949dd6d13510381544bc743fe9d6e94414165a172f38f8b302873c00087d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:53:20 np0005542249 systemd[1]: Started libpod-conmon-dd6949dd6d13510381544bc743fe9d6e94414165a172f38f8b302873c00087d6.scope.
Dec  2 05:53:20 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:53:20 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcf1bacd265af808951c2de204ebd132736ddec151dc90b6767ad6f909241fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:53:20 np0005542249 podman[106440]: 2025-12-02 10:53:20.139737487 +0000 UTC m=+0.039192801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:53:20 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcf1bacd265af808951c2de204ebd132736ddec151dc90b6767ad6f909241fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:53:20 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcf1bacd265af808951c2de204ebd132736ddec151dc90b6767ad6f909241fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:53:20 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcf1bacd265af808951c2de204ebd132736ddec151dc90b6767ad6f909241fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:53:20 np0005542249 podman[106440]: 2025-12-02 10:53:20.252833857 +0000 UTC m=+0.152289201 container init dd6949dd6d13510381544bc743fe9d6e94414165a172f38f8b302873c00087d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec  2 05:53:20 np0005542249 podman[106440]: 2025-12-02 10:53:20.2618943 +0000 UTC m=+0.161349604 container start dd6949dd6d13510381544bc743fe9d6e94414165a172f38f8b302873c00087d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  2 05:53:20 np0005542249 podman[106440]: 2025-12-02 10:53:20.266252787 +0000 UTC m=+0.165708171 container attach dd6949dd6d13510381544bc743fe9d6e94414165a172f38f8b302873c00087d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  2 05:53:20 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 85 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=85 pruub=13.283486366s) [2] r=-1 lpr=85 pi=[58,85)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 183.224380493s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:20 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 85 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=85 pruub=13.283301353s) [2] r=-1 lpr=85 pi=[58,85)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.224380493s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:20 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 85 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=85 pruub=13.282796860s) [2] r=-1 lpr=85 pi=[58,85)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 183.224685669s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:20 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 85 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=85 pruub=13.282737732s) [2] r=-1 lpr=85 pi=[58,85)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.224685669s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:20 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 85 pg[9.c( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=85) [2] r=0 lpr=85 pi=[58,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:20 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 85 pg[9.1c( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=85) [2] r=0 lpr=85 pi=[58,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:20 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec  2 05:53:20 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec  2 05:53:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Dec  2 05:53:21 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec  2 05:53:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Dec  2 05:53:21 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Dec  2 05:53:21 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 86 pg[9.1c( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=86) [2]/[1] r=-1 lpr=86 pi=[58,86)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:21 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 86 pg[9.1c( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=86) [2]/[1] r=-1 lpr=86 pi=[58,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:21 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 86 pg[9.c( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=86) [2]/[1] r=-1 lpr=86 pi=[58,86)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:21 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 86 pg[9.c( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=86) [2]/[1] r=-1 lpr=86 pi=[58,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:21 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 86 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=86) [2]/[1] r=0 lpr=86 pi=[58,86)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:21 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 86 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=58/60 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=86) [2]/[1] r=0 lpr=86 pi=[58,86)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:21 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 86 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=86) [2]/[1] r=0 lpr=86 pi=[58,86)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:21 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 86 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=58/60 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=86) [2]/[1] r=0 lpr=86 pi=[58,86)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v194: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:53:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Dec  2 05:53:21 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec  2 05:53:21 np0005542249 wizardly_carson[106456]: {
Dec  2 05:53:21 np0005542249 wizardly_carson[106456]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 05:53:21 np0005542249 wizardly_carson[106456]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:53:21 np0005542249 wizardly_carson[106456]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 05:53:21 np0005542249 wizardly_carson[106456]:        "osd_id": 0,
Dec  2 05:53:21 np0005542249 wizardly_carson[106456]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 05:53:21 np0005542249 wizardly_carson[106456]:        "type": "bluestore"
Dec  2 05:53:21 np0005542249 wizardly_carson[106456]:    },
Dec  2 05:53:21 np0005542249 wizardly_carson[106456]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 05:53:21 np0005542249 wizardly_carson[106456]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:53:21 np0005542249 wizardly_carson[106456]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 05:53:21 np0005542249 wizardly_carson[106456]:        "osd_id": 2,
Dec  2 05:53:21 np0005542249 wizardly_carson[106456]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 05:53:21 np0005542249 wizardly_carson[106456]:        "type": "bluestore"
Dec  2 05:53:21 np0005542249 wizardly_carson[106456]:    },
Dec  2 05:53:21 np0005542249 wizardly_carson[106456]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 05:53:21 np0005542249 wizardly_carson[106456]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:53:21 np0005542249 wizardly_carson[106456]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 05:53:21 np0005542249 wizardly_carson[106456]:        "osd_id": 1,
Dec  2 05:53:21 np0005542249 wizardly_carson[106456]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 05:53:21 np0005542249 wizardly_carson[106456]:        "type": "bluestore"
Dec  2 05:53:21 np0005542249 wizardly_carson[106456]:    }
Dec  2 05:53:21 np0005542249 wizardly_carson[106456]: }
Dec  2 05:53:21 np0005542249 systemd[1]: libpod-dd6949dd6d13510381544bc743fe9d6e94414165a172f38f8b302873c00087d6.scope: Deactivated successfully.
Dec  2 05:53:21 np0005542249 podman[106440]: 2025-12-02 10:53:21.353350148 +0000 UTC m=+1.252805462 container died dd6949dd6d13510381544bc743fe9d6e94414165a172f38f8b302873c00087d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  2 05:53:21 np0005542249 systemd[1]: libpod-dd6949dd6d13510381544bc743fe9d6e94414165a172f38f8b302873c00087d6.scope: Consumed 1.102s CPU time.
Dec  2 05:53:21 np0005542249 systemd[1]: var-lib-containers-storage-overlay-0dcf1bacd265af808951c2de204ebd132736ddec151dc90b6767ad6f909241fd-merged.mount: Deactivated successfully.
Dec  2 05:53:21 np0005542249 podman[106440]: 2025-12-02 10:53:21.40155175 +0000 UTC m=+1.301007044 container remove dd6949dd6d13510381544bc743fe9d6e94414165a172f38f8b302873c00087d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:53:21 np0005542249 systemd[1]: libpod-conmon-dd6949dd6d13510381544bc743fe9d6e94414165a172f38f8b302873c00087d6.scope: Deactivated successfully.
Dec  2 05:53:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:53:21 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:53:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:53:21 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:53:21 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev edf695ed-c4f6-436c-8c40-5c41d501e950 does not exist
Dec  2 05:53:21 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 62e9ceeb-f40d-4413-a665-438008d9167c does not exist
Dec  2 05:53:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Dec  2 05:53:22 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec  2 05:53:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Dec  2 05:53:22 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec  2 05:53:22 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:53:22 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:53:22 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Dec  2 05:53:22 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 87 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=86/87 n=5 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=86) [2]/[1] async=[2] r=0 lpr=86 pi=[58,86)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:22 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 87 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=86/87 n=6 ec=58/48 lis/c=58/58 les/c/f=60/60/0 sis=86) [2]/[1] async=[2] r=0 lpr=86 pi=[58,86)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Dec  2 05:53:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Dec  2 05:53:23 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Dec  2 05:53:23 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec  2 05:53:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v197: 321 pgs: 2 remapped+peering, 319 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:53:23 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 88 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=86/87 n=6 ec=58/48 lis/c=86/58 les/c/f=87/60/0 sis=88 pruub=14.988729477s) [2] async=[2] r=-1 lpr=88 pi=[58,88)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 187.753265381s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:23 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 88 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=86/87 n=6 ec=58/48 lis/c=86/58 les/c/f=87/60/0 sis=88 pruub=14.988436699s) [2] r=-1 lpr=88 pi=[58,88)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.753265381s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:23 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 88 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=86/87 n=5 ec=58/48 lis/c=86/58 les/c/f=87/60/0 sis=88 pruub=14.982859612s) [2] async=[2] r=-1 lpr=88 pi=[58,88)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 187.749084473s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:23 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 88 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=86/87 n=5 ec=58/48 lis/c=86/58 les/c/f=87/60/0 sis=88 pruub=14.982698441s) [2] r=-1 lpr=88 pi=[58,88)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.749084473s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:23 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 88 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=86/58 les/c/f=87/60/0 sis=88) [2] r=0 lpr=88 pi=[58,88)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:23 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 88 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=58/48 lis/c=86/58 les/c/f=87/60/0 sis=88) [2] r=0 lpr=88 pi=[58,88)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:23 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 88 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=86/58 les/c/f=87/60/0 sis=88) [2] r=0 lpr=88 pi=[58,88)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:23 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 88 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=86/58 les/c/f=87/60/0 sis=88) [2] r=0 lpr=88 pi=[58,88)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:23 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 2.d scrub starts
Dec  2 05:53:23 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 2.d scrub ok
Dec  2 05:53:23 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Dec  2 05:53:23 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Dec  2 05:53:23 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec  2 05:53:23 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec  2 05:53:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Dec  2 05:53:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Dec  2 05:53:24 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Dec  2 05:53:24 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 89 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=88/89 n=5 ec=58/48 lis/c=86/58 les/c/f=87/60/0 sis=88) [2] r=0 lpr=88 pi=[58,88)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:24 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 89 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=88/89 n=6 ec=58/48 lis/c=86/58 les/c/f=87/60/0 sis=88) [2] r=0 lpr=88 pi=[58,88)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:24 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec  2 05:53:24 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec  2 05:53:24 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Dec  2 05:53:24 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Dec  2 05:53:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:53:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v199: 321 pgs: 2 peering, 319 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Dec  2 05:53:25 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Dec  2 05:53:25 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Dec  2 05:53:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_10:53:26
Dec  2 05:53:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 05:53:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Some PGs (0.006231) are inactive; try again later
Dec  2 05:53:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:53:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:53:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:53:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:53:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:53:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:53:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 05:53:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 05:53:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 05:53:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 05:53:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 05:53:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 05:53:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 05:53:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 05:53:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 05:53:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 05:53:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v200: 321 pgs: 2 peering, 319 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec  2 05:53:27 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 3.c scrub starts
Dec  2 05:53:27 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 3.c scrub ok
Dec  2 05:53:27 np0005542249 systemd-logind[787]: New session 35 of user zuul.
Dec  2 05:53:27 np0005542249 systemd[1]: Started Session 35 of User zuul.
Dec  2 05:53:27 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Dec  2 05:53:27 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Dec  2 05:53:28 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec  2 05:53:28 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec  2 05:53:28 np0005542249 python3.9[106708]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  2 05:53:28 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 7.f scrub starts
Dec  2 05:53:28 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 7.f scrub ok
Dec  2 05:53:28 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Dec  2 05:53:28 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Dec  2 05:53:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v201: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 1 objects/s recovering
Dec  2 05:53:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Dec  2 05:53:29 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec  2 05:53:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Dec  2 05:53:29 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  2 05:53:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Dec  2 05:53:29 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Dec  2 05:53:29 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec  2 05:53:29 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Dec  2 05:53:29 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Dec  2 05:53:29 np0005542249 python3.9[106882]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:53:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:53:30 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  2 05:53:30 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 6.17 scrub starts
Dec  2 05:53:30 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 6.17 scrub ok
Dec  2 05:53:30 np0005542249 python3.9[107038]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:53:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v203: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Dec  2 05:53:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Dec  2 05:53:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec  2 05:53:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Dec  2 05:53:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  2 05:53:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Dec  2 05:53:31 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Dec  2 05:53:31 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec  2 05:53:31 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec  2 05:53:31 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec  2 05:53:31 np0005542249 python3.9[107191]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 05:53:32 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  2 05:53:32 np0005542249 python3.9[107345]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:53:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v205: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:53:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Dec  2 05:53:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec  2 05:53:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Dec  2 05:53:33 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec  2 05:53:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec  2 05:53:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Dec  2 05:53:33 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Dec  2 05:53:33 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec  2 05:53:33 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec  2 05:53:33 np0005542249 python3.9[107497]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:53:34 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 4.14 deep-scrub starts
Dec  2 05:53:34 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec  2 05:53:34 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 4.14 deep-scrub ok
Dec  2 05:53:34 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Dec  2 05:53:34 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Dec  2 05:53:34 np0005542249 python3.9[107647]: ansible-ansible.builtin.service_facts Invoked
Dec  2 05:53:34 np0005542249 network[107664]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  2 05:53:34 np0005542249 network[107665]: 'network-scripts' will be removed from distribution in near future.
Dec  2 05:53:34 np0005542249 network[107666]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  2 05:53:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:53:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v207: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:53:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Dec  2 05:53:35 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec  2 05:53:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Dec  2 05:53:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 05:53:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:53:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 05:53:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:53:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:53:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:53:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:53:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:53:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:53:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:53:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:53:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:53:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 05:53:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:53:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:53:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:53:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 05:53:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:53:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 05:53:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:53:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:53:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:53:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 05:53:36 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec  2 05:53:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Dec  2 05:53:36 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Dec  2 05:53:36 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Dec  2 05:53:36 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Dec  2 05:53:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v209: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:53:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Dec  2 05:53:37 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec  2 05:53:37 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec  2 05:53:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Dec  2 05:53:37 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec  2 05:53:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Dec  2 05:53:37 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Dec  2 05:53:38 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Dec  2 05:53:38 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Dec  2 05:53:39 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec  2 05:53:39 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec  2 05:53:39 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec  2 05:53:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v211: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:53:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Dec  2 05:53:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec  2 05:53:39 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec  2 05:53:39 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec  2 05:53:39 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Dec  2 05:53:39 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Dec  2 05:53:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:53:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Dec  2 05:53:40 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec  2 05:53:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec  2 05:53:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Dec  2 05:53:40 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Dec  2 05:53:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 95 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=66/67 n=5 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=95 pruub=12.658551216s) [2] r=-1 lpr=95 pi=[66,95)/1 crt=54'385 mlcod 0'0 active pruub 207.786834717s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:40 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 95 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=66/67 n=5 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=95 pruub=12.658497810s) [2] r=-1 lpr=95 pi=[66,95)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 207.786834717s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:40 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 95 pg[9.13( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=95) [2] r=0 lpr=95 pi=[66,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:40 np0005542249 python3.9[107926]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:53:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Dec  2 05:53:41 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec  2 05:53:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Dec  2 05:53:41 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Dec  2 05:53:41 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 96 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=66/67 n=5 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=96) [2]/[0] r=0 lpr=96 pi=[66,96)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:41 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 96 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=66/67 n=5 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=96) [2]/[0] r=0 lpr=96 pi=[66,96)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:41 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 96 pg[9.13( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=96) [2]/[0] r=-1 lpr=96 pi=[66,96)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:41 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 96 pg[9.13( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=96) [2]/[0] r=-1 lpr=96 pi=[66,96)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:41 np0005542249 python3.9[108076]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:53:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v214: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:53:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Dec  2 05:53:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec  2 05:53:41 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec  2 05:53:41 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec  2 05:53:41 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Dec  2 05:53:41 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Dec  2 05:53:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Dec  2 05:53:42 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec  2 05:53:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Dec  2 05:53:42 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Dec  2 05:53:42 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec  2 05:53:42 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 97 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=96/97 n=5 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=96) [2]/[0] async=[2] r=0 lpr=96 pi=[66,96)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:42 np0005542249 python3.9[108230]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:53:42 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Dec  2 05:53:42 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Dec  2 05:53:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Dec  2 05:53:43 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec  2 05:53:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Dec  2 05:53:43 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Dec  2 05:53:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v217: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Dec  2 05:53:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Dec  2 05:53:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec  2 05:53:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 98 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=96/66 les/c/f=97/67/0 sis=98) [2] r=0 lpr=98 pi=[66,98)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:43 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 98 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=96/66 les/c/f=97/67/0 sis=98) [2] r=0 lpr=98 pi=[66,98)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 98 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=96/97 n=5 ec=58/48 lis/c=96/66 les/c/f=97/67/0 sis=98 pruub=14.982657433s) [2] async=[2] r=-1 lpr=98 pi=[66,98)/1 crt=54'385 mlcod 54'385 active pruub 213.163619995s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:43 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 98 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=96/97 n=5 ec=58/48 lis/c=96/66 les/c/f=97/67/0 sis=98 pruub=14.982576370s) [2] r=-1 lpr=98 pi=[66,98)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 213.163619995s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:43 np0005542249 python3.9[108388]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 05:53:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Dec  2 05:53:44 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec  2 05:53:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Dec  2 05:53:44 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Dec  2 05:53:44 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec  2 05:53:44 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 99 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=98/99 n=5 ec=58/48 lis/c=96/66 les/c/f=97/67/0 sis=98) [2] r=0 lpr=98 pi=[66,98)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 99 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=66/67 n=5 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=99 pruub=8.539644241s) [1] r=-1 lpr=99 pi=[66,99)/1 crt=54'385 mlcod 0'0 active pruub 207.784072876s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 99 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=66/67 n=5 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=99 pruub=8.539479256s) [1] r=-1 lpr=99 pi=[66,99)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 207.784072876s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=99) [1] r=0 lpr=99 pi=[66,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:44 np0005542249 python3.9[108472]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 05:53:44 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Dec  2 05:53:44 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Dec  2 05:53:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:53:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Dec  2 05:53:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Dec  2 05:53:44 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Dec  2 05:53:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 100 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=66/67 n=5 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=100) [1]/[0] r=0 lpr=100 pi=[66,100)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:44 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 100 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=66/67 n=5 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=100) [1]/[0] r=0 lpr=100 pi=[66,100)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[66,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:44 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[66,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v220: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Dec  2 05:53:45 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec  2 05:53:45 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec  2 05:53:45 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec  2 05:53:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Dec  2 05:53:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Dec  2 05:53:45 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Dec  2 05:53:46 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec  2 05:53:46 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec  2 05:53:46 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 101 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=100/101 n=5 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[66,100)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:46 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Dec  2 05:53:46 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Dec  2 05:53:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v222: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:53:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Dec  2 05:53:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Dec  2 05:53:47 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Dec  2 05:53:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 102 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=100/101 n=5 ec=58/48 lis/c=100/66 les/c/f=101/67/0 sis=102 pruub=15.306921005s) [1] async=[1] r=-1 lpr=102 pi=[66,102)/1 crt=54'385 mlcod 54'385 active pruub 217.536331177s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:47 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 102 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=100/101 n=5 ec=58/48 lis/c=100/66 les/c/f=101/67/0 sis=102 pruub=15.306427956s) [1] r=-1 lpr=102 pi=[66,102)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 217.536331177s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:47 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 102 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=100/66 les/c/f=101/67/0 sis=102) [1] r=0 lpr=102 pi=[66,102)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:47 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 102 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=100/66 les/c/f=101/67/0 sis=102) [1] r=0 lpr=102 pi=[66,102)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:47 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 2.f scrub starts
Dec  2 05:53:47 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 2.f scrub ok
Dec  2 05:53:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Dec  2 05:53:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Dec  2 05:53:48 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Dec  2 05:53:48 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 103 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=102/103 n=5 ec=58/48 lis/c=100/66 les/c/f=101/67/0 sis=102) [1] r=0 lpr=102 pi=[66,102)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:48 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec  2 05:53:48 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec  2 05:53:48 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec  2 05:53:48 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec  2 05:53:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v225: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:53:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Dec  2 05:53:49 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec  2 05:53:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Dec  2 05:53:49 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec  2 05:53:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Dec  2 05:53:49 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec  2 05:53:49 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Dec  2 05:53:49 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec  2 05:53:49 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec  2 05:53:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:53:49 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Dec  2 05:53:49 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Dec  2 05:53:50 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec  2 05:53:50 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Dec  2 05:53:50 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Dec  2 05:53:50 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec  2 05:53:50 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec  2 05:53:50 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 104 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=75/76 n=5 ec=58/48 lis/c=75/75 les/c/f=76/76/0 sis=104 pruub=12.062348366s) [0] r=-1 lpr=104 pi=[75,104)/1 crt=54'385 mlcod 0'0 active pruub 206.685897827s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:50 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 104 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=75/76 n=5 ec=58/48 lis/c=75/75 les/c/f=76/76/0 sis=104 pruub=12.062284470s) [0] r=-1 lpr=104 pi=[75,104)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 206.685897827s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:50 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 104 pg[9.16( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=75/75 les/c/f=76/76/0 sis=104) [0] r=0 lpr=104 pi=[75,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v227: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:53:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Dec  2 05:53:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec  2 05:53:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Dec  2 05:53:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec  2 05:53:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Dec  2 05:53:51 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Dec  2 05:53:51 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec  2 05:53:51 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 105 pg[9.16( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=75/75 les/c/f=76/76/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[75,105)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:51 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 105 pg[9.16( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=75/75 les/c/f=76/76/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[75,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:51 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 105 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=75/76 n=5 ec=58/48 lis/c=75/75 les/c/f=76/76/0 sis=105) [0]/[2] r=0 lpr=105 pi=[75,105)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:51 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 105 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=75/76 n=5 ec=58/48 lis/c=75/75 les/c/f=76/76/0 sis=105) [0]/[2] r=0 lpr=105 pi=[75,105)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:51 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 2.b deep-scrub starts
Dec  2 05:53:51 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 2.b deep-scrub ok
Dec  2 05:53:51 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Dec  2 05:53:51 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Dec  2 05:53:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Dec  2 05:53:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Dec  2 05:53:52 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Dec  2 05:53:52 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec  2 05:53:52 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Dec  2 05:53:52 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 106 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=105/106 n=5 ec=58/48 lis/c=75/75 les/c/f=76/76/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[75,105)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:52 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Dec  2 05:53:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v230: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Dec  2 05:53:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Dec  2 05:53:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec  2 05:53:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Dec  2 05:53:53 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec  2 05:53:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec  2 05:53:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Dec  2 05:53:53 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Dec  2 05:53:53 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 107 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=105/75 les/c/f=106/76/0 sis=107) [0] r=0 lpr=107 pi=[75,107)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:53 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 107 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=105/75 les/c/f=106/76/0 sis=107) [0] r=0 lpr=107 pi=[75,107)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:53 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 107 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=105/106 n=5 ec=58/48 lis/c=105/75 les/c/f=106/76/0 sis=107 pruub=15.214763641s) [0] async=[0] r=-1 lpr=107 pi=[75,107)/1 crt=54'385 mlcod 54'385 active pruub 212.299926758s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:53 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 107 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=105/106 n=5 ec=58/48 lis/c=105/75 les/c/f=106/76/0 sis=107 pruub=15.214703560s) [0] r=-1 lpr=107 pi=[75,107)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 212.299926758s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:53 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Dec  2 05:53:53 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Dec  2 05:53:53 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 6.f deep-scrub starts
Dec  2 05:53:53 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 6.f deep-scrub ok
Dec  2 05:53:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Dec  2 05:53:54 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec  2 05:53:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Dec  2 05:53:54 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Dec  2 05:53:54 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 108 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=107/108 n=5 ec=58/48 lis/c=105/75 les/c/f=106/76/0 sis=107) [0] r=0 lpr=107 pi=[75,107)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:54 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Dec  2 05:53:54 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Dec  2 05:53:54 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Dec  2 05:53:54 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Dec  2 05:53:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:53:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v233: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Dec  2 05:53:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Dec  2 05:53:55 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec  2 05:53:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Dec  2 05:53:55 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec  2 05:53:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Dec  2 05:53:55 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec  2 05:53:55 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Dec  2 05:53:55 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Dec  2 05:53:55 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Dec  2 05:53:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:53:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:53:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:53:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:53:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:53:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:53:56 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec  2 05:53:56 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 109 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=66/67 n=5 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=109 pruub=12.225300789s) [2] r=-1 lpr=109 pi=[66,109)/1 crt=54'385 mlcod 0'0 active pruub 223.787322998s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:56 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 109 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=66/67 n=5 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=109 pruub=12.225245476s) [2] r=-1 lpr=109 pi=[66,109)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 223.787322998s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:56 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 109 pg[9.19( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=109) [2] r=0 lpr=109 pi=[66,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v235: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:53:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Dec  2 05:53:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec  2 05:53:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Dec  2 05:53:57 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec  2 05:53:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec  2 05:53:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Dec  2 05:53:57 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Dec  2 05:53:57 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 110 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=66/67 n=5 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=110) [2]/[0] r=0 lpr=110 pi=[66,110)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:57 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 110 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=66/67 n=5 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=110) [2]/[0] r=0 lpr=110 pi=[66,110)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:57 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 110 pg[9.19( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[66,110)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:57 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 110 pg[9.19( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[66,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:58 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec  2 05:53:58 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec  2 05:53:58 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Dec  2 05:53:58 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Dec  2 05:53:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Dec  2 05:53:58 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec  2 05:53:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Dec  2 05:53:58 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Dec  2 05:53:58 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 111 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=110/111 n=5 ec=58/48 lis/c=66/66 les/c/f=67/67/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[66,110)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:53:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v238: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 0 objects/s recovering
Dec  2 05:53:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Dec  2 05:53:59 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec  2 05:53:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Dec  2 05:53:59 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec  2 05:53:59 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec  2 05:53:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Dec  2 05:53:59 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Dec  2 05:53:59 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 112 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=110/66 les/c/f=111/67/0 sis=112) [2] r=0 lpr=112 pi=[66,112)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:59 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 112 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=110/66 les/c/f=111/67/0 sis=112) [2] r=0 lpr=112 pi=[66,112)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:53:59 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 112 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=110/111 n=5 ec=58/48 lis/c=110/66 les/c/f=111/67/0 sis=112 pruub=15.159121513s) [2] async=[2] r=-1 lpr=112 pi=[66,112)/1 crt=54'385 mlcod 54'385 active pruub 229.970199585s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:53:59 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 112 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=110/111 n=5 ec=58/48 lis/c=110/66 les/c/f=111/67/0 sis=112 pruub=15.159024239s) [2] r=-1 lpr=112 pi=[66,112)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 229.970199585s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:53:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:54:00 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec  2 05:54:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Dec  2 05:54:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Dec  2 05:54:00 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Dec  2 05:54:00 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 113 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=112/113 n=5 ec=58/48 lis/c=110/66 les/c/f=111/67/0 sis=112) [2] r=0 lpr=112 pi=[66,112)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:54:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v241: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Dec  2 05:54:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Dec  2 05:54:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec  2 05:54:01 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Dec  2 05:54:01 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Dec  2 05:54:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Dec  2 05:54:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec  2 05:54:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Dec  2 05:54:01 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Dec  2 05:54:01 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 114 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=88/89 n=5 ec=58/48 lis/c=88/88 les/c/f=89/89/0 sis=114 pruub=10.374859810s) [0] r=-1 lpr=114 pi=[88,114)/1 crt=54'385 mlcod 0'0 active pruub 215.845153809s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:54:01 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 114 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=88/89 n=5 ec=58/48 lis/c=88/88 les/c/f=89/89/0 sis=114 pruub=10.374756813s) [0] r=-1 lpr=114 pi=[88,114)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 215.845153809s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:54:01 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec  2 05:54:01 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 114 pg[9.1c( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=88/88 les/c/f=89/89/0 sis=114) [0] r=0 lpr=114 pi=[88,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:54:01 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 4.a scrub starts
Dec  2 05:54:01 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 4.a scrub ok
Dec  2 05:54:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Dec  2 05:54:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Dec  2 05:54:02 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Dec  2 05:54:02 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 115 pg[9.1c( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=88/88 les/c/f=89/89/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[88,115)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:54:02 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 115 pg[9.1c( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=88/88 les/c/f=89/89/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[88,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:54:02 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec  2 05:54:02 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 115 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=88/89 n=5 ec=58/48 lis/c=88/88 les/c/f=89/89/0 sis=115) [0]/[2] r=0 lpr=115 pi=[88,115)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:54:02 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 115 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=88/89 n=5 ec=58/48 lis/c=88/88 les/c/f=89/89/0 sis=115) [0]/[2] r=0 lpr=115 pi=[88,115)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:54:02 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 4.e scrub starts
Dec  2 05:54:02 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 4.e scrub ok
Dec  2 05:54:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v244: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:54:03 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Dec  2 05:54:03 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Dec  2 05:54:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Dec  2 05:54:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Dec  2 05:54:03 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Dec  2 05:54:04 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 116 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=115/116 n=5 ec=58/48 lis/c=88/88 les/c/f=89/89/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[88,115)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:54:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Dec  2 05:54:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Dec  2 05:54:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v246: 321 pgs: 1 activating+remapped, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 7/215 objects misplaced (3.256%); 25 B/s, 1 objects/s recovering
Dec  2 05:54:05 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Dec  2 05:54:05 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 117 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=115/116 n=5 ec=58/48 lis/c=115/88 les/c/f=116/89/0 sis=117 pruub=15.336836815s) [0] async=[0] r=-1 lpr=117 pi=[88,117)/1 crt=54'385 mlcod 54'385 active pruub 224.203582764s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:54:05 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 117 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=115/116 n=5 ec=58/48 lis/c=115/88 les/c/f=116/89/0 sis=117 pruub=15.336023331s) [0] r=-1 lpr=117 pi=[88,117)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 224.203582764s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:54:05 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 117 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=115/88 les/c/f=116/89/0 sis=117) [0] r=0 lpr=117 pi=[88,117)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:54:05 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 117 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=115/88 les/c/f=116/89/0 sis=117) [0] r=0 lpr=117 pi=[88,117)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:54:05 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Dec  2 05:54:05 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Dec  2 05:54:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Dec  2 05:54:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Dec  2 05:54:06 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Dec  2 05:54:06 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 118 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=117/118 n=5 ec=58/48 lis/c=115/88 les/c/f=116/89/0 sis=117) [0] r=0 lpr=117 pi=[88,117)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:54:06 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec  2 05:54:06 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Dec  2 05:54:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v249: 321 pgs: 1 activating+remapped, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 235 B/s wr, 7 op/s; 7/215 objects misplaced (3.256%); 50 B/s, 3 objects/s recovering
Dec  2 05:54:07 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec  2 05:54:07 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec  2 05:54:08 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 4.9 deep-scrub starts
Dec  2 05:54:08 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 4.9 deep-scrub ok
Dec  2 05:54:08 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Dec  2 05:54:08 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Dec  2 05:54:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v250: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 5 op/s; 54 B/s, 3 objects/s recovering
Dec  2 05:54:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Dec  2 05:54:09 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec  2 05:54:09 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 4.4 deep-scrub starts
Dec  2 05:54:09 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 4.4 deep-scrub ok
Dec  2 05:54:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Dec  2 05:54:09 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec  2 05:54:09 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec  2 05:54:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Dec  2 05:54:09 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Dec  2 05:54:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:54:10 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec  2 05:54:10 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec  2 05:54:10 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec  2 05:54:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v252: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec  2 05:54:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Dec  2 05:54:11 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec  2 05:54:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Dec  2 05:54:12 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec  2 05:54:12 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec  2 05:54:12 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Dec  2 05:54:12 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Dec  2 05:54:12 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Dec  2 05:54:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v253: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Dec  2 05:54:13 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Dec  2 05:54:14 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec  2 05:54:14 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 7.3 deep-scrub starts
Dec  2 05:54:14 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Dec  2 05:54:14 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Dec  2 05:54:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Dec  2 05:54:14 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec  2 05:54:14 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec  2 05:54:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v254: 321 pgs: 2 active+clean+scrubbing, 319 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Dec  2 05:54:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Dec  2 05:54:15 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec  2 05:54:15 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Dec  2 05:54:15 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec  2 05:54:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Dec  2 05:54:15 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Dec  2 05:54:15 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 7.3 deep-scrub ok
Dec  2 05:54:15 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec  2 05:54:15 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec  2 05:54:15 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Dec  2 05:54:15 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec  2 05:54:15 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 7.1f deep-scrub starts
Dec  2 05:54:15 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 7.1f deep-scrub ok
Dec  2 05:54:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Dec  2 05:54:16 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec  2 05:54:16 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec  2 05:54:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Dec  2 05:54:16 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Dec  2 05:54:16 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 120 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=75/76 n=5 ec=58/48 lis/c=75/75 les/c/f=76/76/0 sis=120 pruub=10.742986679s) [0] r=-1 lpr=120 pi=[75,120)/1 crt=54'385 mlcod 0'0 active pruub 230.686599731s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:54:16 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 121 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=75/76 n=5 ec=58/48 lis/c=75/75 les/c/f=76/76/0 sis=120 pruub=10.742918015s) [0] r=-1 lpr=120 pi=[75,120)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 230.686599731s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:54:16 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 121 pg[9.1e( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=75/75 les/c/f=76/76/0 sis=120) [0] r=0 lpr=121 pi=[75,120)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:54:16 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Dec  2 05:54:16 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Dec  2 05:54:16 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec  2 05:54:16 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec  2 05:54:16 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec  2 05:54:16 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec  2 05:54:16 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec  2 05:54:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v257: 321 pgs: 2 active+clean+scrubbing, 319 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:54:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  2 05:54:17 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  2 05:54:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Dec  2 05:54:17 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Dec  2 05:54:17 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  2 05:54:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Dec  2 05:54:17 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Dec  2 05:54:17 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Dec  2 05:54:17 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 122 pg[9.1e( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=75/75 les/c/f=76/76/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[75,122)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:54:17 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 122 pg[9.1e( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=75/75 les/c/f=76/76/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[75,122)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:54:17 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 122 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=76/77 n=5 ec=58/48 lis/c=76/76 les/c/f=77/77/0 sis=122 pruub=10.729275703s) [1] r=-1 lpr=122 pi=[76,122)/1 crt=54'385 mlcod 0'0 active pruub 231.687866211s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:54:17 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 122 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=76/77 n=5 ec=58/48 lis/c=76/76 les/c/f=77/77/0 sis=122 pruub=10.729196548s) [1] r=-1 lpr=122 pi=[76,122)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 231.687866211s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:54:17 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 122 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=75/76 n=5 ec=58/48 lis/c=75/75 les/c/f=76/76/0 sis=122) [0]/[2] r=0 lpr=122 pi=[75,122)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:54:17 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 122 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=76/76 les/c/f=77/77/0 sis=122) [1] r=0 lpr=122 pi=[76,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:54:17 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 122 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=75/76 n=5 ec=58/48 lis/c=75/75 les/c/f=76/76/0 sis=122) [0]/[2] r=0 lpr=122 pi=[75,122)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:54:17 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  2 05:54:17 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  2 05:54:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Dec  2 05:54:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Dec  2 05:54:18 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Dec  2 05:54:18 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 123 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=76/77 n=5 ec=58/48 lis/c=76/76 les/c/f=77/77/0 sis=123) [1]/[2] r=0 lpr=123 pi=[76,123)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:54:18 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 123 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=76/77 n=5 ec=58/48 lis/c=76/76 les/c/f=77/77/0 sis=123) [1]/[2] r=0 lpr=123 pi=[76,123)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  2 05:54:18 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 123 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=76/76 les/c/f=77/77/0 sis=123) [1]/[2] r=-1 lpr=123 pi=[76,123)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:54:18 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 123 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=76/76 les/c/f=77/77/0 sis=123) [1]/[2] r=-1 lpr=123 pi=[76,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  2 05:54:18 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Dec  2 05:54:18 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Dec  2 05:54:18 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 123 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=122/123 n=5 ec=58/48 lis/c=75/75 les/c/f=76/76/0 sis=122) [0]/[2] async=[0] r=0 lpr=122 pi=[75,122)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:54:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v260: 321 pgs: 2 active+clean+scrubbing, 319 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:54:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Dec  2 05:54:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Dec  2 05:54:19 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Dec  2 05:54:19 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 124 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=122/123 n=5 ec=58/48 lis/c=122/75 les/c/f=123/76/0 sis=124 pruub=15.216670036s) [0] async=[0] r=-1 lpr=124 pi=[75,124)/1 crt=54'385 mlcod 54'385 active pruub 238.176620483s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:54:19 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 124 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=122/123 n=5 ec=58/48 lis/c=122/75 les/c/f=123/76/0 sis=124 pruub=15.216349602s) [0] r=-1 lpr=124 pi=[75,124)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 238.176620483s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:54:19 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 124 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=122/75 les/c/f=123/76/0 sis=124) [0] r=0 lpr=124 pi=[75,124)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:54:19 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 124 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=122/75 les/c/f=123/76/0 sis=124) [0] r=0 lpr=124 pi=[75,124)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:54:19 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 124 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=123/124 n=5 ec=58/48 lis/c=76/76 les/c/f=77/77/0 sis=123) [1]/[2] async=[1] r=0 lpr=123 pi=[76,123)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:54:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:54:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Dec  2 05:54:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Dec  2 05:54:19 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Dec  2 05:54:19 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 125 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=123/76 les/c/f=124/77/0 sis=125) [1] r=0 lpr=125 pi=[76,125)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:54:19 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 125 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=58/48 lis/c=123/76 les/c/f=124/77/0 sis=125) [1] r=0 lpr=125 pi=[76,125)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  2 05:54:19 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 125 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=123/124 n=5 ec=58/48 lis/c=123/76 les/c/f=124/77/0 sis=125 pruub=15.370412827s) [1] async=[1] r=-1 lpr=125 pi=[76,125)/1 crt=54'385 mlcod 54'385 active pruub 239.032394409s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  2 05:54:19 np0005542249 ceph-osd[91055]: osd.2 pg_epoch: 125 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=123/124 n=5 ec=58/48 lis/c=123/76 les/c/f=124/77/0 sis=125 pruub=15.370059967s) [1] r=-1 lpr=125 pi=[76,125)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 239.032394409s@ mbc={}] state<Start>: transitioning to Stray
Dec  2 05:54:19 np0005542249 ceph-osd[88961]: osd.0 pg_epoch: 125 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=124/125 n=5 ec=58/48 lis/c=122/75 les/c/f=123/76/0 sis=124) [0] r=0 lpr=124 pi=[75,124)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:54:20 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 3.a scrub starts
Dec  2 05:54:20 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 3.a scrub ok
Dec  2 05:54:20 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Dec  2 05:54:20 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Dec  2 05:54:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Dec  2 05:54:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Dec  2 05:54:20 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Dec  2 05:54:20 np0005542249 ceph-osd[89966]: osd.1 pg_epoch: 126 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=125/126 n=5 ec=58/48 lis/c=123/76 les/c/f=124/77/0 sis=125) [1] r=0 lpr=125 pi=[76,125)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  2 05:54:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v264: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 28 B/s, 1 objects/s recovering
Dec  2 05:54:21 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Dec  2 05:54:21 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Dec  2 05:54:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:54:22 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:54:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 05:54:22 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:54:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 05:54:22 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:54:22 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 330bab61-f9fc-4270-9a00-638c95e00d64 does not exist
Dec  2 05:54:22 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 2c82fc65-6af4-4e82-afc9-9790aa19afff does not exist
Dec  2 05:54:22 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev e91dde96-ffed-4496-8870-533949131c87 does not exist
Dec  2 05:54:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 05:54:22 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 05:54:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 05:54:22 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 05:54:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:54:22 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:54:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v265: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 2 objects/s recovering
Dec  2 05:54:23 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Dec  2 05:54:23 np0005542249 podman[108886]: 2025-12-02 10:54:23.309453624 +0000 UTC m=+0.033521029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:54:23 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Dec  2 05:54:23 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:54:23 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:54:23 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 05:54:23 np0005542249 podman[108886]: 2025-12-02 10:54:23.447982316 +0000 UTC m=+0.172049701 container create b8ec8d1ff50ff225eb3a0e96ba7d88ede67bdcee721a0f634b5bd52e3eaa9fa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:54:23 np0005542249 systemd[1]: Started libpod-conmon-b8ec8d1ff50ff225eb3a0e96ba7d88ede67bdcee721a0f634b5bd52e3eaa9fa4.scope.
Dec  2 05:54:23 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:54:23 np0005542249 podman[108886]: 2025-12-02 10:54:23.835953389 +0000 UTC m=+0.560020884 container init b8ec8d1ff50ff225eb3a0e96ba7d88ede67bdcee721a0f634b5bd52e3eaa9fa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:54:23 np0005542249 podman[108886]: 2025-12-02 10:54:23.842848463 +0000 UTC m=+0.566915878 container start b8ec8d1ff50ff225eb3a0e96ba7d88ede67bdcee721a0f634b5bd52e3eaa9fa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_diffie, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:54:23 np0005542249 podman[108886]: 2025-12-02 10:54:23.849108191 +0000 UTC m=+0.573175596 container attach b8ec8d1ff50ff225eb3a0e96ba7d88ede67bdcee721a0f634b5bd52e3eaa9fa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  2 05:54:23 np0005542249 optimistic_diffie[108903]: 167 167
Dec  2 05:54:23 np0005542249 systemd[1]: libpod-b8ec8d1ff50ff225eb3a0e96ba7d88ede67bdcee721a0f634b5bd52e3eaa9fa4.scope: Deactivated successfully.
Dec  2 05:54:23 np0005542249 conmon[108903]: conmon b8ec8d1ff50ff225eb3a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8ec8d1ff50ff225eb3a0e96ba7d88ede67bdcee721a0f634b5bd52e3eaa9fa4.scope/container/memory.events
Dec  2 05:54:23 np0005542249 podman[108886]: 2025-12-02 10:54:23.853805907 +0000 UTC m=+0.577873332 container died b8ec8d1ff50ff225eb3a0e96ba7d88ede67bdcee721a0f634b5bd52e3eaa9fa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:54:24 np0005542249 systemd[1]: var-lib-containers-storage-overlay-3bef07b9bdd64fa9f1b475dffe1cda832f6fe57d6a80dc5e0973dd605ce54e92-merged.mount: Deactivated successfully.
Dec  2 05:54:24 np0005542249 podman[108886]: 2025-12-02 10:54:24.049773217 +0000 UTC m=+0.773840602 container remove b8ec8d1ff50ff225eb3a0e96ba7d88ede67bdcee721a0f634b5bd52e3eaa9fa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:54:24 np0005542249 systemd[1]: libpod-conmon-b8ec8d1ff50ff225eb3a0e96ba7d88ede67bdcee721a0f634b5bd52e3eaa9fa4.scope: Deactivated successfully.
Dec  2 05:54:24 np0005542249 podman[108929]: 2025-12-02 10:54:24.287702241 +0000 UTC m=+0.092293884 container create 30e0e2b0aa5e954e84c1c68c8da9ba09464ed64887432c236af492b1a53af7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lalande, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:54:24 np0005542249 podman[108929]: 2025-12-02 10:54:24.223748247 +0000 UTC m=+0.028339970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:54:24 np0005542249 systemd[1]: Started libpod-conmon-30e0e2b0aa5e954e84c1c68c8da9ba09464ed64887432c236af492b1a53af7af.scope.
Dec  2 05:54:24 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:54:24 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c5956b8a25e86b524ff764be8777e2c5116075af0289d6be1b500164926ef3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:54:24 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c5956b8a25e86b524ff764be8777e2c5116075af0289d6be1b500164926ef3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:54:24 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c5956b8a25e86b524ff764be8777e2c5116075af0289d6be1b500164926ef3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:54:24 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c5956b8a25e86b524ff764be8777e2c5116075af0289d6be1b500164926ef3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:54:24 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c5956b8a25e86b524ff764be8777e2c5116075af0289d6be1b500164926ef3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:54:24 np0005542249 podman[108929]: 2025-12-02 10:54:24.409804112 +0000 UTC m=+0.214395765 container init 30e0e2b0aa5e954e84c1c68c8da9ba09464ed64887432c236af492b1a53af7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lalande, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:54:24 np0005542249 podman[108929]: 2025-12-02 10:54:24.419874321 +0000 UTC m=+0.224465964 container start 30e0e2b0aa5e954e84c1c68c8da9ba09464ed64887432c236af492b1a53af7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lalande, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:54:24 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Dec  2 05:54:24 np0005542249 podman[108929]: 2025-12-02 10:54:24.439203609 +0000 UTC m=+0.243795242 container attach 30e0e2b0aa5e954e84c1c68c8da9ba09464ed64887432c236af492b1a53af7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:54:24 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Dec  2 05:54:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:54:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v266: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec  2 05:54:25 np0005542249 dazzling_lalande[108946]: --> passed data devices: 0 physical, 3 LVM
Dec  2 05:54:25 np0005542249 dazzling_lalande[108946]: --> relative data size: 1.0
Dec  2 05:54:25 np0005542249 dazzling_lalande[108946]: --> All data devices are unavailable
Dec  2 05:54:25 np0005542249 systemd[1]: libpod-30e0e2b0aa5e954e84c1c68c8da9ba09464ed64887432c236af492b1a53af7af.scope: Deactivated successfully.
Dec  2 05:54:25 np0005542249 podman[108929]: 2025-12-02 10:54:25.455224437 +0000 UTC m=+1.259816170 container died 30e0e2b0aa5e954e84c1c68c8da9ba09464ed64887432c236af492b1a53af7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lalande, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  2 05:54:25 np0005542249 systemd[1]: var-lib-containers-storage-overlay-49c5956b8a25e86b524ff764be8777e2c5116075af0289d6be1b500164926ef3-merged.mount: Deactivated successfully.
Dec  2 05:54:25 np0005542249 podman[108929]: 2025-12-02 10:54:25.514080413 +0000 UTC m=+1.318672046 container remove 30e0e2b0aa5e954e84c1c68c8da9ba09464ed64887432c236af492b1a53af7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lalande, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:54:25 np0005542249 systemd[1]: libpod-conmon-30e0e2b0aa5e954e84c1c68c8da9ba09464ed64887432c236af492b1a53af7af.scope: Deactivated successfully.
Dec  2 05:54:25 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Dec  2 05:54:25 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Dec  2 05:54:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_10:54:26
Dec  2 05:54:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 05:54:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 05:54:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['backups', 'volumes', '.rgw.root', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'vms']
Dec  2 05:54:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 05:54:26 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Dec  2 05:54:26 np0005542249 podman[109127]: 2025-12-02 10:54:26.265671528 +0000 UTC m=+0.043660401 container create 4e295e10ee007f835c4af05bdba2eba70e927e42259d3d752d5842e44f6a1b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pascal, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:54:26 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Dec  2 05:54:26 np0005542249 systemd[1]: Started libpod-conmon-4e295e10ee007f835c4af05bdba2eba70e927e42259d3d752d5842e44f6a1b00.scope.
Dec  2 05:54:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:54:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:54:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:54:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:54:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:54:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:54:26 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:54:26 np0005542249 podman[109127]: 2025-12-02 10:54:26.244584253 +0000 UTC m=+0.022573146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:54:26 np0005542249 podman[109127]: 2025-12-02 10:54:26.348329912 +0000 UTC m=+0.126318805 container init 4e295e10ee007f835c4af05bdba2eba70e927e42259d3d752d5842e44f6a1b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pascal, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  2 05:54:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 05:54:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 05:54:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 05:54:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 05:54:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 05:54:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 05:54:26 np0005542249 podman[109127]: 2025-12-02 10:54:26.356220574 +0000 UTC m=+0.134209457 container start 4e295e10ee007f835c4af05bdba2eba70e927e42259d3d752d5842e44f6a1b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pascal, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:54:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 05:54:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 05:54:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 05:54:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 05:54:26 np0005542249 podman[109127]: 2025-12-02 10:54:26.359691576 +0000 UTC m=+0.137680489 container attach 4e295e10ee007f835c4af05bdba2eba70e927e42259d3d752d5842e44f6a1b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pascal, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  2 05:54:26 np0005542249 sad_pascal[109143]: 167 167
Dec  2 05:54:26 np0005542249 systemd[1]: libpod-4e295e10ee007f835c4af05bdba2eba70e927e42259d3d752d5842e44f6a1b00.scope: Deactivated successfully.
Dec  2 05:54:26 np0005542249 podman[109127]: 2025-12-02 10:54:26.362595594 +0000 UTC m=+0.140584467 container died 4e295e10ee007f835c4af05bdba2eba70e927e42259d3d752d5842e44f6a1b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pascal, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  2 05:54:26 np0005542249 systemd[1]: var-lib-containers-storage-overlay-e8c7fa5798374ef8f8f5aa11105080a9fd173edd8541eab8c7e95826d84f91a8-merged.mount: Deactivated successfully.
Dec  2 05:54:26 np0005542249 podman[109127]: 2025-12-02 10:54:26.394455868 +0000 UTC m=+0.172444741 container remove 4e295e10ee007f835c4af05bdba2eba70e927e42259d3d752d5842e44f6a1b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:54:26 np0005542249 systemd[1]: libpod-conmon-4e295e10ee007f835c4af05bdba2eba70e927e42259d3d752d5842e44f6a1b00.scope: Deactivated successfully.
Dec  2 05:54:26 np0005542249 podman[109166]: 2025-12-02 10:54:26.56923279 +0000 UTC m=+0.049263661 container create 51d296df1b22eac593ebbb8003a22e2ae64664dda44ad84a2c2a87a1bbe8619a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chandrasekhar, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  2 05:54:26 np0005542249 systemd[1]: Started libpod-conmon-51d296df1b22eac593ebbb8003a22e2ae64664dda44ad84a2c2a87a1bbe8619a.scope.
Dec  2 05:54:26 np0005542249 podman[109166]: 2025-12-02 10:54:26.548447703 +0000 UTC m=+0.028478574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:54:26 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:54:26 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4799e5fafe6b835b3bb092e8191ffb9c07d242e3075efe6e8be5fb3ad6ad18f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:54:26 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4799e5fafe6b835b3bb092e8191ffb9c07d242e3075efe6e8be5fb3ad6ad18f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:54:26 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4799e5fafe6b835b3bb092e8191ffb9c07d242e3075efe6e8be5fb3ad6ad18f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:54:26 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4799e5fafe6b835b3bb092e8191ffb9c07d242e3075efe6e8be5fb3ad6ad18f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:54:26 np0005542249 podman[109166]: 2025-12-02 10:54:26.674717165 +0000 UTC m=+0.154748056 container init 51d296df1b22eac593ebbb8003a22e2ae64664dda44ad84a2c2a87a1bbe8619a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:54:26 np0005542249 podman[109166]: 2025-12-02 10:54:26.686460011 +0000 UTC m=+0.166490852 container start 51d296df1b22eac593ebbb8003a22e2ae64664dda44ad84a2c2a87a1bbe8619a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chandrasekhar, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:54:26 np0005542249 podman[109166]: 2025-12-02 10:54:26.691824384 +0000 UTC m=+0.171855305 container attach 51d296df1b22eac593ebbb8003a22e2ae64664dda44ad84a2c2a87a1bbe8619a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chandrasekhar, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  2 05:54:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v267: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]: {
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:    "0": [
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:        {
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "devices": [
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "/dev/loop3"
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            ],
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "lv_name": "ceph_lv0",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "lv_size": "21470642176",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "name": "ceph_lv0",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "tags": {
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.cluster_name": "ceph",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.crush_device_class": "",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.encrypted": "0",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.osd_id": "0",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.type": "block",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.vdo": "0"
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            },
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "type": "block",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "vg_name": "ceph_vg0"
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:        }
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:    ],
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:    "1": [
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:        {
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "devices": [
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "/dev/loop4"
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            ],
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "lv_name": "ceph_lv1",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "lv_size": "21470642176",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "name": "ceph_lv1",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "tags": {
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.cluster_name": "ceph",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.crush_device_class": "",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.encrypted": "0",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.osd_id": "1",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.type": "block",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.vdo": "0"
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            },
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "type": "block",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "vg_name": "ceph_vg1"
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:        }
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:    ],
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:    "2": [
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:        {
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "devices": [
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "/dev/loop5"
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            ],
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "lv_name": "ceph_lv2",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "lv_size": "21470642176",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "name": "ceph_lv2",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "tags": {
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.cluster_name": "ceph",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.crush_device_class": "",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.encrypted": "0",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.osd_id": "2",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.type": "block",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:                "ceph.vdo": "0"
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            },
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "type": "block",
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:            "vg_name": "ceph_vg2"
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:        }
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]:    ]
Dec  2 05:54:27 np0005542249 charming_chandrasekhar[109183]: }
Dec  2 05:54:27 np0005542249 systemd[1]: libpod-51d296df1b22eac593ebbb8003a22e2ae64664dda44ad84a2c2a87a1bbe8619a.scope: Deactivated successfully.
Dec  2 05:54:27 np0005542249 podman[109166]: 2025-12-02 10:54:27.484135 +0000 UTC m=+0.964165881 container died 51d296df1b22eac593ebbb8003a22e2ae64664dda44ad84a2c2a87a1bbe8619a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  2 05:54:27 np0005542249 systemd[1]: var-lib-containers-storage-overlay-4799e5fafe6b835b3bb092e8191ffb9c07d242e3075efe6e8be5fb3ad6ad18f6-merged.mount: Deactivated successfully.
Dec  2 05:54:27 np0005542249 podman[109166]: 2025-12-02 10:54:27.947575655 +0000 UTC m=+1.427606506 container remove 51d296df1b22eac593ebbb8003a22e2ae64664dda44ad84a2c2a87a1bbe8619a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chandrasekhar, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec  2 05:54:27 np0005542249 systemd[1]: libpod-conmon-51d296df1b22eac593ebbb8003a22e2ae64664dda44ad84a2c2a87a1bbe8619a.scope: Deactivated successfully.
Dec  2 05:54:28 np0005542249 podman[109347]: 2025-12-02 10:54:28.624070137 +0000 UTC m=+0.058781755 container create 1e8af35d17a7e8f1a597942cd4d70236b9305845268831abc07e20449cf2037e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lewin, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  2 05:54:28 np0005542249 systemd[1]: Started libpod-conmon-1e8af35d17a7e8f1a597942cd4d70236b9305845268831abc07e20449cf2037e.scope.
Dec  2 05:54:28 np0005542249 podman[109347]: 2025-12-02 10:54:28.59467193 +0000 UTC m=+0.029383638 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:54:28 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:54:28 np0005542249 podman[109347]: 2025-12-02 10:54:28.704596674 +0000 UTC m=+0.139308312 container init 1e8af35d17a7e8f1a597942cd4d70236b9305845268831abc07e20449cf2037e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lewin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  2 05:54:28 np0005542249 podman[109347]: 2025-12-02 10:54:28.716056451 +0000 UTC m=+0.150768079 container start 1e8af35d17a7e8f1a597942cd4d70236b9305845268831abc07e20449cf2037e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:54:28 np0005542249 podman[109347]: 2025-12-02 10:54:28.719837182 +0000 UTC m=+0.154548820 container attach 1e8af35d17a7e8f1a597942cd4d70236b9305845268831abc07e20449cf2037e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lewin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  2 05:54:28 np0005542249 dreamy_lewin[109363]: 167 167
Dec  2 05:54:28 np0005542249 systemd[1]: libpod-1e8af35d17a7e8f1a597942cd4d70236b9305845268831abc07e20449cf2037e.scope: Deactivated successfully.
Dec  2 05:54:28 np0005542249 podman[109347]: 2025-12-02 10:54:28.722127474 +0000 UTC m=+0.156839142 container died 1e8af35d17a7e8f1a597942cd4d70236b9305845268831abc07e20449cf2037e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Dec  2 05:54:28 np0005542249 systemd[1]: var-lib-containers-storage-overlay-df52946856509849befe5f4ae51ca98e2cdcbc2d50019d45759fa020d9ee77fd-merged.mount: Deactivated successfully.
Dec  2 05:54:28 np0005542249 podman[109347]: 2025-12-02 10:54:28.777783084 +0000 UTC m=+0.212494742 container remove 1e8af35d17a7e8f1a597942cd4d70236b9305845268831abc07e20449cf2037e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lewin, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Dec  2 05:54:28 np0005542249 systemd[1]: libpod-conmon-1e8af35d17a7e8f1a597942cd4d70236b9305845268831abc07e20449cf2037e.scope: Deactivated successfully.
Dec  2 05:54:29 np0005542249 podman[109387]: 2025-12-02 10:54:29.003398069 +0000 UTC m=+0.054794560 container create 049e9fa6aae80600551c27baf9e059170d700831538be2d0608fba1acb1e0dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dewdney, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  2 05:54:29 np0005542249 systemd[1]: Started libpod-conmon-049e9fa6aae80600551c27baf9e059170d700831538be2d0608fba1acb1e0dc8.scope.
Dec  2 05:54:29 np0005542249 podman[109387]: 2025-12-02 10:54:28.978513582 +0000 UTC m=+0.029910153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:54:29 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:54:29 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec382bb9d13e390f3938675df69dae573ce0441cd731627f8237f99cea8b321e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:54:29 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec382bb9d13e390f3938675df69dae573ce0441cd731627f8237f99cea8b321e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:54:29 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec382bb9d13e390f3938675df69dae573ce0441cd731627f8237f99cea8b321e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:54:29 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec382bb9d13e390f3938675df69dae573ce0441cd731627f8237f99cea8b321e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:54:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v268: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec  2 05:54:29 np0005542249 podman[109387]: 2025-12-02 10:54:29.271961763 +0000 UTC m=+0.323358354 container init 049e9fa6aae80600551c27baf9e059170d700831538be2d0608fba1acb1e0dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dewdney, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  2 05:54:29 np0005542249 podman[109387]: 2025-12-02 10:54:29.286279977 +0000 UTC m=+0.337676498 container start 049e9fa6aae80600551c27baf9e059170d700831538be2d0608fba1acb1e0dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  2 05:54:29 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Dec  2 05:54:29 np0005542249 podman[109387]: 2025-12-02 10:54:29.30805061 +0000 UTC m=+0.359447171 container attach 049e9fa6aae80600551c27baf9e059170d700831538be2d0608fba1acb1e0dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dewdney, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  2 05:54:29 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Dec  2 05:54:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:54:30 np0005542249 busy_dewdney[109404]: {
Dec  2 05:54:30 np0005542249 busy_dewdney[109404]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 05:54:30 np0005542249 busy_dewdney[109404]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:54:30 np0005542249 busy_dewdney[109404]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 05:54:30 np0005542249 busy_dewdney[109404]:        "osd_id": 0,
Dec  2 05:54:30 np0005542249 busy_dewdney[109404]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 05:54:30 np0005542249 busy_dewdney[109404]:        "type": "bluestore"
Dec  2 05:54:30 np0005542249 busy_dewdney[109404]:    },
Dec  2 05:54:30 np0005542249 busy_dewdney[109404]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 05:54:30 np0005542249 busy_dewdney[109404]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:54:30 np0005542249 busy_dewdney[109404]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 05:54:30 np0005542249 busy_dewdney[109404]:        "osd_id": 2,
Dec  2 05:54:30 np0005542249 busy_dewdney[109404]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 05:54:30 np0005542249 busy_dewdney[109404]:        "type": "bluestore"
Dec  2 05:54:30 np0005542249 busy_dewdney[109404]:    },
Dec  2 05:54:30 np0005542249 busy_dewdney[109404]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 05:54:30 np0005542249 busy_dewdney[109404]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:54:30 np0005542249 busy_dewdney[109404]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 05:54:30 np0005542249 busy_dewdney[109404]:        "osd_id": 1,
Dec  2 05:54:30 np0005542249 busy_dewdney[109404]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 05:54:30 np0005542249 busy_dewdney[109404]:        "type": "bluestore"
Dec  2 05:54:30 np0005542249 busy_dewdney[109404]:    }
Dec  2 05:54:30 np0005542249 busy_dewdney[109404]: }
Dec  2 05:54:30 np0005542249 systemd[1]: libpod-049e9fa6aae80600551c27baf9e059170d700831538be2d0608fba1acb1e0dc8.scope: Deactivated successfully.
Dec  2 05:54:30 np0005542249 podman[109387]: 2025-12-02 10:54:30.267926854 +0000 UTC m=+1.319323345 container died 049e9fa6aae80600551c27baf9e059170d700831538be2d0608fba1acb1e0dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:54:30 np0005542249 systemd[1]: var-lib-containers-storage-overlay-ec382bb9d13e390f3938675df69dae573ce0441cd731627f8237f99cea8b321e-merged.mount: Deactivated successfully.
Dec  2 05:54:30 np0005542249 podman[109387]: 2025-12-02 10:54:30.538798859 +0000 UTC m=+1.590195350 container remove 049e9fa6aae80600551c27baf9e059170d700831538be2d0608fba1acb1e0dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dewdney, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  2 05:54:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:54:30 np0005542249 systemd[1]: libpod-conmon-049e9fa6aae80600551c27baf9e059170d700831538be2d0608fba1acb1e0dc8.scope: Deactivated successfully.
Dec  2 05:54:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:54:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:54:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:54:30 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 4134c943-c2f1-4bce-bc6e-b93c8df35118 does not exist
Dec  2 05:54:30 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 3bdded54-bd72-44a4-aea2-965927737b53 does not exist
Dec  2 05:54:30 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 10.a scrub starts
Dec  2 05:54:30 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 10.a scrub ok
Dec  2 05:54:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v269: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec  2 05:54:31 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Dec  2 05:54:31 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Dec  2 05:54:31 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:54:31 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:54:31 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 10.c scrub starts
Dec  2 05:54:31 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 10.c scrub ok
Dec  2 05:54:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v270: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec  2 05:54:34 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Dec  2 05:54:34 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Dec  2 05:54:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:54:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v271: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:54:35 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Dec  2 05:54:35 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Dec  2 05:54:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 05:54:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:54:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 05:54:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:54:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:54:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:54:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:54:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:54:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:54:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:54:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:54:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:54:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 05:54:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:54:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:54:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:54:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 05:54:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:54:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 05:54:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:54:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:54:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:54:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 05:54:35 np0005542249 python3.9[109649]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:54:35 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 10.18 deep-scrub starts
Dec  2 05:54:35 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 10.18 deep-scrub ok
Dec  2 05:54:36 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 8.b scrub starts
Dec  2 05:54:36 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 8.b scrub ok
Dec  2 05:54:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v272: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:54:37 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Dec  2 05:54:37 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Dec  2 05:54:37 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Dec  2 05:54:37 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Dec  2 05:54:37 np0005542249 python3.9[109936]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  2 05:54:38 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Dec  2 05:54:38 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Dec  2 05:54:38 np0005542249 python3.9[110088]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  2 05:54:38 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Dec  2 05:54:38 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Dec  2 05:54:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v273: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:54:39 np0005542249 python3.9[110240]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:54:39 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Dec  2 05:54:39 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Dec  2 05:54:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:54:40 np0005542249 python3.9[110392]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec  2 05:54:40 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Dec  2 05:54:40 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Dec  2 05:54:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v274: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:54:41 np0005542249 python3.9[110544]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:54:42 np0005542249 python3.9[110696]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:54:42 np0005542249 python3.9[110774]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:54:42 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 10.1d deep-scrub starts
Dec  2 05:54:42 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 10.1d deep-scrub ok
Dec  2 05:54:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v275: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:54:43 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Dec  2 05:54:43 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Dec  2 05:54:43 np0005542249 python3.9[110926]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 05:54:43 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Dec  2 05:54:43 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Dec  2 05:54:44 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Dec  2 05:54:44 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Dec  2 05:54:44 np0005542249 python3.9[111080]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  2 05:54:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:54:44 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Dec  2 05:54:44 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Dec  2 05:54:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v276: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:54:45 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Dec  2 05:54:45 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Dec  2 05:54:45 np0005542249 python3.9[111233]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  2 05:54:45 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Dec  2 05:54:45 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Dec  2 05:54:46 np0005542249 python3.9[111386]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  2 05:54:47 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Dec  2 05:54:47 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Dec  2 05:54:47 np0005542249 python3.9[111538]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec  2 05:54:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v277: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:54:47 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Dec  2 05:54:47 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Dec  2 05:54:47 np0005542249 python3.9[111690]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 05:54:48 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Dec  2 05:54:48 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Dec  2 05:54:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v278: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:54:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:54:50 np0005542249 python3.9[111843]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:54:50 np0005542249 python3.9[111995]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:54:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v279: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:54:51 np0005542249 python3.9[112073]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:54:51 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 8.a scrub starts
Dec  2 05:54:51 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 8.a scrub ok
Dec  2 05:54:52 np0005542249 python3.9[112225]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:54:52 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Dec  2 05:54:52 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Dec  2 05:54:52 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Dec  2 05:54:52 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Dec  2 05:54:52 np0005542249 python3.9[112303]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:54:53 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Dec  2 05:54:53 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Dec  2 05:54:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v280: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:54:53 np0005542249 python3.9[112455]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 05:54:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:54:55 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Dec  2 05:54:55 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Dec  2 05:54:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v281: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:54:55 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 11.14 deep-scrub starts
Dec  2 05:54:55 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 11.14 deep-scrub ok
Dec  2 05:54:55 np0005542249 python3.9[112606]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 05:54:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:54:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:54:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:54:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:54:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:54:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:54:56 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 8.f scrub starts
Dec  2 05:54:56 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 8.f scrub ok
Dec  2 05:54:56 np0005542249 python3.9[112758]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  2 05:54:57 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 11.d scrub starts
Dec  2 05:54:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v282: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:54:57 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 11.d scrub ok
Dec  2 05:54:57 np0005542249 python3.9[112908]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 05:54:58 np0005542249 python3.9[113060]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 05:54:58 np0005542249 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  2 05:54:59 np0005542249 systemd[1]: tuned.service: Deactivated successfully.
Dec  2 05:54:59 np0005542249 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  2 05:54:59 np0005542249 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  2 05:54:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v283: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:54:59 np0005542249 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  2 05:54:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:55:00 np0005542249 python3.9[113221]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  2 05:55:00 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Dec  2 05:55:00 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Dec  2 05:55:00 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Dec  2 05:55:00 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Dec  2 05:55:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v284: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:55:01 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 8.c scrub starts
Dec  2 05:55:01 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 8.c scrub ok
Dec  2 05:55:02 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Dec  2 05:55:02 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Dec  2 05:55:02 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 11.e scrub starts
Dec  2 05:55:02 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 11.e scrub ok
Dec  2 05:55:02 np0005542249 python3.9[113373]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 05:55:03 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Dec  2 05:55:03 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Dec  2 05:55:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v285: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:55:03 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Dec  2 05:55:03 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Dec  2 05:55:03 np0005542249 python3.9[113527]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 05:55:04 np0005542249 systemd[1]: session-35.scope: Deactivated successfully.
Dec  2 05:55:04 np0005542249 systemd[1]: session-35.scope: Consumed 1min 7.734s CPU time.
Dec  2 05:55:04 np0005542249 systemd-logind[787]: Session 35 logged out. Waiting for processes to exit.
Dec  2 05:55:04 np0005542249 systemd-logind[787]: Removed session 35.
Dec  2 05:55:04 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 10.d scrub starts
Dec  2 05:55:04 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 10.d scrub ok
Dec  2 05:55:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:55:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v286: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:55:05 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Dec  2 05:55:05 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Dec  2 05:55:05 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 8.e scrub starts
Dec  2 05:55:05 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 8.e scrub ok
Dec  2 05:55:06 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 8.d scrub starts
Dec  2 05:55:06 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 8.d scrub ok
Dec  2 05:55:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v287: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:55:07 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Dec  2 05:55:07 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Dec  2 05:55:07 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 10.7 deep-scrub starts
Dec  2 05:55:07 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 10.7 deep-scrub ok
Dec  2 05:55:08 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 9.4 deep-scrub starts
Dec  2 05:55:08 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 9.4 deep-scrub ok
Dec  2 05:55:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v288: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:55:09 np0005542249 systemd-logind[787]: New session 36 of user zuul.
Dec  2 05:55:09 np0005542249 systemd[1]: Started Session 36 of User zuul.
Dec  2 05:55:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:55:10 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 9.a scrub starts
Dec  2 05:55:10 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 9.a scrub ok
Dec  2 05:55:10 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Dec  2 05:55:10 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Dec  2 05:55:10 np0005542249 python3.9[113707]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:55:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v289: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:55:11 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 9.10 deep-scrub starts
Dec  2 05:55:11 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 9.10 deep-scrub ok
Dec  2 05:55:11 np0005542249 systemd[76634]: Created slice User Background Tasks Slice.
Dec  2 05:55:11 np0005542249 systemd[76634]: Starting Cleanup of User's Temporary Files and Directories...
Dec  2 05:55:11 np0005542249 systemd[76634]: Finished Cleanup of User's Temporary Files and Directories.
Dec  2 05:55:12 np0005542249 python3.9[113863]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec  2 05:55:12 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Dec  2 05:55:12 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Dec  2 05:55:12 np0005542249 python3.9[114017]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 05:55:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v290: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:55:13 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Dec  2 05:55:13 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Dec  2 05:55:13 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 9.12 deep-scrub starts
Dec  2 05:55:13 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 9.12 deep-scrub ok
Dec  2 05:55:13 np0005542249 python3.9[114101]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  2 05:55:14 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Dec  2 05:55:14 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Dec  2 05:55:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:55:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v291: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:55:15 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Dec  2 05:55:15 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Dec  2 05:55:15 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Dec  2 05:55:15 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Dec  2 05:55:16 np0005542249 python3.9[114254]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 05:55:16 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Dec  2 05:55:16 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Dec  2 05:55:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v292: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:55:17 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Dec  2 05:55:17 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Dec  2 05:55:18 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Dec  2 05:55:18 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Dec  2 05:55:18 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Dec  2 05:55:18 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Dec  2 05:55:18 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 8.18 deep-scrub starts
Dec  2 05:55:18 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 8.18 deep-scrub ok
Dec  2 05:55:18 np0005542249 python3.9[114407]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  2 05:55:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v293: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:55:19 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Dec  2 05:55:19 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Dec  2 05:55:19 np0005542249 python3.9[114560]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:55:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:55:20 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Dec  2 05:55:20 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Dec  2 05:55:20 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Dec  2 05:55:20 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Dec  2 05:55:20 np0005542249 python3.9[114712]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec  2 05:55:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v294: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:55:21 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 11.a scrub starts
Dec  2 05:55:21 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 11.a scrub ok
Dec  2 05:55:21 np0005542249 python3.9[114862]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:55:22 np0005542249 python3.9[115020]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 05:55:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v295: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:55:23 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Dec  2 05:55:23 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Dec  2 05:55:24 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Dec  2 05:55:24 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Dec  2 05:55:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:55:24 np0005542249 python3.9[115173]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:55:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v296: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:55:25 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Dec  2 05:55:25 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Dec  2 05:55:25 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 11.c scrub starts
Dec  2 05:55:25 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 11.c scrub ok
Dec  2 05:55:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_10:55:26
Dec  2 05:55:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 05:55:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 05:55:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'backups', 'vms', 'volumes']
Dec  2 05:55:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 05:55:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:55:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:55:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:55:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:55:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:55:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:55:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 05:55:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 05:55:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 05:55:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 05:55:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 05:55:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 05:55:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 05:55:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 05:55:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 05:55:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 05:55:26 np0005542249 python3.9[115460]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  2 05:55:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v297: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:55:27 np0005542249 python3.9[115610]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 05:55:28 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Dec  2 05:55:28 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Dec  2 05:55:28 np0005542249 python3.9[115764]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 05:55:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v298: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:55:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:55:30 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Dec  2 05:55:30 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Dec  2 05:55:30 np0005542249 python3.9[115917]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 05:55:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v299: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:55:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:55:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:55:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 05:55:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:55:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 05:55:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:55:31 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev a014ba1a-e089-4316-8a9e-c365803505d2 does not exist
Dec  2 05:55:31 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev ad105e72-b4ef-4393-b061-5ee65c3d779d does not exist
Dec  2 05:55:31 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev da91cd28-1602-48a3-a46e-a1d7e3b28ce8 does not exist
Dec  2 05:55:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 05:55:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 05:55:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 05:55:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 05:55:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:55:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:55:32 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Dec  2 05:55:32 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Dec  2 05:55:32 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Dec  2 05:55:32 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Dec  2 05:55:32 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:55:32 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:55:32 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 05:55:32 np0005542249 python3.9[116299]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 05:55:32 np0005542249 podman[116356]: 2025-12-02 10:55:32.591930409 +0000 UTC m=+0.041295332 container create 5b0cedf40c7cba35eeb1c689072a2cd805651c28c3abf1f77fce34f458ffcf53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_benz, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  2 05:55:32 np0005542249 systemd[1]: Started libpod-conmon-5b0cedf40c7cba35eeb1c689072a2cd805651c28c3abf1f77fce34f458ffcf53.scope.
Dec  2 05:55:32 np0005542249 podman[116356]: 2025-12-02 10:55:32.570627526 +0000 UTC m=+0.019992419 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:55:32 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:55:32 np0005542249 podman[116356]: 2025-12-02 10:55:32.76355677 +0000 UTC m=+0.212921733 container init 5b0cedf40c7cba35eeb1c689072a2cd805651c28c3abf1f77fce34f458ffcf53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  2 05:55:32 np0005542249 podman[116356]: 2025-12-02 10:55:32.772144671 +0000 UTC m=+0.221509584 container start 5b0cedf40c7cba35eeb1c689072a2cd805651c28c3abf1f77fce34f458ffcf53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_benz, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:55:32 np0005542249 podman[116356]: 2025-12-02 10:55:32.776626361 +0000 UTC m=+0.225991324 container attach 5b0cedf40c7cba35eeb1c689072a2cd805651c28c3abf1f77fce34f458ffcf53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_benz, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:55:32 np0005542249 stupefied_benz[116404]: 167 167
Dec  2 05:55:32 np0005542249 systemd[1]: libpod-5b0cedf40c7cba35eeb1c689072a2cd805651c28c3abf1f77fce34f458ffcf53.scope: Deactivated successfully.
Dec  2 05:55:32 np0005542249 podman[116356]: 2025-12-02 10:55:32.782595292 +0000 UTC m=+0.231960195 container died 5b0cedf40c7cba35eeb1c689072a2cd805651c28c3abf1f77fce34f458ffcf53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  2 05:55:32 np0005542249 systemd[1]: var-lib-containers-storage-overlay-050f4e440e20c7df4e6ed449610cf10f6d6254ff70739b3c42fa2055c8dd302a-merged.mount: Deactivated successfully.
Dec  2 05:55:32 np0005542249 podman[116356]: 2025-12-02 10:55:32.840336917 +0000 UTC m=+0.289701800 container remove 5b0cedf40c7cba35eeb1c689072a2cd805651c28c3abf1f77fce34f458ffcf53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_benz, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Dec  2 05:55:32 np0005542249 systemd[1]: libpod-conmon-5b0cedf40c7cba35eeb1c689072a2cd805651c28c3abf1f77fce34f458ffcf53.scope: Deactivated successfully.
Dec  2 05:55:33 np0005542249 podman[116482]: 2025-12-02 10:55:33.049711403 +0000 UTC m=+0.047512340 container create 0c71b71d806ad3913a325ccbbceb123f7fc6a66fca5ff521460f70ac47d94912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elbakyan, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  2 05:55:33 np0005542249 systemd[1]: Started libpod-conmon-0c71b71d806ad3913a325ccbbceb123f7fc6a66fca5ff521460f70ac47d94912.scope.
Dec  2 05:55:33 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:55:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155e55323a30f0f10133f297357f6ca6bf1dbbc980d91dc94562a639d18658c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:55:33 np0005542249 podman[116482]: 2025-12-02 10:55:33.031645836 +0000 UTC m=+0.029446793 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:55:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155e55323a30f0f10133f297357f6ca6bf1dbbc980d91dc94562a639d18658c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:55:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155e55323a30f0f10133f297357f6ca6bf1dbbc980d91dc94562a639d18658c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:55:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155e55323a30f0f10133f297357f6ca6bf1dbbc980d91dc94562a639d18658c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:55:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155e55323a30f0f10133f297357f6ca6bf1dbbc980d91dc94562a639d18658c4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:55:33 np0005542249 podman[116482]: 2025-12-02 10:55:33.134823104 +0000 UTC m=+0.132624061 container init 0c71b71d806ad3913a325ccbbceb123f7fc6a66fca5ff521460f70ac47d94912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  2 05:55:33 np0005542249 podman[116482]: 2025-12-02 10:55:33.150359373 +0000 UTC m=+0.148160340 container start 0c71b71d806ad3913a325ccbbceb123f7fc6a66fca5ff521460f70ac47d94912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elbakyan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  2 05:55:33 np0005542249 podman[116482]: 2025-12-02 10:55:33.154261058 +0000 UTC m=+0.152062005 container attach 0c71b71d806ad3913a325ccbbceb123f7fc6a66fca5ff521460f70ac47d94912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elbakyan, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  2 05:55:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v300: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:55:33 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Dec  2 05:55:33 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Dec  2 05:55:33 np0005542249 ceph-osd[88961]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Dec  2 05:55:33 np0005542249 ceph-osd[89966]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Dec  2 05:55:33 np0005542249 python3.9[116554]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Dec  2 05:55:34 np0005542249 systemd[1]: session-36.scope: Deactivated successfully.
Dec  2 05:55:34 np0005542249 systemd[1]: session-36.scope: Consumed 18.887s CPU time.
Dec  2 05:55:34 np0005542249 systemd-logind[787]: Session 36 logged out. Waiting for processes to exit.
Dec  2 05:55:34 np0005542249 systemd-logind[787]: Removed session 36.
Dec  2 05:55:34 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 11.11 deep-scrub starts
Dec  2 05:55:34 np0005542249 ceph-osd[91055]: log_channel(cluster) log [DBG] : 11.11 deep-scrub ok
Dec  2 05:55:34 np0005542249 pensive_elbakyan[116547]: --> passed data devices: 0 physical, 3 LVM
Dec  2 05:55:34 np0005542249 pensive_elbakyan[116547]: --> relative data size: 1.0
Dec  2 05:55:34 np0005542249 pensive_elbakyan[116547]: --> All data devices are unavailable
Dec  2 05:55:34 np0005542249 systemd[1]: libpod-0c71b71d806ad3913a325ccbbceb123f7fc6a66fca5ff521460f70ac47d94912.scope: Deactivated successfully.
Dec  2 05:55:34 np0005542249 systemd[1]: libpod-0c71b71d806ad3913a325ccbbceb123f7fc6a66fca5ff521460f70ac47d94912.scope: Consumed 1.007s CPU time.
Dec  2 05:55:34 np0005542249 podman[116482]: 2025-12-02 10:55:34.206791613 +0000 UTC m=+1.204592550 container died 0c71b71d806ad3913a325ccbbceb123f7fc6a66fca5ff521460f70ac47d94912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:55:34 np0005542249 systemd[1]: var-lib-containers-storage-overlay-155e55323a30f0f10133f297357f6ca6bf1dbbc980d91dc94562a639d18658c4-merged.mount: Deactivated successfully.
Dec  2 05:55:34 np0005542249 podman[116482]: 2025-12-02 10:55:34.27357088 +0000 UTC m=+1.271371817 container remove 0c71b71d806ad3913a325ccbbceb123f7fc6a66fca5ff521460f70ac47d94912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elbakyan, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:55:34 np0005542249 systemd[1]: libpod-conmon-0c71b71d806ad3913a325ccbbceb123f7fc6a66fca5ff521460f70ac47d94912.scope: Deactivated successfully.
Dec  2 05:57:24 np0005542249 python3.9[130606]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  2 05:57:24 np0005542249 rsyslogd[1005]: imjournal: 1340 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec  2 05:57:24 np0005542249 systemd[1]: session-40.scope: Deactivated successfully.
Dec  2 05:57:24 np0005542249 systemd[1]: session-40.scope: Consumed 32.071s CPU time.
Dec  2 05:57:24 np0005542249 systemd-logind[787]: Session 40 logged out. Waiting for processes to exit.
Dec  2 05:57:24 np0005542249 systemd-logind[787]: Removed session 40.
Dec  2 05:57:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:57:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v356: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:57:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_10:57:26
Dec  2 05:57:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 05:57:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 05:57:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'images', 'vms', 'cephfs.cephfs.data', '.mgr', '.rgw.root']
Dec  2 05:57:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 05:57:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:57:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:57:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:57:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:57:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:57:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:57:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 05:57:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 05:57:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 05:57:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 05:57:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 05:57:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 05:57:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 05:57:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 05:57:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 05:57:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 05:57:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v357: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:57:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v358: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:57:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:57:30 np0005542249 systemd-logind[787]: New session 41 of user zuul.
Dec  2 05:57:30 np0005542249 systemd[1]: Started Session 41 of User zuul.
Dec  2 05:57:30 np0005542249 python3.9[130786]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  2 05:57:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v359: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:57:31 np0005542249 python3.9[130938]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 05:57:32 np0005542249 python3.9[131092]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec  2 05:57:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v360: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:57:33 np0005542249 python3.9[131244]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.4ttmoqg2 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:57:34 np0005542249 python3.9[131369]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.4ttmoqg2 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764673052.7613256-44-89432622710542/.source.4ttmoqg2 _original_basename=.200qdb1k follow=False checksum=9c35a8be685cba885c357a9f79ad5513043fa03b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:57:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:57:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v361: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:57:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 05:57:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:57:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 05:57:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:57:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:57:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:57:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:57:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:57:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:57:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:57:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:57:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:57:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 05:57:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:57:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:57:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:57:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 05:57:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:57:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 05:57:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:57:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:57:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:57:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 05:57:35 np0005542249 python3.9[131521]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:57:36 np0005542249 python3.9[131673]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDO5YUsR3fxYV3/TLEvn2kvFiCsy0ibUl13I6MuVQRBRgKNtM/tOYqhU31vVY2EcGpB8b5ao4DERWVEInN01roi+g/wpHWb+f0/6nAbkJWbbDJ1clCd8jPymGAPak/cDMU0ovZHrQIfOCX/49oaIuAKDUkTe54rO4FW+BGD4GHqYEQADga4n/O4EGAcD1anPVb+GuuXOGssT1joWGD9Evx8h0280Y8+/hyYVrwZPzAvk1G6/Y70ZnR/Zy/KVzOXbxyD6wHRPlAhho2bW+ygitvUaism/b+gzWlPoZAgR98v436doBlCIx3m+NKfPw+RcDdiyyyQDJQkE+fK/qBvLrDl+MJ7ORZqlOaQdIvZoX5LFH6mblXEHXtugVcGoThKFYVq4pEbKBDdGsguFNBxcmAhAfAoUxaDOel7ejpg/UNFGuxCwjD7y55H9fps5JLnWBLfTRz2L9nXhcbqBGwseTrdOOUZs9eD0tBEScux1PVMxojtddX8/T2YE6UEV8IfpfE=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICoMMbun3HGPT9l61gMTTQXqOB2JMpSQRQyFehEDSUF2#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFR3Tt/mM6Z+ZOHrRq1Cxrr5ZNsDxS+Oz4ZFS7R6FnxIu2gcOh19i4U5/YwvyNQQ11yS8zrPmp7cstiLzUkJoxs=#012 create=True mode=0644 path=/tmp/ansible.4ttmoqg2 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:57:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v362: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:57:37 np0005542249 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  2 05:57:37 np0005542249 python3.9[131827]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.4ttmoqg2' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:57:38 np0005542249 python3.9[131981]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.4ttmoqg2 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:57:38 np0005542249 systemd[1]: session-41.scope: Deactivated successfully.
Dec  2 05:57:38 np0005542249 systemd[1]: session-41.scope: Consumed 5.527s CPU time.
Dec  2 05:57:38 np0005542249 systemd-logind[787]: Session 41 logged out. Waiting for processes to exit.
Dec  2 05:57:39 np0005542249 systemd-logind[787]: Removed session 41.
Dec  2 05:57:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v363: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:57:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:57:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v364: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:57:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v365: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:57:44 np0005542249 systemd-logind[787]: New session 42 of user zuul.
Dec  2 05:57:44 np0005542249 systemd[1]: Started Session 42 of User zuul.
Dec  2 05:57:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:57:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v366: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:57:45 np0005542249 python3.9[132159]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:57:46 np0005542249 python3.9[132315]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  2 05:57:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:57:47 np0005542249 systemd-logind[787]: Session 18 logged out. Waiting for processes to exit.
Dec  2 05:57:47 np0005542249 systemd[1]: session-18.scope: Deactivated successfully.
Dec  2 05:57:47 np0005542249 systemd[1]: session-18.scope: Consumed 1min 31.106s CPU time.
Dec  2 05:57:47 np0005542249 systemd-logind[787]: Removed session 18.
Dec  2 05:57:47 np0005542249 python3.9[132543]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 05:57:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:57:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:57:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 05:57:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:57:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 05:57:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:57:48 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 35e7518f-ee4c-41ca-a3de-42b69baf361d does not exist
Dec  2 05:57:48 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 3feba86c-2b91-4ac0-a93c-290976485852 does not exist
Dec  2 05:57:48 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev f46c5613-5a37-45dd-a81d-1865c3b394fc does not exist
Dec  2 05:57:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 05:57:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 05:57:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 05:57:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 05:57:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:57:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:57:48 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:57:48 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:57:48 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 05:57:48 np0005542249 python3.9[132853]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:57:48 np0005542249 podman[132920]: 2025-12-02 10:57:48.96157439 +0000 UTC m=+0.063395830 container create c59528ab6e8388299ce0d559ea45f563e98402fde77c4d22d67fee1ae6612428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  2 05:57:49 np0005542249 podman[132920]: 2025-12-02 10:57:48.91914939 +0000 UTC m=+0.020970870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:57:49 np0005542249 systemd[1]: Started libpod-conmon-c59528ab6e8388299ce0d559ea45f563e98402fde77c4d22d67fee1ae6612428.scope.
Dec  2 05:57:49 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:57:49 np0005542249 podman[132920]: 2025-12-02 10:57:49.174538512 +0000 UTC m=+0.276360002 container init c59528ab6e8388299ce0d559ea45f563e98402fde77c4d22d67fee1ae6612428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_heyrovsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Dec  2 05:57:49 np0005542249 podman[132920]: 2025-12-02 10:57:49.186385398 +0000 UTC m=+0.288206858 container start c59528ab6e8388299ce0d559ea45f563e98402fde77c4d22d67fee1ae6612428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_heyrovsky, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  2 05:57:49 np0005542249 podman[132920]: 2025-12-02 10:57:49.191297818 +0000 UTC m=+0.293119338 container attach c59528ab6e8388299ce0d559ea45f563e98402fde77c4d22d67fee1ae6612428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_heyrovsky, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Dec  2 05:57:49 np0005542249 funny_heyrovsky[132983]: 167 167
Dec  2 05:57:49 np0005542249 systemd[1]: libpod-c59528ab6e8388299ce0d559ea45f563e98402fde77c4d22d67fee1ae6612428.scope: Deactivated successfully.
Dec  2 05:57:49 np0005542249 podman[132920]: 2025-12-02 10:57:49.195747156 +0000 UTC m=+0.297568576 container died c59528ab6e8388299ce0d559ea45f563e98402fde77c4d22d67fee1ae6612428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  2 05:57:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:57:49 np0005542249 systemd[1]: var-lib-containers-storage-overlay-39df5ab6d4aa9c34e0c8ec67e56910fd5780704ae378e5c85f68129be11d3a9d-merged.mount: Deactivated successfully.
Dec  2 05:57:49 np0005542249 podman[132920]: 2025-12-02 10:57:49.316458702 +0000 UTC m=+0.418280122 container remove c59528ab6e8388299ce0d559ea45f563e98402fde77c4d22d67fee1ae6612428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:57:49 np0005542249 systemd[1]: libpod-conmon-c59528ab6e8388299ce0d559ea45f563e98402fde77c4d22d67fee1ae6612428.scope: Deactivated successfully.
Dec  2 05:57:49 np0005542249 podman[133084]: 2025-12-02 10:57:49.478869198 +0000 UTC m=+0.023616360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:57:49 np0005542249 podman[133084]: 2025-12-02 10:57:49.59383117 +0000 UTC m=+0.138578312 container create 893241729e502c6bbac85f1938eeb1714ef1d9b460a16540cdb1fc65279b7bee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  2 05:57:49 np0005542249 systemd[1]: Started libpod-conmon-893241729e502c6bbac85f1938eeb1714ef1d9b460a16540cdb1fc65279b7bee.scope.
Dec  2 05:57:49 np0005542249 python3.9[133102]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 05:57:49 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:57:49 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cfbafa6110637de867d34f31a2b5e4328fec3951a2d84ad797a082a4a43fdb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:57:49 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cfbafa6110637de867d34f31a2b5e4328fec3951a2d84ad797a082a4a43fdb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:57:49 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cfbafa6110637de867d34f31a2b5e4328fec3951a2d84ad797a082a4a43fdb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:57:49 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cfbafa6110637de867d34f31a2b5e4328fec3951a2d84ad797a082a4a43fdb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:57:49 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cfbafa6110637de867d34f31a2b5e4328fec3951a2d84ad797a082a4a43fdb4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:57:49 np0005542249 podman[133084]: 2025-12-02 10:57:49.711425352 +0000 UTC m=+0.256172494 container init 893241729e502c6bbac85f1938eeb1714ef1d9b460a16540cdb1fc65279b7bee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mcclintock, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  2 05:57:49 np0005542249 podman[133084]: 2025-12-02 10:57:49.723213305 +0000 UTC m=+0.267960447 container start 893241729e502c6bbac85f1938eeb1714ef1d9b460a16540cdb1fc65279b7bee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  2 05:57:49 np0005542249 podman[133084]: 2025-12-02 10:57:49.727359696 +0000 UTC m=+0.272106838 container attach 893241729e502c6bbac85f1938eeb1714ef1d9b460a16540cdb1fc65279b7bee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mcclintock, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:57:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:57:50 np0005542249 python3.9[133261]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:57:50 np0005542249 cool_mcclintock[133105]: --> passed data devices: 0 physical, 3 LVM
Dec  2 05:57:50 np0005542249 cool_mcclintock[133105]: --> relative data size: 1.0
Dec  2 05:57:50 np0005542249 cool_mcclintock[133105]: --> All data devices are unavailable
Dec  2 05:57:50 np0005542249 systemd-logind[787]: Session 42 logged out. Waiting for processes to exit.
Dec  2 05:57:50 np0005542249 systemd[1]: session-42.scope: Deactivated successfully.
Dec  2 05:57:50 np0005542249 systemd[1]: session-42.scope: Consumed 4.393s CPU time.
Dec  2 05:57:50 np0005542249 systemd-logind[787]: Removed session 42.
Dec  2 05:57:50 np0005542249 systemd[1]: libpod-893241729e502c6bbac85f1938eeb1714ef1d9b460a16540cdb1fc65279b7bee.scope: Deactivated successfully.
Dec  2 05:57:50 np0005542249 podman[133084]: 2025-12-02 10:57:50.942516521 +0000 UTC m=+1.487263633 container died 893241729e502c6bbac85f1938eeb1714ef1d9b460a16540cdb1fc65279b7bee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Dec  2 05:57:50 np0005542249 systemd[1]: libpod-893241729e502c6bbac85f1938eeb1714ef1d9b460a16540cdb1fc65279b7bee.scope: Consumed 1.150s CPU time.
Dec  2 05:57:50 np0005542249 systemd[1]: var-lib-containers-storage-overlay-0cfbafa6110637de867d34f31a2b5e4328fec3951a2d84ad797a082a4a43fdb4-merged.mount: Deactivated successfully.
Dec  2 05:57:51 np0005542249 podman[133084]: 2025-12-02 10:57:51.002448638 +0000 UTC m=+1.547195750 container remove 893241729e502c6bbac85f1938eeb1714ef1d9b460a16540cdb1fc65279b7bee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mcclintock, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:57:51 np0005542249 systemd[1]: libpod-conmon-893241729e502c6bbac85f1938eeb1714ef1d9b460a16540cdb1fc65279b7bee.scope: Deactivated successfully.
Dec  2 05:57:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:57:51 np0005542249 podman[133463]: 2025-12-02 10:57:51.709765787 +0000 UTC m=+0.049314525 container create efa58bd5f3f76a0f462a47b7496d391a99ed9c21d9b3080fba78ef5410ac0ffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_faraday, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  2 05:57:51 np0005542249 systemd[1]: Started libpod-conmon-efa58bd5f3f76a0f462a47b7496d391a99ed9c21d9b3080fba78ef5410ac0ffc.scope.
Dec  2 05:57:51 np0005542249 podman[133463]: 2025-12-02 10:57:51.686705902 +0000 UTC m=+0.026254650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:57:51 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:57:51 np0005542249 podman[133463]: 2025-12-02 10:57:51.832371612 +0000 UTC m=+0.171920430 container init efa58bd5f3f76a0f462a47b7496d391a99ed9c21d9b3080fba78ef5410ac0ffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 05:57:51 np0005542249 podman[133463]: 2025-12-02 10:57:51.845252585 +0000 UTC m=+0.184801343 container start efa58bd5f3f76a0f462a47b7496d391a99ed9c21d9b3080fba78ef5410ac0ffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_faraday, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  2 05:57:51 np0005542249 podman[133463]: 2025-12-02 10:57:51.849038666 +0000 UTC m=+0.188587424 container attach efa58bd5f3f76a0f462a47b7496d391a99ed9c21d9b3080fba78ef5410ac0ffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  2 05:57:51 np0005542249 interesting_faraday[133480]: 167 167
Dec  2 05:57:51 np0005542249 systemd[1]: libpod-efa58bd5f3f76a0f462a47b7496d391a99ed9c21d9b3080fba78ef5410ac0ffc.scope: Deactivated successfully.
Dec  2 05:57:51 np0005542249 conmon[133480]: conmon efa58bd5f3f76a0f462a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-efa58bd5f3f76a0f462a47b7496d391a99ed9c21d9b3080fba78ef5410ac0ffc.scope/container/memory.events
Dec  2 05:57:51 np0005542249 podman[133463]: 2025-12-02 10:57:51.85368455 +0000 UTC m=+0.193233318 container died efa58bd5f3f76a0f462a47b7496d391a99ed9c21d9b3080fba78ef5410ac0ffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_faraday, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  2 05:57:51 np0005542249 systemd[1]: var-lib-containers-storage-overlay-9dd824176b6ee042d8896f19669a92a51d5d62e2de741810274d10c84d4466d3-merged.mount: Deactivated successfully.
Dec  2 05:57:51 np0005542249 podman[133463]: 2025-12-02 10:57:51.906432124 +0000 UTC m=+0.245980882 container remove efa58bd5f3f76a0f462a47b7496d391a99ed9c21d9b3080fba78ef5410ac0ffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_faraday, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:57:51 np0005542249 systemd[1]: libpod-conmon-efa58bd5f3f76a0f462a47b7496d391a99ed9c21d9b3080fba78ef5410ac0ffc.scope: Deactivated successfully.
Dec  2 05:57:52 np0005542249 podman[133503]: 2025-12-02 10:57:52.066914069 +0000 UTC m=+0.026662791 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:57:52 np0005542249 podman[133503]: 2025-12-02 10:57:52.177134585 +0000 UTC m=+0.136883297 container create d88179bdf8206ec7809cae2a65e5708b3fd1df919aac8679d58047ae2e674484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  2 05:57:52 np0005542249 systemd[1]: Started libpod-conmon-d88179bdf8206ec7809cae2a65e5708b3fd1df919aac8679d58047ae2e674484.scope.
Dec  2 05:57:52 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:57:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ca19379c706827781acf9a6d2e07bad7eb8662f540693fca02d1297923831a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:57:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ca19379c706827781acf9a6d2e07bad7eb8662f540693fca02d1297923831a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:57:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ca19379c706827781acf9a6d2e07bad7eb8662f540693fca02d1297923831a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:57:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ca19379c706827781acf9a6d2e07bad7eb8662f540693fca02d1297923831a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:57:52 np0005542249 podman[133503]: 2025-12-02 10:57:52.325804014 +0000 UTC m=+0.285552756 container init d88179bdf8206ec7809cae2a65e5708b3fd1df919aac8679d58047ae2e674484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldwasser, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  2 05:57:52 np0005542249 podman[133503]: 2025-12-02 10:57:52.336425477 +0000 UTC m=+0.296174199 container start d88179bdf8206ec7809cae2a65e5708b3fd1df919aac8679d58047ae2e674484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  2 05:57:52 np0005542249 podman[133503]: 2025-12-02 10:57:52.345669403 +0000 UTC m=+0.305418125 container attach d88179bdf8206ec7809cae2a65e5708b3fd1df919aac8679d58047ae2e674484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]: {
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:    "0": [
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:        {
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "devices": [
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "/dev/loop3"
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            ],
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "lv_name": "ceph_lv0",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "lv_size": "21470642176",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "name": "ceph_lv0",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "tags": {
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.cluster_name": "ceph",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.crush_device_class": "",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.encrypted": "0",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.osd_id": "0",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.type": "block",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.vdo": "0"
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            },
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "type": "block",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "vg_name": "ceph_vg0"
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:        }
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:    ],
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:    "1": [
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:        {
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "devices": [
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "/dev/loop4"
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            ],
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "lv_name": "ceph_lv1",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "lv_size": "21470642176",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "name": "ceph_lv1",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "tags": {
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.cluster_name": "ceph",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.crush_device_class": "",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.encrypted": "0",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.osd_id": "1",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.type": "block",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.vdo": "0"
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            },
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "type": "block",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "vg_name": "ceph_vg1"
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:        }
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:    ],
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:    "2": [
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:        {
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "devices": [
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "/dev/loop5"
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            ],
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "lv_name": "ceph_lv2",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "lv_size": "21470642176",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "name": "ceph_lv2",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "tags": {
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.cluster_name": "ceph",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.crush_device_class": "",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.encrypted": "0",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.osd_id": "2",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.type": "block",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:                "ceph.vdo": "0"
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            },
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "type": "block",
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:            "vg_name": "ceph_vg2"
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:        }
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]:    ]
Dec  2 05:57:53 np0005542249 optimistic_goldwasser[133521]: }
Dec  2 05:57:53 np0005542249 systemd[1]: libpod-d88179bdf8206ec7809cae2a65e5708b3fd1df919aac8679d58047ae2e674484.scope: Deactivated successfully.
Dec  2 05:57:53 np0005542249 podman[133503]: 2025-12-02 10:57:53.119961876 +0000 UTC m=+1.079710638 container died d88179bdf8206ec7809cae2a65e5708b3fd1df919aac8679d58047ae2e674484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldwasser, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:57:53 np0005542249 systemd[1]: var-lib-containers-storage-overlay-a8ca19379c706827781acf9a6d2e07bad7eb8662f540693fca02d1297923831a-merged.mount: Deactivated successfully.
Dec  2 05:57:53 np0005542249 podman[133503]: 2025-12-02 10:57:53.202418303 +0000 UTC m=+1.162167055 container remove d88179bdf8206ec7809cae2a65e5708b3fd1df919aac8679d58047ae2e674484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:57:53 np0005542249 systemd[1]: libpod-conmon-d88179bdf8206ec7809cae2a65e5708b3fd1df919aac8679d58047ae2e674484.scope: Deactivated successfully.
Dec  2 05:57:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:57:53 np0005542249 podman[133682]: 2025-12-02 10:57:53.976696245 +0000 UTC m=+0.049806948 container create e1018552d9119db79267854491fad718a6f27fbf3e6b3b9cd0aa4a60e8df8707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_williamson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  2 05:57:54 np0005542249 systemd[1]: Started libpod-conmon-e1018552d9119db79267854491fad718a6f27fbf3e6b3b9cd0aa4a60e8df8707.scope.
Dec  2 05:57:54 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:57:54 np0005542249 podman[133682]: 2025-12-02 10:57:53.956527898 +0000 UTC m=+0.029638631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:57:54 np0005542249 podman[133682]: 2025-12-02 10:57:54.055588096 +0000 UTC m=+0.128698789 container init e1018552d9119db79267854491fad718a6f27fbf3e6b3b9cd0aa4a60e8df8707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:57:54 np0005542249 podman[133682]: 2025-12-02 10:57:54.062688525 +0000 UTC m=+0.135799228 container start e1018552d9119db79267854491fad718a6f27fbf3e6b3b9cd0aa4a60e8df8707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_williamson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  2 05:57:54 np0005542249 nervous_williamson[133698]: 167 167
Dec  2 05:57:54 np0005542249 podman[133682]: 2025-12-02 10:57:54.066755563 +0000 UTC m=+0.139866256 container attach e1018552d9119db79267854491fad718a6f27fbf3e6b3b9cd0aa4a60e8df8707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_williamson, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:57:54 np0005542249 systemd[1]: libpod-e1018552d9119db79267854491fad718a6f27fbf3e6b3b9cd0aa4a60e8df8707.scope: Deactivated successfully.
Dec  2 05:57:54 np0005542249 podman[133682]: 2025-12-02 10:57:54.068675225 +0000 UTC m=+0.141785958 container died e1018552d9119db79267854491fad718a6f27fbf3e6b3b9cd0aa4a60e8df8707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  2 05:57:54 np0005542249 systemd[1]: var-lib-containers-storage-overlay-3b1c9c1ca8bba3c198a4ce35553cf3a7ce8d3be5bbee75b2276aaaa12ca36a87-merged.mount: Deactivated successfully.
Dec  2 05:57:54 np0005542249 podman[133682]: 2025-12-02 10:57:54.127977814 +0000 UTC m=+0.201088507 container remove e1018552d9119db79267854491fad718a6f27fbf3e6b3b9cd0aa4a60e8df8707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:57:54 np0005542249 systemd[1]: libpod-conmon-e1018552d9119db79267854491fad718a6f27fbf3e6b3b9cd0aa4a60e8df8707.scope: Deactivated successfully.
Dec  2 05:57:54 np0005542249 podman[133722]: 2025-12-02 10:57:54.295540538 +0000 UTC m=+0.041662241 container create 7f5503db4c913216b2734d531c7ce5b48d7f9b80bd1359f0d847bb13dc49bf39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_blackburn, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:57:54 np0005542249 systemd[1]: Started libpod-conmon-7f5503db4c913216b2734d531c7ce5b48d7f9b80bd1359f0d847bb13dc49bf39.scope.
Dec  2 05:57:54 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:57:54 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c72f65225a08c626ebf74fee58a811de457d9799cb9e7e270a2e722a930b354/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:57:54 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c72f65225a08c626ebf74fee58a811de457d9799cb9e7e270a2e722a930b354/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:57:54 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c72f65225a08c626ebf74fee58a811de457d9799cb9e7e270a2e722a930b354/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:57:54 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c72f65225a08c626ebf74fee58a811de457d9799cb9e7e270a2e722a930b354/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:57:54 np0005542249 podman[133722]: 2025-12-02 10:57:54.278669178 +0000 UTC m=+0.024790901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:57:54 np0005542249 podman[133722]: 2025-12-02 10:57:54.378565689 +0000 UTC m=+0.124687422 container init 7f5503db4c913216b2734d531c7ce5b48d7f9b80bd1359f0d847bb13dc49bf39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Dec  2 05:57:54 np0005542249 podman[133722]: 2025-12-02 10:57:54.391116513 +0000 UTC m=+0.137238256 container start 7f5503db4c913216b2734d531c7ce5b48d7f9b80bd1359f0d847bb13dc49bf39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  2 05:57:54 np0005542249 podman[133722]: 2025-12-02 10:57:54.395493539 +0000 UTC m=+0.141615332 container attach 7f5503db4c913216b2734d531c7ce5b48d7f9b80bd1359f0d847bb13dc49bf39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_blackburn, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:57:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:57:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:57:55 np0005542249 priceless_blackburn[133738]: {
Dec  2 05:57:55 np0005542249 priceless_blackburn[133738]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 05:57:55 np0005542249 priceless_blackburn[133738]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:57:55 np0005542249 priceless_blackburn[133738]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 05:57:55 np0005542249 priceless_blackburn[133738]:        "osd_id": 0,
Dec  2 05:57:55 np0005542249 priceless_blackburn[133738]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 05:57:55 np0005542249 priceless_blackburn[133738]:        "type": "bluestore"
Dec  2 05:57:55 np0005542249 priceless_blackburn[133738]:    },
Dec  2 05:57:55 np0005542249 priceless_blackburn[133738]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 05:57:55 np0005542249 priceless_blackburn[133738]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:57:55 np0005542249 priceless_blackburn[133738]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 05:57:55 np0005542249 priceless_blackburn[133738]:        "osd_id": 2,
Dec  2 05:57:55 np0005542249 priceless_blackburn[133738]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 05:57:55 np0005542249 priceless_blackburn[133738]:        "type": "bluestore"
Dec  2 05:57:55 np0005542249 priceless_blackburn[133738]:    },
Dec  2 05:57:55 np0005542249 priceless_blackburn[133738]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 05:57:55 np0005542249 priceless_blackburn[133738]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:57:55 np0005542249 priceless_blackburn[133738]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 05:57:55 np0005542249 priceless_blackburn[133738]:        "osd_id": 1,
Dec  2 05:57:55 np0005542249 priceless_blackburn[133738]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 05:57:55 np0005542249 priceless_blackburn[133738]:        "type": "bluestore"
Dec  2 05:57:55 np0005542249 priceless_blackburn[133738]:    }
Dec  2 05:57:55 np0005542249 priceless_blackburn[133738]: }
Dec  2 05:57:55 np0005542249 systemd[1]: libpod-7f5503db4c913216b2734d531c7ce5b48d7f9b80bd1359f0d847bb13dc49bf39.scope: Deactivated successfully.
Dec  2 05:57:55 np0005542249 systemd[1]: libpod-7f5503db4c913216b2734d531c7ce5b48d7f9b80bd1359f0d847bb13dc49bf39.scope: Consumed 1.095s CPU time.
Dec  2 05:57:55 np0005542249 podman[133722]: 2025-12-02 10:57:55.47906271 +0000 UTC m=+1.225184453 container died 7f5503db4c913216b2734d531c7ce5b48d7f9b80bd1359f0d847bb13dc49bf39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_blackburn, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:57:55 np0005542249 systemd[1]: var-lib-containers-storage-overlay-6c72f65225a08c626ebf74fee58a811de457d9799cb9e7e270a2e722a930b354-merged.mount: Deactivated successfully.
Dec  2 05:57:55 np0005542249 podman[133722]: 2025-12-02 10:57:55.549736742 +0000 UTC m=+1.295858475 container remove 7f5503db4c913216b2734d531c7ce5b48d7f9b80bd1359f0d847bb13dc49bf39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  2 05:57:55 np0005542249 systemd[1]: libpod-conmon-7f5503db4c913216b2734d531c7ce5b48d7f9b80bd1359f0d847bb13dc49bf39.scope: Deactivated successfully.
Dec  2 05:57:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:57:55 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:57:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:57:55 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:57:55 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 1d6e8947-5a9a-444d-95be-0d544c542221 does not exist
Dec  2 05:57:55 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 46345064-4e68-45f3-b182-93f5492d3491 does not exist
Dec  2 05:57:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:57:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:57:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:57:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:57:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:57:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:57:56 np0005542249 systemd-logind[787]: New session 43 of user zuul.
Dec  2 05:57:56 np0005542249 systemd[1]: Started Session 43 of User zuul.
Dec  2 05:57:56 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:57:56 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:57:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:57:57 np0005542249 python3.9[133988]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:57:58 np0005542249 python3.9[134144]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 05:57:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:57:59 np0005542249 python3.9[134228]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  2 05:57:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:58:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:01 np0005542249 python3.9[134379]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:58:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:03 np0005542249 python3.9[134530]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  2 05:58:04 np0005542249 python3.9[134680]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 05:58:04 np0005542249 python3.9[134830]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 05:58:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:58:05 np0005542249 systemd[1]: session-43.scope: Deactivated successfully.
Dec  2 05:58:05 np0005542249 systemd[1]: session-43.scope: Consumed 6.388s CPU time.
Dec  2 05:58:05 np0005542249 systemd-logind[787]: Session 43 logged out. Waiting for processes to exit.
Dec  2 05:58:05 np0005542249 systemd-logind[787]: Removed session 43.
Dec  2 05:58:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:58:10 np0005542249 systemd-logind[787]: New session 44 of user zuul.
Dec  2 05:58:10 np0005542249 systemd[1]: Started Session 44 of User zuul.
Dec  2 05:58:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:11 np0005542249 python3.9[135008]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:58:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:13 np0005542249 python3.9[135164]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:58:14 np0005542249 python3.9[135316]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:58:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:58:15 np0005542249 python3.9[135468]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:58:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:15 np0005542249 python3.9[135591]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673094.3994782-65-137199241441590/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=c056c722d798658b0d0e36cf3ba6b9ebe5a6e581 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:58:16 np0005542249 python3.9[135743]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:58:17 np0005542249 python3.9[135866]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673095.9796321-65-221676793333698/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=d9f815407cbcf4edad4c26a1db3d925de3040737 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:58:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:17 np0005542249 python3.9[136018]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:58:18 np0005542249 python3.9[136141]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673097.3533843-65-124913347528391/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=07e0dfb8aa73f64c3fa911e185b0f613bc3cce90 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:58:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:19 np0005542249 python3.9[136293]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:58:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:58:19 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Dec  2 05:58:19 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:58:19.976612) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  2 05:58:19 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Dec  2 05:58:19 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673099976635, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1519, "num_deletes": 251, "total_data_size": 2253317, "memory_usage": 2282904, "flush_reason": "Manual Compaction"}
Dec  2 05:58:19 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Dec  2 05:58:19 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673099993080, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1313734, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7434, "largest_seqno": 8952, "table_properties": {"data_size": 1308574, "index_size": 2300, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14316, "raw_average_key_size": 20, "raw_value_size": 1296718, "raw_average_value_size": 1855, "num_data_blocks": 109, "num_entries": 699, "num_filter_entries": 699, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672951, "oldest_key_time": 1764672951, "file_creation_time": 1764673099, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Dec  2 05:58:19 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 16512 microseconds, and 3664 cpu microseconds.
Dec  2 05:58:19 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 05:58:19 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:58:19.993121) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1313734 bytes OK
Dec  2 05:58:19 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:58:19.993138) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Dec  2 05:58:19 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:58:19.994599) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Dec  2 05:58:19 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:58:19.994611) EVENT_LOG_v1 {"time_micros": 1764673099994606, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  2 05:58:19 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:58:19.994626) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  2 05:58:19 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2246513, prev total WAL file size 2246513, number of live WAL files 2.
Dec  2 05:58:19 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 05:58:19 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:58:19.995278) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Dec  2 05:58:19 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  2 05:58:19 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1282KB)], [20(7231KB)]
Dec  2 05:58:19 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673099995341, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8719036, "oldest_snapshot_seqno": -1}
Dec  2 05:58:20 np0005542249 python3.9[136445]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:58:20 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3377 keys, 6900041 bytes, temperature: kUnknown
Dec  2 05:58:20 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673100047253, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6900041, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6873955, "index_size": 16568, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 81113, "raw_average_key_size": 24, "raw_value_size": 6809372, "raw_average_value_size": 2016, "num_data_blocks": 734, "num_entries": 3377, "num_filter_entries": 3377, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672515, "oldest_key_time": 0, "file_creation_time": 1764673099, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec  2 05:58:20 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 05:58:20 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:58:20.047783) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6900041 bytes
Dec  2 05:58:20 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:58:20.049331) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.1 rd, 132.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.1 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(11.9) write-amplify(5.3) OK, records in: 3822, records dropped: 445 output_compression: NoCompression
Dec  2 05:58:20 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:58:20.049353) EVENT_LOG_v1 {"time_micros": 1764673100049343, "job": 6, "event": "compaction_finished", "compaction_time_micros": 52170, "compaction_time_cpu_micros": 14306, "output_level": 6, "num_output_files": 1, "total_output_size": 6900041, "num_input_records": 3822, "num_output_records": 3377, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  2 05:58:20 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 05:58:20 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673100049894, "job": 6, "event": "table_file_deletion", "file_number": 22}
Dec  2 05:58:20 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 05:58:20 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673100051419, "job": 6, "event": "table_file_deletion", "file_number": 20}
Dec  2 05:58:20 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:58:19.995197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 05:58:20 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:58:20.051520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 05:58:20 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:58:20.051526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 05:58:20 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:58:20.051527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 05:58:20 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:58:20.051529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 05:58:20 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:58:20.051530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 05:58:20 np0005542249 python3.9[136597]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:58:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:21 np0005542249 python3.9[136720]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673100.2697892-124-99623779847070/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=494913bb5476b680080bb8f802353f96619446ab backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:58:22 np0005542249 python3.9[136872]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:58:22 np0005542249 python3.9[136995]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673101.5833585-124-226225221715650/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=cb3d3c584ad463932f40d925b334d1b635db0772 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:58:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:23 np0005542249 python3.9[137147]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:58:24 np0005542249 python3.9[137270]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673102.9452453-124-61898292947889/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=be197ed009736a38c22bb0ab804937b32606596d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:58:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:58:25 np0005542249 python3.9[137422]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:58:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:25 np0005542249 python3.9[137574]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:58:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_10:58:26
Dec  2 05:58:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 05:58:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 05:58:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'backups', 'vms', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', '.mgr']
Dec  2 05:58:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 05:58:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:58:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:58:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:58:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:58:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:58:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:58:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 05:58:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 05:58:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 05:58:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 05:58:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 05:58:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 05:58:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 05:58:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 05:58:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 05:58:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 05:58:26 np0005542249 python3.9[137726]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:58:27 np0005542249 python3.9[137849]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673105.9654894-183-84163198759477/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=7e4366ee7095a80a9811f0b1e5048f215f4012c7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:58:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:27 np0005542249 python3.9[138001]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:58:28 np0005542249 python3.9[138124]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673107.2916229-183-133639432828396/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=cb3d3c584ad463932f40d925b334d1b635db0772 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:58:29 np0005542249 python3.9[138276]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:58:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:29 np0005542249 python3.9[138399]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673108.6106014-183-15710769068456/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=b8f05fff3e426fd2f36e8b7cf29153f5d8e380bc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:58:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:58:31 np0005542249 python3.9[138551]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:58:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:31 np0005542249 python3.9[138703]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:58:32 np0005542249 python3.9[138826]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673111.281473-251-103002084510698/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=d7146d9f3fadd3a8b8a7aad758bb65bc8e959c93 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:58:33 np0005542249 python3.9[138978]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:58:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:33 np0005542249 python3.9[139130]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:58:34 np0005542249 python3.9[139253]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673113.3764417-275-73350478469847/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=d7146d9f3fadd3a8b8a7aad758bb65bc8e959c93 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:58:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:58:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:35 np0005542249 python3.9[139405]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:58:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 05:58:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:58:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 05:58:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:58:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:58:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:58:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:58:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:58:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:58:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:58:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:58:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:58:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 05:58:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:58:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:58:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:58:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 05:58:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:58:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 05:58:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:58:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:58:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:58:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 05:58:36 np0005542249 python3.9[139557]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:58:36 np0005542249 python3.9[139680]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673115.559878-299-220743958368605/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=d7146d9f3fadd3a8b8a7aad758bb65bc8e959c93 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:58:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:37 np0005542249 python3.9[139832]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:58:38 np0005542249 python3.9[139984]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:58:38 np0005542249 python3.9[140107]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673117.755388-323-157818444822617/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=d7146d9f3fadd3a8b8a7aad758bb65bc8e959c93 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:58:39 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  2 05:58:39 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 1995 writes, 8948 keys, 1995 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 1995 writes, 1995 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1995 writes, 8948 keys, 1995 commit groups, 1.0 writes per commit group, ingest: 11.04 MB, 0.02 MB/s#012Interval WAL: 1995 writes, 1995 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    124.9      0.07              0.02         3    0.022       0      0       0.0       0.0#012  L6      1/0    6.58 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    162.3    143.6      0.10              0.03         2    0.048    7235    735       0.0       0.0#012 Sum      1/0    6.58 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     95.2    135.8      0.16              0.05         5    0.032    7235    735       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     96.9    137.9      0.16              0.05         4    0.040    7235    735       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    162.3    143.6      0.10              0.03         2    0.048    7235    735       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    129.4      0.06              0.02         2    0.032       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.008, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560e2b4e71f0#2 capacity: 308.00 MB usage: 574.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(35,483.39 KB,0.153267%) FilterBlock(6,28.55 KB,0.00905124%) IndexBlock(6,62.92 KB,0.0199504%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  2 05:58:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:39 np0005542249 python3.9[140259]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:58:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:58:40 np0005542249 python3.9[140411]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:58:40 np0005542249 python3.9[140534]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673119.8626647-347-93723755330535/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=d7146d9f3fadd3a8b8a7aad758bb65bc8e959c93 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:58:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:41 np0005542249 python3.9[140686]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:58:42 np0005542249 python3.9[140838]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:58:42 np0005542249 python3.9[140961]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673121.8167534-371-16638619079325/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=d7146d9f3fadd3a8b8a7aad758bb65bc8e959c93 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:58:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:43 np0005542249 systemd-logind[787]: Session 44 logged out. Waiting for processes to exit.
Dec  2 05:58:43 np0005542249 systemd[1]: session-44.scope: Deactivated successfully.
Dec  2 05:58:43 np0005542249 systemd[1]: session-44.scope: Consumed 25.498s CPU time.
Dec  2 05:58:43 np0005542249 systemd-logind[787]: Removed session 44.
Dec  2 05:58:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:58:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v397: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v398: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:49 np0005542249 systemd-logind[787]: New session 45 of user zuul.
Dec  2 05:58:49 np0005542249 systemd[1]: Started Session 45 of User zuul.
Dec  2 05:58:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:58:50 np0005542249 python3.9[141141]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:58:51 np0005542249 python3.9[141293]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:58:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v399: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:51 np0005542249 python3.9[141416]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764673130.421606-34-153557091145376/.source.conf _original_basename=ceph.conf follow=False checksum=ee02dcfd1676ae806877ad66f59904fb8eb76198 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:58:52 np0005542249 python3.9[141568]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:58:52 np0005542249 python3.9[141691]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764673131.8535163-34-242857411473413/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=17ecae96b4c7e30ce02024b078d2b5cdfc359db1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:58:53 np0005542249 systemd[1]: session-45.scope: Deactivated successfully.
Dec  2 05:58:53 np0005542249 systemd[1]: session-45.scope: Consumed 2.760s CPU time.
Dec  2 05:58:53 np0005542249 systemd-logind[787]: Session 45 logged out. Waiting for processes to exit.
Dec  2 05:58:53 np0005542249 systemd-logind[787]: Removed session 45.
Dec  2 05:58:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v400: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:58:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v401: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:58:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:58:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:58:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:58:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:58:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:58:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:58:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:58:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 05:58:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:58:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 05:58:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:58:56 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev b96dfa77-3c4e-41b6-8bcc-1e7e62cdade7 does not exist
Dec  2 05:58:56 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev c1dc6905-0e39-407e-9c02-fa1470cd0d50 does not exist
Dec  2 05:58:56 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 2a26c191-7f3b-49d2-9963-535ec0edd36f does not exist
Dec  2 05:58:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 05:58:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 05:58:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 05:58:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 05:58:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 05:58:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 05:58:57 np0005542249 podman[141988]: 2025-12-02 10:58:57.17691503 +0000 UTC m=+0.059039353 container create 0fa1dc72ee823e0816616dc8c36c6ece6ccfeae1cf7607cf273db5a63d30a7f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:58:57 np0005542249 systemd[1]: Started libpod-conmon-0fa1dc72ee823e0816616dc8c36c6ece6ccfeae1cf7607cf273db5a63d30a7f3.scope.
Dec  2 05:58:57 np0005542249 podman[141988]: 2025-12-02 10:58:57.157292251 +0000 UTC m=+0.039416604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:58:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v402: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:57 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:58:57 np0005542249 podman[141988]: 2025-12-02 10:58:57.277735382 +0000 UTC m=+0.159859765 container init 0fa1dc72ee823e0816616dc8c36c6ece6ccfeae1cf7607cf273db5a63d30a7f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_moore, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  2 05:58:57 np0005542249 podman[141988]: 2025-12-02 10:58:57.289031064 +0000 UTC m=+0.171155397 container start 0fa1dc72ee823e0816616dc8c36c6ece6ccfeae1cf7607cf273db5a63d30a7f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_moore, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  2 05:58:57 np0005542249 podman[141988]: 2025-12-02 10:58:57.292605886 +0000 UTC m=+0.174730259 container attach 0fa1dc72ee823e0816616dc8c36c6ece6ccfeae1cf7607cf273db5a63d30a7f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_moore, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:58:57 np0005542249 serene_moore[142004]: 167 167
Dec  2 05:58:57 np0005542249 systemd[1]: libpod-0fa1dc72ee823e0816616dc8c36c6ece6ccfeae1cf7607cf273db5a63d30a7f3.scope: Deactivated successfully.
Dec  2 05:58:57 np0005542249 podman[141988]: 2025-12-02 10:58:57.297661621 +0000 UTC m=+0.179785984 container died 0fa1dc72ee823e0816616dc8c36c6ece6ccfeae1cf7607cf273db5a63d30a7f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec  2 05:58:57 np0005542249 systemd[1]: var-lib-containers-storage-overlay-75b59869410763383451fd5f137829971dd5b6d7548e46b2199f6fdd4dfc73c8-merged.mount: Deactivated successfully.
Dec  2 05:58:57 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 05:58:57 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:58:57 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 05:58:57 np0005542249 podman[141988]: 2025-12-02 10:58:57.349968191 +0000 UTC m=+0.232092514 container remove 0fa1dc72ee823e0816616dc8c36c6ece6ccfeae1cf7607cf273db5a63d30a7f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:58:57 np0005542249 systemd[1]: libpod-conmon-0fa1dc72ee823e0816616dc8c36c6ece6ccfeae1cf7607cf273db5a63d30a7f3.scope: Deactivated successfully.
Dec  2 05:58:57 np0005542249 podman[142027]: 2025-12-02 10:58:57.538649137 +0000 UTC m=+0.051837498 container create ea176b74589da91f396c9aaa6dc5b72ffd8641d0ef7388c44bacf3b12d2b7e86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_swanson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Dec  2 05:58:57 np0005542249 systemd[1]: Started libpod-conmon-ea176b74589da91f396c9aaa6dc5b72ffd8641d0ef7388c44bacf3b12d2b7e86.scope.
Dec  2 05:58:57 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:58:57 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb27f20272831fda209173135d49f24e615369803998ebb48d19c453a1b492e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:58:57 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb27f20272831fda209173135d49f24e615369803998ebb48d19c453a1b492e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:58:57 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb27f20272831fda209173135d49f24e615369803998ebb48d19c453a1b492e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:58:57 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb27f20272831fda209173135d49f24e615369803998ebb48d19c453a1b492e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:58:57 np0005542249 podman[142027]: 2025-12-02 10:58:57.511516163 +0000 UTC m=+0.024704524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:58:57 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb27f20272831fda209173135d49f24e615369803998ebb48d19c453a1b492e9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 05:58:57 np0005542249 podman[142027]: 2025-12-02 10:58:57.624374709 +0000 UTC m=+0.137563050 container init ea176b74589da91f396c9aaa6dc5b72ffd8641d0ef7388c44bacf3b12d2b7e86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_swanson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  2 05:58:57 np0005542249 podman[142027]: 2025-12-02 10:58:57.631954305 +0000 UTC m=+0.145142636 container start ea176b74589da91f396c9aaa6dc5b72ffd8641d0ef7388c44bacf3b12d2b7e86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_swanson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:58:57 np0005542249 podman[142027]: 2025-12-02 10:58:57.635214578 +0000 UTC m=+0.148402919 container attach ea176b74589da91f396c9aaa6dc5b72ffd8641d0ef7388c44bacf3b12d2b7e86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_swanson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  2 05:58:58 np0005542249 adoring_swanson[142043]: --> passed data devices: 0 physical, 3 LVM
Dec  2 05:58:58 np0005542249 adoring_swanson[142043]: --> relative data size: 1.0
Dec  2 05:58:58 np0005542249 adoring_swanson[142043]: --> All data devices are unavailable
Dec  2 05:58:58 np0005542249 systemd[1]: libpod-ea176b74589da91f396c9aaa6dc5b72ffd8641d0ef7388c44bacf3b12d2b7e86.scope: Deactivated successfully.
Dec  2 05:58:58 np0005542249 systemd[1]: libpod-ea176b74589da91f396c9aaa6dc5b72ffd8641d0ef7388c44bacf3b12d2b7e86.scope: Consumed 1.043s CPU time.
Dec  2 05:58:58 np0005542249 podman[142073]: 2025-12-02 10:58:58.769134746 +0000 UTC m=+0.031890610 container died ea176b74589da91f396c9aaa6dc5b72ffd8641d0ef7388c44bacf3b12d2b7e86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_swanson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  2 05:58:58 np0005542249 systemd-logind[787]: New session 46 of user zuul.
Dec  2 05:58:58 np0005542249 systemd[1]: Started Session 46 of User zuul.
Dec  2 05:58:58 np0005542249 systemd[1]: var-lib-containers-storage-overlay-bb27f20272831fda209173135d49f24e615369803998ebb48d19c453a1b492e9-merged.mount: Deactivated successfully.
Dec  2 05:58:58 np0005542249 podman[142073]: 2025-12-02 10:58:58.878361118 +0000 UTC m=+0.141116972 container remove ea176b74589da91f396c9aaa6dc5b72ffd8641d0ef7388c44bacf3b12d2b7e86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec  2 05:58:58 np0005542249 systemd[1]: libpod-conmon-ea176b74589da91f396c9aaa6dc5b72ffd8641d0ef7388c44bacf3b12d2b7e86.scope: Deactivated successfully.
Dec  2 05:58:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v403: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:58:59 np0005542249 podman[142297]: 2025-12-02 10:58:59.482176212 +0000 UTC m=+0.044639803 container create 79be1c0529501b69ab854f76ff388539892adc8a16954e16511b04d5bfcda151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_kepler, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:58:59 np0005542249 systemd[1]: Started libpod-conmon-79be1c0529501b69ab854f76ff388539892adc8a16954e16511b04d5bfcda151.scope.
Dec  2 05:58:59 np0005542249 podman[142297]: 2025-12-02 10:58:59.462369488 +0000 UTC m=+0.024833069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:58:59 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:58:59 np0005542249 podman[142297]: 2025-12-02 10:58:59.573323459 +0000 UTC m=+0.135787040 container init 79be1c0529501b69ab854f76ff388539892adc8a16954e16511b04d5bfcda151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_kepler, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  2 05:58:59 np0005542249 podman[142297]: 2025-12-02 10:58:59.580060381 +0000 UTC m=+0.142523952 container start 79be1c0529501b69ab854f76ff388539892adc8a16954e16511b04d5bfcda151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:58:59 np0005542249 podman[142297]: 2025-12-02 10:58:59.583444447 +0000 UTC m=+0.145908008 container attach 79be1c0529501b69ab854f76ff388539892adc8a16954e16511b04d5bfcda151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_kepler, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:58:59 np0005542249 wizardly_kepler[142346]: 167 167
Dec  2 05:58:59 np0005542249 systemd[1]: libpod-79be1c0529501b69ab854f76ff388539892adc8a16954e16511b04d5bfcda151.scope: Deactivated successfully.
Dec  2 05:58:59 np0005542249 conmon[142346]: conmon 79be1c0529501b69ab85 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-79be1c0529501b69ab854f76ff388539892adc8a16954e16511b04d5bfcda151.scope/container/memory.events
Dec  2 05:58:59 np0005542249 podman[142297]: 2025-12-02 10:58:59.58563627 +0000 UTC m=+0.148099821 container died 79be1c0529501b69ab854f76ff388539892adc8a16954e16511b04d5bfcda151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  2 05:58:59 np0005542249 systemd[1]: var-lib-containers-storage-overlay-aa11ffeb24d74984fc504858ade9df4152eaa664a220715f11a80e8caaf0d5aa-merged.mount: Deactivated successfully.
Dec  2 05:58:59 np0005542249 podman[142297]: 2025-12-02 10:58:59.621610284 +0000 UTC m=+0.184073845 container remove 79be1c0529501b69ab854f76ff388539892adc8a16954e16511b04d5bfcda151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_kepler, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 05:58:59 np0005542249 systemd[1]: libpod-conmon-79be1c0529501b69ab854f76ff388539892adc8a16954e16511b04d5bfcda151.scope: Deactivated successfully.
Dec  2 05:58:59 np0005542249 podman[142421]: 2025-12-02 10:58:59.833545264 +0000 UTC m=+0.094554566 container create e8dff393c4ac75de53be5c960f4bedd82b633e361b59a8b8b8ca2cb3a0fe1ca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendel, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:58:59 np0005542249 podman[142421]: 2025-12-02 10:58:59.765924716 +0000 UTC m=+0.026934038 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:58:59 np0005542249 systemd[1]: Started libpod-conmon-e8dff393c4ac75de53be5c960f4bedd82b633e361b59a8b8b8ca2cb3a0fe1ca5.scope.
Dec  2 05:58:59 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:58:59 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d84087706654852e94ce41eddef1e6f34cf44e77378511348c7fd6e0ad78bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:58:59 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d84087706654852e94ce41eddef1e6f34cf44e77378511348c7fd6e0ad78bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:58:59 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d84087706654852e94ce41eddef1e6f34cf44e77378511348c7fd6e0ad78bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:58:59 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d84087706654852e94ce41eddef1e6f34cf44e77378511348c7fd6e0ad78bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:58:59 np0005542249 podman[142421]: 2025-12-02 10:58:59.941792257 +0000 UTC m=+0.202801589 container init e8dff393c4ac75de53be5c960f4bedd82b633e361b59a8b8b8ca2cb3a0fe1ca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  2 05:58:59 np0005542249 podman[142421]: 2025-12-02 10:58:59.949833867 +0000 UTC m=+0.210843169 container start e8dff393c4ac75de53be5c960f4bedd82b633e361b59a8b8b8ca2cb3a0fe1ca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendel, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 05:58:59 np0005542249 podman[142421]: 2025-12-02 10:58:59.953315085 +0000 UTC m=+0.214324387 container attach e8dff393c4ac75de53be5c960f4bedd82b633e361b59a8b8b8ca2cb3a0fe1ca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:58:59 np0005542249 python3.9[142415]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:58:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]: {
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:    "0": [
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:        {
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "devices": [
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "/dev/loop3"
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            ],
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "lv_name": "ceph_lv0",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "lv_size": "21470642176",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "name": "ceph_lv0",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "tags": {
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.cluster_name": "ceph",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.crush_device_class": "",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.encrypted": "0",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.osd_id": "0",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.type": "block",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.vdo": "0"
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            },
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "type": "block",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "vg_name": "ceph_vg0"
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:        }
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:    ],
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:    "1": [
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:        {
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "devices": [
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "/dev/loop4"
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            ],
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "lv_name": "ceph_lv1",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "lv_size": "21470642176",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "name": "ceph_lv1",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "tags": {
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.cluster_name": "ceph",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.crush_device_class": "",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.encrypted": "0",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.osd_id": "1",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.type": "block",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.vdo": "0"
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            },
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "type": "block",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "vg_name": "ceph_vg1"
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:        }
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:    ],
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:    "2": [
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:        {
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "devices": [
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "/dev/loop5"
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            ],
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "lv_name": "ceph_lv2",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "lv_size": "21470642176",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "name": "ceph_lv2",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "tags": {
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.cephx_lockbox_secret": "",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.cluster_name": "ceph",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.crush_device_class": "",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.encrypted": "0",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.osd_id": "2",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.type": "block",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:                "ceph.vdo": "0"
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            },
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "type": "block",
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:            "vg_name": "ceph_vg2"
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:        }
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]:    ]
Dec  2 05:59:00 np0005542249 jovial_mendel[142438]: }
Dec  2 05:59:00 np0005542249 systemd[1]: libpod-e8dff393c4ac75de53be5c960f4bedd82b633e361b59a8b8b8ca2cb3a0fe1ca5.scope: Deactivated successfully.
Dec  2 05:59:00 np0005542249 podman[142421]: 2025-12-02 10:59:00.885510946 +0000 UTC m=+1.146520298 container died e8dff393c4ac75de53be5c960f4bedd82b633e361b59a8b8b8ca2cb3a0fe1ca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  2 05:59:01 np0005542249 systemd[1]: var-lib-containers-storage-overlay-18d84087706654852e94ce41eddef1e6f34cf44e77378511348c7fd6e0ad78bd-merged.mount: Deactivated successfully.
Dec  2 05:59:01 np0005542249 podman[142421]: 2025-12-02 10:59:01.211216126 +0000 UTC m=+1.472225428 container remove e8dff393c4ac75de53be5c960f4bedd82b633e361b59a8b8b8ca2cb3a0fe1ca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendel, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  2 05:59:01 np0005542249 systemd[1]: libpod-conmon-e8dff393c4ac75de53be5c960f4bedd82b633e361b59a8b8b8ca2cb3a0fe1ca5.scope: Deactivated successfully.
Dec  2 05:59:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v404: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:01 np0005542249 python3.9[142613]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:59:01 np0005542249 podman[142905]: 2025-12-02 10:59:01.797939893 +0000 UTC m=+0.044412316 container create bde1d513019ac32ca83a1a4b740e34f054f0fbbf7b67b77267ed41d10e49a5ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:59:01 np0005542249 systemd[1]: Started libpod-conmon-bde1d513019ac32ca83a1a4b740e34f054f0fbbf7b67b77267ed41d10e49a5ce.scope.
Dec  2 05:59:01 np0005542249 podman[142905]: 2025-12-02 10:59:01.779039234 +0000 UTC m=+0.025511667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:59:01 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:59:01 np0005542249 podman[142905]: 2025-12-02 10:59:01.924498974 +0000 UTC m=+0.170971407 container init bde1d513019ac32ca83a1a4b740e34f054f0fbbf7b67b77267ed41d10e49a5ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 05:59:01 np0005542249 podman[142905]: 2025-12-02 10:59:01.931427621 +0000 UTC m=+0.177900034 container start bde1d513019ac32ca83a1a4b740e34f054f0fbbf7b67b77267ed41d10e49a5ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_keller, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  2 05:59:01 np0005542249 podman[142905]: 2025-12-02 10:59:01.934743975 +0000 UTC m=+0.181216388 container attach bde1d513019ac32ca83a1a4b740e34f054f0fbbf7b67b77267ed41d10e49a5ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:59:01 np0005542249 lucid_keller[142921]: 167 167
Dec  2 05:59:01 np0005542249 systemd[1]: libpod-bde1d513019ac32ca83a1a4b740e34f054f0fbbf7b67b77267ed41d10e49a5ce.scope: Deactivated successfully.
Dec  2 05:59:01 np0005542249 podman[142905]: 2025-12-02 10:59:01.938433061 +0000 UTC m=+0.184905474 container died bde1d513019ac32ca83a1a4b740e34f054f0fbbf7b67b77267ed41d10e49a5ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 05:59:01 np0005542249 python3.9[142902]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:59:01 np0005542249 systemd[1]: var-lib-containers-storage-overlay-1a057e93b5ccec0c608d7294ae5d9524fa84503914e6c7b87de5659f03fb1db2-merged.mount: Deactivated successfully.
Dec  2 05:59:01 np0005542249 podman[142905]: 2025-12-02 10:59:01.975504297 +0000 UTC m=+0.221976710 container remove bde1d513019ac32ca83a1a4b740e34f054f0fbbf7b67b77267ed41d10e49a5ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 05:59:01 np0005542249 systemd[1]: libpod-conmon-bde1d513019ac32ca83a1a4b740e34f054f0fbbf7b67b77267ed41d10e49a5ce.scope: Deactivated successfully.
Dec  2 05:59:02 np0005542249 podman[142969]: 2025-12-02 10:59:02.113230251 +0000 UTC m=+0.021466092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 05:59:02 np0005542249 podman[142969]: 2025-12-02 10:59:02.323314086 +0000 UTC m=+0.231549887 container create a3b3e406f955498a3418472ba913aa0c64afd344b197babe1de5c5d96af2d81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  2 05:59:02 np0005542249 systemd[1]: Started libpod-conmon-a3b3e406f955498a3418472ba913aa0c64afd344b197babe1de5c5d96af2d81b.scope.
Dec  2 05:59:02 np0005542249 systemd[1]: Started libcrun container.
Dec  2 05:59:02 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3aee7354917a484a2452abda039bd25bc1d1dbf2e5ca2fdb938be11c8572cc7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 05:59:02 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3aee7354917a484a2452abda039bd25bc1d1dbf2e5ca2fdb938be11c8572cc7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 05:59:02 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3aee7354917a484a2452abda039bd25bc1d1dbf2e5ca2fdb938be11c8572cc7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 05:59:02 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3aee7354917a484a2452abda039bd25bc1d1dbf2e5ca2fdb938be11c8572cc7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 05:59:02 np0005542249 podman[142969]: 2025-12-02 10:59:02.430072667 +0000 UTC m=+0.338308488 container init a3b3e406f955498a3418472ba913aa0c64afd344b197babe1de5c5d96af2d81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 05:59:02 np0005542249 podman[142969]: 2025-12-02 10:59:02.436577863 +0000 UTC m=+0.344813654 container start a3b3e406f955498a3418472ba913aa0c64afd344b197babe1de5c5d96af2d81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:59:02 np0005542249 podman[142969]: 2025-12-02 10:59:02.440427603 +0000 UTC m=+0.348663424 container attach a3b3e406f955498a3418472ba913aa0c64afd344b197babe1de5c5d96af2d81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  2 05:59:02 np0005542249 python3.9[143116]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:59:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:03 np0005542249 compassionate_elgamal[143067]: {
Dec  2 05:59:03 np0005542249 compassionate_elgamal[143067]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 05:59:03 np0005542249 compassionate_elgamal[143067]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:59:03 np0005542249 compassionate_elgamal[143067]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 05:59:03 np0005542249 compassionate_elgamal[143067]:        "osd_id": 0,
Dec  2 05:59:03 np0005542249 compassionate_elgamal[143067]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 05:59:03 np0005542249 compassionate_elgamal[143067]:        "type": "bluestore"
Dec  2 05:59:03 np0005542249 compassionate_elgamal[143067]:    },
Dec  2 05:59:03 np0005542249 compassionate_elgamal[143067]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 05:59:03 np0005542249 compassionate_elgamal[143067]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:59:03 np0005542249 compassionate_elgamal[143067]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 05:59:03 np0005542249 compassionate_elgamal[143067]:        "osd_id": 2,
Dec  2 05:59:03 np0005542249 compassionate_elgamal[143067]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 05:59:03 np0005542249 compassionate_elgamal[143067]:        "type": "bluestore"
Dec  2 05:59:03 np0005542249 compassionate_elgamal[143067]:    },
Dec  2 05:59:03 np0005542249 compassionate_elgamal[143067]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 05:59:03 np0005542249 compassionate_elgamal[143067]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 05:59:03 np0005542249 compassionate_elgamal[143067]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 05:59:03 np0005542249 compassionate_elgamal[143067]:        "osd_id": 1,
Dec  2 05:59:03 np0005542249 compassionate_elgamal[143067]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 05:59:03 np0005542249 compassionate_elgamal[143067]:        "type": "bluestore"
Dec  2 05:59:03 np0005542249 compassionate_elgamal[143067]:    }
Dec  2 05:59:03 np0005542249 compassionate_elgamal[143067]: }
Dec  2 05:59:03 np0005542249 systemd[1]: libpod-a3b3e406f955498a3418472ba913aa0c64afd344b197babe1de5c5d96af2d81b.scope: Deactivated successfully.
Dec  2 05:59:03 np0005542249 podman[142969]: 2025-12-02 10:59:03.460699469 +0000 UTC m=+1.368935260 container died a3b3e406f955498a3418472ba913aa0c64afd344b197babe1de5c5d96af2d81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 05:59:03 np0005542249 systemd[1]: libpod-a3b3e406f955498a3418472ba913aa0c64afd344b197babe1de5c5d96af2d81b.scope: Consumed 1.027s CPU time.
Dec  2 05:59:03 np0005542249 python3.9[143297]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  2 05:59:03 np0005542249 systemd[1]: var-lib-containers-storage-overlay-f3aee7354917a484a2452abda039bd25bc1d1dbf2e5ca2fdb938be11c8572cc7-merged.mount: Deactivated successfully.
Dec  2 05:59:03 np0005542249 podman[142969]: 2025-12-02 10:59:03.929981369 +0000 UTC m=+1.838217160 container remove a3b3e406f955498a3418472ba913aa0c64afd344b197babe1de5c5d96af2d81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Dec  2 05:59:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 05:59:03 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:59:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 05:59:03 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:59:03 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev dfbdec17-ec7d-4733-96c4-400ea86ec0f5 does not exist
Dec  2 05:59:03 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 513273eb-5b37-4101-ac6a-243e012906e5 does not exist
Dec  2 05:59:04 np0005542249 systemd[1]: libpod-conmon-a3b3e406f955498a3418472ba913aa0c64afd344b197babe1de5c5d96af2d81b.scope: Deactivated successfully.
Dec  2 05:59:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:59:04 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:59:04 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 05:59:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v406: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:05 np0005542249 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec  2 05:59:05 np0005542249 python3.9[143514]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 05:59:06 np0005542249 python3.9[143598]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 05:59:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:09 np0005542249 python3.9[143751]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  2 05:59:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:59:10 np0005542249 python3[143906]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec  2 05:59:10 np0005542249 python3.9[144058]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:59:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:11 np0005542249 python3.9[144210]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:59:12 np0005542249 python3.9[144288]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:59:13 np0005542249 python3.9[144440]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:59:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:13 np0005542249 python3.9[144518]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ci50fic4 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:59:14 np0005542249 python3.9[144670]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:59:14 np0005542249 python3.9[144748]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:59:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:59:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:15 np0005542249 python3.9[144900]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:59:16 np0005542249 python3[145053]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  2 05:59:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:17 np0005542249 python3.9[145205]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:59:18 np0005542249 python3.9[145330]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673157.0317986-157-162489982578414/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:59:18 np0005542249 python3.9[145482]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:59:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:19 np0005542249 python3.9[145607]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673158.4704733-172-80245088616262/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:59:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:59:20 np0005542249 python3.9[145759]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:59:20 np0005542249 python3.9[145884]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673159.7880788-187-103141682136420/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:59:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:21 np0005542249 python3.9[146036]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:59:22 np0005542249 python3.9[146161]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673161.167463-202-230415422289744/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:59:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:23 np0005542249 python3.9[146313]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:59:23 np0005542249 python3.9[146438]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673162.6018372-217-17486849457457/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:59:24 np0005542249 python3.9[146590]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:59:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:59:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:25 np0005542249 python3.9[146742]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:59:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_10:59:26
Dec  2 05:59:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 05:59:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 05:59:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'images', 'vms', 'volumes', 'backups', 'default.rgw.control', '.rgw.root', 'default.rgw.meta']
Dec  2 05:59:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 05:59:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:59:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:59:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:59:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:59:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:59:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:59:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 05:59:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 05:59:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 05:59:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 05:59:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 05:59:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 05:59:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 05:59:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 05:59:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 05:59:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 05:59:26 np0005542249 python3.9[146897]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:59:27 np0005542249 python3.9[147049]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:59:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:28 np0005542249 python3.9[147202]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 05:59:28 np0005542249 python3.9[147356]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:59:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:29 np0005542249 python3.9[147511]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:59:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:59:31 np0005542249 python3.9[147661]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 05:59:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:32 np0005542249 python3.9[147814]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:59:32 np0005542249 ovs-vsctl[147815]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec  2 05:59:33 np0005542249 python3.9[147967]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:59:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:33 np0005542249 python3.9[148122]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 05:59:33 np0005542249 ovs-vsctl[148123]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec  2 05:59:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:59:35.002348) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673175002377, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 824, "num_deletes": 251, "total_data_size": 1094431, "memory_usage": 1110336, "flush_reason": "Manual Compaction"}
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673175010767, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1084365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8953, "largest_seqno": 9776, "table_properties": {"data_size": 1080227, "index_size": 1854, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8764, "raw_average_key_size": 18, "raw_value_size": 1071938, "raw_average_value_size": 2275, "num_data_blocks": 86, "num_entries": 471, "num_filter_entries": 471, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764673100, "oldest_key_time": 1764673100, "file_creation_time": 1764673175, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 8472 microseconds, and 3564 cpu microseconds.
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:59:35.010814) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1084365 bytes OK
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:59:35.010838) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:59:35.012442) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:59:35.012459) EVENT_LOG_v1 {"time_micros": 1764673175012453, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:59:35.012479) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1090356, prev total WAL file size 1090356, number of live WAL files 2.
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:59:35.013173) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1058KB)], [23(6738KB)]
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673175013251, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7984406, "oldest_snapshot_seqno": -1}
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3334 keys, 6360395 bytes, temperature: kUnknown
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673175057874, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6360395, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6335668, "index_size": 15274, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8389, "raw_key_size": 80952, "raw_average_key_size": 24, "raw_value_size": 6272918, "raw_average_value_size": 1881, "num_data_blocks": 666, "num_entries": 3334, "num_filter_entries": 3334, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672515, "oldest_key_time": 0, "file_creation_time": 1764673175, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:59:35.058136) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6360395 bytes
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:59:35.059933) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.7 rd, 142.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 6.6 +0.0 blob) out(6.1 +0.0 blob), read-write-amplify(13.2) write-amplify(5.9) OK, records in: 3848, records dropped: 514 output_compression: NoCompression
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:59:35.059975) EVENT_LOG_v1 {"time_micros": 1764673175059955, "job": 8, "event": "compaction_finished", "compaction_time_micros": 44682, "compaction_time_cpu_micros": 21081, "output_level": 6, "num_output_files": 1, "total_output_size": 6360395, "num_input_records": 3848, "num_output_records": 3334, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673175060557, "job": 8, "event": "table_file_deletion", "file_number": 25}
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673175063223, "job": 8, "event": "table_file_deletion", "file_number": 23}
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:59:35.013046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:59:35.063305) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:59:35.063311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:59:35.063313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:59:35.063315) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 05:59:35 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-10:59:35.063317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 05:59:35 np0005542249 python3.9[148273]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 05:59:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 05:59:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:59:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 05:59:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:59:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:59:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:59:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:59:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:59:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:59:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:59:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:59:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:59:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 05:59:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:59:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:59:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:59:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 05:59:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:59:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 05:59:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:59:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 05:59:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 05:59:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 05:59:36 np0005542249 python3.9[148427]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:59:36 np0005542249 python3.9[148579]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:59:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:37 np0005542249 python3.9[148657]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:59:38 np0005542249 python3.9[148809]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:59:38 np0005542249 python3.9[148887]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:59:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:39 np0005542249 python3.9[149039]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:59:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:59:40 np0005542249 python3.9[149191]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:59:40 np0005542249 python3.9[149269]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:59:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:41 np0005542249 python3.9[149421]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:59:42 np0005542249 python3.9[149499]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:59:42 np0005542249 python3.9[149651]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 05:59:42 np0005542249 systemd[1]: Reloading.
Dec  2 05:59:43 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:59:43 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:59:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:44 np0005542249 python3.9[149840]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:59:44 np0005542249 python3.9[149918]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:59:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:59:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:45 np0005542249 python3.9[150070]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:59:46 np0005542249 python3.9[150148]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:59:47 np0005542249 python3.9[150300]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 05:59:47 np0005542249 systemd[1]: Reloading.
Dec  2 05:59:47 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 05:59:47 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 05:59:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:47 np0005542249 systemd[1]: Starting Create netns directory...
Dec  2 05:59:47 np0005542249 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  2 05:59:47 np0005542249 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  2 05:59:47 np0005542249 systemd[1]: Finished Create netns directory.
Dec  2 05:59:48 np0005542249 python3.9[150493]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:59:48 np0005542249 python3.9[150645]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:59:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:49 np0005542249 python3.9[150768]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764673188.4718947-468-138171291240070/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:59:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:59:50 np0005542249 python3.9[150920]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 05:59:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:51 np0005542249 python3.9[151072]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 05:59:51 np0005542249 python3.9[151195]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764673190.8092022-493-164324021472966/.source.json _original_basename=.626uzt50 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:59:52 np0005542249 python3.9[151347]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 05:59:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 05:59:55 np0005542249 python3.9[151774]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec  2 05:59:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:56 np0005542249 python3.9[151926]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  2 05:59:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:59:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:59:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:59:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:59:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 05:59:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 05:59:57 np0005542249 python3.9[152078]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  2 05:59:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:58 np0005542249 python3[152257]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  2 05:59:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 05:59:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:00:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:03 np0005542249 podman[152269]: 2025-12-02 11:00:03.687749521 +0000 UTC m=+4.850328635 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  2 06:00:03 np0005542249 podman[152386]: 2025-12-02 11:00:03.800596185 +0000 UTC m=+0.038006816 container create 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:00:03 np0005542249 podman[152386]: 2025-12-02 11:00:03.780406474 +0000 UTC m=+0.017817125 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  2 06:00:03 np0005542249 python3[152257]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  2 06:00:04 np0005542249 python3.9[152675]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 06:00:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:00:04 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:00:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:00:04 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:00:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:00:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:00:05 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:00:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:00:05 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:00:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:00:05 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:00:05 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 907f5dd4-5ab7-4ee9-8ef2-60741353ccfb does not exist
Dec  2 06:00:05 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 5d3b4eb9-8765-4500-9d55-9db7a0be1506 does not exist
Dec  2 06:00:05 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 3d470ad5-8449-4377-8665-3af974a764eb does not exist
Dec  2 06:00:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:00:05 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:00:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:00:05 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:00:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:00:05 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:00:05 np0005542249 python3.9[152970]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:00:05 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:00:05 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:00:05 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:00:05 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:00:05 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:00:06 np0005542249 python3.9[153165]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 06:00:06 np0005542249 podman[153198]: 2025-12-02 11:00:06.155109297 +0000 UTC m=+0.064925908 container create 4d0a14d19ab04bcbd9011e2c6a58c9ece8edc859d8dfe7ab0ef1424d9b62dedb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Dec  2 06:00:06 np0005542249 systemd[1]: Started libpod-conmon-4d0a14d19ab04bcbd9011e2c6a58c9ece8edc859d8dfe7ab0ef1424d9b62dedb.scope.
Dec  2 06:00:06 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:00:06 np0005542249 podman[153198]: 2025-12-02 11:00:06.131320659 +0000 UTC m=+0.041137360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:00:06 np0005542249 podman[153198]: 2025-12-02 11:00:06.23745296 +0000 UTC m=+0.147269611 container init 4d0a14d19ab04bcbd9011e2c6a58c9ece8edc859d8dfe7ab0ef1424d9b62dedb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  2 06:00:06 np0005542249 podman[153198]: 2025-12-02 11:00:06.247928396 +0000 UTC m=+0.157745037 container start 4d0a14d19ab04bcbd9011e2c6a58c9ece8edc859d8dfe7ab0ef1424d9b62dedb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_perlman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:00:06 np0005542249 podman[153198]: 2025-12-02 11:00:06.25326531 +0000 UTC m=+0.163081921 container attach 4d0a14d19ab04bcbd9011e2c6a58c9ece8edc859d8dfe7ab0ef1424d9b62dedb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:00:06 np0005542249 charming_perlman[153254]: 167 167
Dec  2 06:00:06 np0005542249 systemd[1]: libpod-4d0a14d19ab04bcbd9011e2c6a58c9ece8edc859d8dfe7ab0ef1424d9b62dedb.scope: Deactivated successfully.
Dec  2 06:00:06 np0005542249 podman[153198]: 2025-12-02 11:00:06.254920466 +0000 UTC m=+0.164737107 container died 4d0a14d19ab04bcbd9011e2c6a58c9ece8edc859d8dfe7ab0ef1424d9b62dedb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:00:06 np0005542249 systemd[1]: var-lib-containers-storage-overlay-0d2f8130283be5d1709609710d1a09ef26c4d35ff6b8311ba67f9f8f5b45f358-merged.mount: Deactivated successfully.
Dec  2 06:00:06 np0005542249 podman[153198]: 2025-12-02 11:00:06.302264305 +0000 UTC m=+0.212080916 container remove 4d0a14d19ab04bcbd9011e2c6a58c9ece8edc859d8dfe7ab0ef1424d9b62dedb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_perlman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 06:00:06 np0005542249 systemd[1]: libpod-conmon-4d0a14d19ab04bcbd9011e2c6a58c9ece8edc859d8dfe7ab0ef1424d9b62dedb.scope: Deactivated successfully.
Dec  2 06:00:06 np0005542249 podman[153310]: 2025-12-02 11:00:06.460405122 +0000 UTC m=+0.043997119 container create e723d5ca6672a2529da1a982d8e511991f331b3f78f925d900ada516c7079fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  2 06:00:06 np0005542249 systemd[1]: Started libpod-conmon-e723d5ca6672a2529da1a982d8e511991f331b3f78f925d900ada516c7079fde.scope.
Dec  2 06:00:06 np0005542249 podman[153310]: 2025-12-02 11:00:06.437667583 +0000 UTC m=+0.021259580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:00:06 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:00:06 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a879cf8dfcd078ad5cbf17769fb5c43d3b896f230a0f669aa62f86ff47886e0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:00:06 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a879cf8dfcd078ad5cbf17769fb5c43d3b896f230a0f669aa62f86ff47886e0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:00:06 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a879cf8dfcd078ad5cbf17769fb5c43d3b896f230a0f669aa62f86ff47886e0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:00:06 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a879cf8dfcd078ad5cbf17769fb5c43d3b896f230a0f669aa62f86ff47886e0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:00:06 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a879cf8dfcd078ad5cbf17769fb5c43d3b896f230a0f669aa62f86ff47886e0a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:00:06 np0005542249 podman[153310]: 2025-12-02 11:00:06.592698165 +0000 UTC m=+0.176290242 container init e723d5ca6672a2529da1a982d8e511991f331b3f78f925d900ada516c7079fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:00:06 np0005542249 podman[153310]: 2025-12-02 11:00:06.602397829 +0000 UTC m=+0.185989816 container start e723d5ca6672a2529da1a982d8e511991f331b3f78f925d900ada516c7079fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cohen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:00:06 np0005542249 podman[153310]: 2025-12-02 11:00:06.60610725 +0000 UTC m=+0.189699327 container attach e723d5ca6672a2529da1a982d8e511991f331b3f78f925d900ada516c7079fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cohen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  2 06:00:06 np0005542249 python3.9[153407]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764673206.155573-581-184182937030284/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:00:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:07 np0005542249 python3.9[153483]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 06:00:07 np0005542249 systemd[1]: Reloading.
Dec  2 06:00:07 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:00:07 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:00:07 np0005542249 exciting_cohen[153350]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:00:07 np0005542249 exciting_cohen[153350]: --> relative data size: 1.0
Dec  2 06:00:07 np0005542249 exciting_cohen[153350]: --> All data devices are unavailable
Dec  2 06:00:07 np0005542249 podman[153310]: 2025-12-02 11:00:07.730523213 +0000 UTC m=+1.314115210 container died e723d5ca6672a2529da1a982d8e511991f331b3f78f925d900ada516c7079fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:00:07 np0005542249 systemd[1]: libpod-e723d5ca6672a2529da1a982d8e511991f331b3f78f925d900ada516c7079fde.scope: Deactivated successfully.
Dec  2 06:00:07 np0005542249 systemd[1]: libpod-e723d5ca6672a2529da1a982d8e511991f331b3f78f925d900ada516c7079fde.scope: Consumed 1.073s CPU time.
Dec  2 06:00:07 np0005542249 systemd[1]: var-lib-containers-storage-overlay-a879cf8dfcd078ad5cbf17769fb5c43d3b896f230a0f669aa62f86ff47886e0a-merged.mount: Deactivated successfully.
Dec  2 06:00:07 np0005542249 podman[153310]: 2025-12-02 11:00:07.857910993 +0000 UTC m=+1.441502990 container remove e723d5ca6672a2529da1a982d8e511991f331b3f78f925d900ada516c7079fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cohen, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  2 06:00:07 np0005542249 systemd[1]: libpod-conmon-e723d5ca6672a2529da1a982d8e511991f331b3f78f925d900ada516c7079fde.scope: Deactivated successfully.
Dec  2 06:00:08 np0005542249 python3.9[153706]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 06:00:08 np0005542249 systemd[1]: Reloading.
Dec  2 06:00:08 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:00:08 np0005542249 podman[153770]: 2025-12-02 11:00:08.586186437 +0000 UTC m=+0.065983498 container create cf48b904a4765b4931f2f0020d76ede77b3fe5fc8d276a8b0210c2a05c353d1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_chatelet, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:00:08 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:00:08 np0005542249 podman[153770]: 2025-12-02 11:00:08.559424568 +0000 UTC m=+0.039221689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:00:08 np0005542249 systemd[1]: Started libpod-conmon-cf48b904a4765b4931f2f0020d76ede77b3fe5fc8d276a8b0210c2a05c353d1f.scope.
Dec  2 06:00:08 np0005542249 systemd[1]: Starting ovn_controller container...
Dec  2 06:00:08 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:00:08 np0005542249 podman[153770]: 2025-12-02 11:00:08.913451289 +0000 UTC m=+0.393248440 container init cf48b904a4765b4931f2f0020d76ede77b3fe5fc8d276a8b0210c2a05c353d1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:00:08 np0005542249 podman[153770]: 2025-12-02 11:00:08.929148207 +0000 UTC m=+0.408945308 container start cf48b904a4765b4931f2f0020d76ede77b3fe5fc8d276a8b0210c2a05c353d1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_chatelet, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:00:08 np0005542249 podman[153770]: 2025-12-02 11:00:08.937318089 +0000 UTC m=+0.417115250 container attach cf48b904a4765b4931f2f0020d76ede77b3fe5fc8d276a8b0210c2a05c353d1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_chatelet, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:00:08 np0005542249 boring_chatelet[153822]: 167 167
Dec  2 06:00:08 np0005542249 systemd[1]: libpod-cf48b904a4765b4931f2f0020d76ede77b3fe5fc8d276a8b0210c2a05c353d1f.scope: Deactivated successfully.
Dec  2 06:00:08 np0005542249 conmon[153822]: conmon cf48b904a4765b4931f2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cf48b904a4765b4931f2f0020d76ede77b3fe5fc8d276a8b0210c2a05c353d1f.scope/container/memory.events
Dec  2 06:00:09 np0005542249 podman[153839]: 2025-12-02 11:00:09.01189369 +0000 UTC m=+0.047323540 container died cf48b904a4765b4931f2f0020d76ede77b3fe5fc8d276a8b0210c2a05c353d1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_chatelet, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:00:09 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:00:09 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86b1726cb1d10deee462ab1a56f93e2eb0b73920ca53ea4fe2c8c15b9ddeb76c/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  2 06:00:09 np0005542249 systemd[1]: var-lib-containers-storage-overlay-55e4851bfe1489f24040983c5c9dbdcaad2ea764347176cdad79d45a22c38fb7-merged.mount: Deactivated successfully.
Dec  2 06:00:09 np0005542249 podman[153839]: 2025-12-02 11:00:09.06510693 +0000 UTC m=+0.100536790 container remove cf48b904a4765b4931f2f0020d76ede77b3fe5fc8d276a8b0210c2a05c353d1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_chatelet, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:00:09 np0005542249 systemd[1]: Started /usr/bin/podman healthcheck run 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998.
Dec  2 06:00:09 np0005542249 systemd[1]: libpod-conmon-cf48b904a4765b4931f2f0020d76ede77b3fe5fc8d276a8b0210c2a05c353d1f.scope: Deactivated successfully.
Dec  2 06:00:09 np0005542249 podman[153826]: 2025-12-02 11:00:09.077482067 +0000 UTC m=+0.194665143 container init 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: + sudo -E kolla_set_configs
Dec  2 06:00:09 np0005542249 podman[153826]: 2025-12-02 11:00:09.121714881 +0000 UTC m=+0.238897907 container start 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:00:09 np0005542249 edpm-start-podman-container[153826]: ovn_controller
Dec  2 06:00:09 np0005542249 systemd[1]: Created slice User Slice of UID 0.
Dec  2 06:00:09 np0005542249 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec  2 06:00:09 np0005542249 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec  2 06:00:09 np0005542249 systemd[1]: Starting User Manager for UID 0...
Dec  2 06:00:09 np0005542249 edpm-start-podman-container[153824]: Creating additional drop-in dependency for "ovn_controller" (5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998)
Dec  2 06:00:09 np0005542249 podman[153866]: 2025-12-02 11:00:09.221470217 +0000 UTC m=+0.085374766 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  2 06:00:09 np0005542249 systemd[1]: 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998-3c4e81cff09bf014.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 06:00:09 np0005542249 systemd[1]: 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998-3c4e81cff09bf014.service: Failed with result 'exit-code'.
Dec  2 06:00:09 np0005542249 systemd[1]: Reloading.
Dec  2 06:00:09 np0005542249 podman[153910]: 2025-12-02 11:00:09.287976709 +0000 UTC m=+0.059082960 container create c5940ef4087388ae145ff8f78f2e1925de4762c83c8006b7883392661dcd5b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_diffie, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:00:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:09 np0005542249 systemd[153900]: Queued start job for default target Main User Target.
Dec  2 06:00:09 np0005542249 systemd[153900]: Created slice User Application Slice.
Dec  2 06:00:09 np0005542249 systemd[153900]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec  2 06:00:09 np0005542249 systemd[153900]: Started Daily Cleanup of User's Temporary Directories.
Dec  2 06:00:09 np0005542249 systemd[153900]: Reached target Paths.
Dec  2 06:00:09 np0005542249 systemd[153900]: Reached target Timers.
Dec  2 06:00:09 np0005542249 systemd[153900]: Starting D-Bus User Message Bus Socket...
Dec  2 06:00:09 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:00:09 np0005542249 podman[153910]: 2025-12-02 11:00:09.265068265 +0000 UTC m=+0.036174526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:00:09 np0005542249 systemd[153900]: Starting Create User's Volatile Files and Directories...
Dec  2 06:00:09 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:00:09 np0005542249 systemd[153900]: Listening on D-Bus User Message Bus Socket.
Dec  2 06:00:09 np0005542249 systemd[153900]: Reached target Sockets.
Dec  2 06:00:09 np0005542249 systemd[153900]: Finished Create User's Volatile Files and Directories.
Dec  2 06:00:09 np0005542249 systemd[153900]: Reached target Basic System.
Dec  2 06:00:09 np0005542249 systemd[153900]: Reached target Main User Target.
Dec  2 06:00:09 np0005542249 systemd[153900]: Startup finished in 137ms.
Dec  2 06:00:09 np0005542249 systemd[1]: Started User Manager for UID 0.
Dec  2 06:00:09 np0005542249 systemd[1]: Started ovn_controller container.
Dec  2 06:00:09 np0005542249 systemd[1]: Started libpod-conmon-c5940ef4087388ae145ff8f78f2e1925de4762c83c8006b7883392661dcd5b6c.scope.
Dec  2 06:00:09 np0005542249 systemd[1]: Started Session c1 of User root.
Dec  2 06:00:09 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:00:09 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7581e7ec9eb9d3da529c5584dfffbfe09038efe9555c9c0eed53b361ea0335c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:00:09 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7581e7ec9eb9d3da529c5584dfffbfe09038efe9555c9c0eed53b361ea0335c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:00:09 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7581e7ec9eb9d3da529c5584dfffbfe09038efe9555c9c0eed53b361ea0335c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:00:09 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7581e7ec9eb9d3da529c5584dfffbfe09038efe9555c9c0eed53b361ea0335c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: INFO:__main__:Validating config file
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: INFO:__main__:Writing out command to execute
Dec  2 06:00:09 np0005542249 podman[153910]: 2025-12-02 11:00:09.636317796 +0000 UTC m=+0.407424057 container init c5940ef4087388ae145ff8f78f2e1925de4762c83c8006b7883392661dcd5b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_diffie, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  2 06:00:09 np0005542249 systemd[1]: session-c1.scope: Deactivated successfully.
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: ++ cat /run_command
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: + ARGS=
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: + sudo kolla_copy_cacerts
Dec  2 06:00:09 np0005542249 podman[153910]: 2025-12-02 11:00:09.646719269 +0000 UTC m=+0.417825510 container start c5940ef4087388ae145ff8f78f2e1925de4762c83c8006b7883392661dcd5b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:00:09 np0005542249 podman[153910]: 2025-12-02 11:00:09.650454621 +0000 UTC m=+0.421560882 container attach c5940ef4087388ae145ff8f78f2e1925de4762c83c8006b7883392661dcd5b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  2 06:00:09 np0005542249 systemd[1]: Started Session c2 of User root.
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: + [[ ! -n '' ]]
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: + . kolla_extend_start
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: + umask 0022
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec  2 06:00:09 np0005542249 systemd[1]: session-c2.scope: Deactivated successfully.
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec  2 06:00:09 np0005542249 NetworkManager[48987]: <info>  [1764673209.7026] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Dec  2 06:00:09 np0005542249 NetworkManager[48987]: <info>  [1764673209.7036] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 06:00:09 np0005542249 NetworkManager[48987]: <info>  [1764673209.7048] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec  2 06:00:09 np0005542249 NetworkManager[48987]: <info>  [1764673209.7053] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Dec  2 06:00:09 np0005542249 NetworkManager[48987]: <info>  [1764673209.7057] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  2 06:00:09 np0005542249 kernel: br-int: entered promiscuous mode
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00019|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00023|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00024|main|INFO|OVS feature set changed, force recompute.
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  2 06:00:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:09Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  2 06:00:09 np0005542249 NetworkManager[48987]: <info>  [1764673209.7272] manager: (ovn-ea8b86-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec  2 06:00:09 np0005542249 kernel: genev_sys_6081: entered promiscuous mode
Dec  2 06:00:09 np0005542249 NetworkManager[48987]: <info>  [1764673209.7447] device (genev_sys_6081): carrier: link connected
Dec  2 06:00:09 np0005542249 NetworkManager[48987]: <info>  [1764673209.7449] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Dec  2 06:00:09 np0005542249 systemd-udevd[154030]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:00:09 np0005542249 systemd-udevd[154032]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:00:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]: {
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:    "0": [
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:        {
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "devices": [
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "/dev/loop3"
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            ],
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "lv_name": "ceph_lv0",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "lv_size": "21470642176",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "name": "ceph_lv0",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "tags": {
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.cluster_name": "ceph",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.crush_device_class": "",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.encrypted": "0",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.osd_id": "0",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.type": "block",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.vdo": "0"
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            },
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "type": "block",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "vg_name": "ceph_vg0"
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:        }
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:    ],
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:    "1": [
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:        {
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "devices": [
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "/dev/loop4"
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            ],
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "lv_name": "ceph_lv1",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "lv_size": "21470642176",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "name": "ceph_lv1",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "tags": {
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.cluster_name": "ceph",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.crush_device_class": "",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.encrypted": "0",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.osd_id": "1",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.type": "block",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.vdo": "0"
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            },
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "type": "block",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "vg_name": "ceph_vg1"
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:        }
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:    ],
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:    "2": [
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:        {
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "devices": [
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "/dev/loop5"
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            ],
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "lv_name": "ceph_lv2",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "lv_size": "21470642176",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "name": "ceph_lv2",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "tags": {
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.cluster_name": "ceph",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.crush_device_class": "",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.encrypted": "0",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.osd_id": "2",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.type": "block",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:                "ceph.vdo": "0"
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            },
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "type": "block",
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:            "vg_name": "ceph_vg2"
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:        }
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]:    ]
Dec  2 06:00:10 np0005542249 friendly_diffie[153977]: }
Dec  2 06:00:10 np0005542249 systemd[1]: libpod-c5940ef4087388ae145ff8f78f2e1925de4762c83c8006b7883392661dcd5b6c.scope: Deactivated successfully.
Dec  2 06:00:10 np0005542249 podman[153910]: 2025-12-02 11:00:10.438721238 +0000 UTC m=+1.209827519 container died c5940ef4087388ae145ff8f78f2e1925de4762c83c8006b7883392661dcd5b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_diffie, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:00:10 np0005542249 python3.9[154151]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:00:10 np0005542249 ovs-vsctl[154168]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec  2 06:00:10 np0005542249 systemd[1]: var-lib-containers-storage-overlay-f7581e7ec9eb9d3da529c5584dfffbfe09038efe9555c9c0eed53b361ea0335c-merged.mount: Deactivated successfully.
Dec  2 06:00:10 np0005542249 podman[153910]: 2025-12-02 11:00:10.723585306 +0000 UTC m=+1.494691547 container remove c5940ef4087388ae145ff8f78f2e1925de4762c83c8006b7883392661dcd5b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_diffie, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:00:10 np0005542249 systemd[1]: libpod-conmon-c5940ef4087388ae145ff8f78f2e1925de4762c83c8006b7883392661dcd5b6c.scope: Deactivated successfully.
Dec  2 06:00:11 np0005542249 python3.9[154421]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:00:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:11 np0005542249 ovs-vsctl[154443]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec  2 06:00:11 np0005542249 podman[154487]: 2025-12-02 11:00:11.489729101 +0000 UTC m=+0.051416660 container create 37b6c7d7f36d017df8ee971c5dbaf18709751b75d28b2e8e75aa882be4019346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  2 06:00:11 np0005542249 systemd[1]: Started libpod-conmon-37b6c7d7f36d017df8ee971c5dbaf18709751b75d28b2e8e75aa882be4019346.scope.
Dec  2 06:00:11 np0005542249 podman[154487]: 2025-12-02 11:00:11.464613297 +0000 UTC m=+0.026300946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:00:11 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:00:11 np0005542249 podman[154487]: 2025-12-02 11:00:11.590155156 +0000 UTC m=+0.151842735 container init 37b6c7d7f36d017df8ee971c5dbaf18709751b75d28b2e8e75aa882be4019346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  2 06:00:11 np0005542249 podman[154487]: 2025-12-02 11:00:11.598935545 +0000 UTC m=+0.160623094 container start 37b6c7d7f36d017df8ee971c5dbaf18709751b75d28b2e8e75aa882be4019346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 06:00:11 np0005542249 podman[154487]: 2025-12-02 11:00:11.603019636 +0000 UTC m=+0.164707205 container attach 37b6c7d7f36d017df8ee971c5dbaf18709751b75d28b2e8e75aa882be4019346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  2 06:00:11 np0005542249 magical_mclaren[154504]: 167 167
Dec  2 06:00:11 np0005542249 systemd[1]: libpod-37b6c7d7f36d017df8ee971c5dbaf18709751b75d28b2e8e75aa882be4019346.scope: Deactivated successfully.
Dec  2 06:00:11 np0005542249 podman[154487]: 2025-12-02 11:00:11.606485231 +0000 UTC m=+0.168172780 container died 37b6c7d7f36d017df8ee971c5dbaf18709751b75d28b2e8e75aa882be4019346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:00:11 np0005542249 systemd[1]: var-lib-containers-storage-overlay-3cf187f9b71f14037f3f1fb6832b417e90b6b0f624da41f2f8bd8a9eff8e66d6-merged.mount: Deactivated successfully.
Dec  2 06:00:11 np0005542249 podman[154487]: 2025-12-02 11:00:11.658921489 +0000 UTC m=+0.220609078 container remove 37b6c7d7f36d017df8ee971c5dbaf18709751b75d28b2e8e75aa882be4019346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 06:00:11 np0005542249 systemd[1]: libpod-conmon-37b6c7d7f36d017df8ee971c5dbaf18709751b75d28b2e8e75aa882be4019346.scope: Deactivated successfully.
Dec  2 06:00:11 np0005542249 podman[154584]: 2025-12-02 11:00:11.883920607 +0000 UTC m=+0.064282402 container create 6c1e662c645f744e0f7ad8dafc8efef37717eb8f992de3c8828252e9bb86d833 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_germain, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:00:11 np0005542249 systemd[1]: Started libpod-conmon-6c1e662c645f744e0f7ad8dafc8efef37717eb8f992de3c8828252e9bb86d833.scope.
Dec  2 06:00:11 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:00:11 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b570a2e09e2a7aeb1723e97a5ac12dac3270ea5de7ee4b55b312eb9664141c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:00:11 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b570a2e09e2a7aeb1723e97a5ac12dac3270ea5de7ee4b55b312eb9664141c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:00:11 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b570a2e09e2a7aeb1723e97a5ac12dac3270ea5de7ee4b55b312eb9664141c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:00:11 np0005542249 podman[154584]: 2025-12-02 11:00:11.863716097 +0000 UTC m=+0.044077932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:00:11 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b570a2e09e2a7aeb1723e97a5ac12dac3270ea5de7ee4b55b312eb9664141c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:00:11 np0005542249 podman[154584]: 2025-12-02 11:00:11.969431216 +0000 UTC m=+0.149793051 container init 6c1e662c645f744e0f7ad8dafc8efef37717eb8f992de3c8828252e9bb86d833 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  2 06:00:11 np0005542249 podman[154584]: 2025-12-02 11:00:11.97803082 +0000 UTC m=+0.158392615 container start 6c1e662c645f744e0f7ad8dafc8efef37717eb8f992de3c8828252e9bb86d833 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_germain, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  2 06:00:11 np0005542249 podman[154584]: 2025-12-02 11:00:11.981722821 +0000 UTC m=+0.162084656 container attach 6c1e662c645f744e0f7ad8dafc8efef37717eb8f992de3c8828252e9bb86d833 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  2 06:00:12 np0005542249 python3.9[154681]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:00:12 np0005542249 ovs-vsctl[154682]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec  2 06:00:12 np0005542249 systemd[1]: session-46.scope: Deactivated successfully.
Dec  2 06:00:12 np0005542249 systemd[1]: session-46.scope: Consumed 1min 2.313s CPU time.
Dec  2 06:00:12 np0005542249 systemd-logind[787]: Session 46 logged out. Waiting for processes to exit.
Dec  2 06:00:12 np0005542249 systemd-logind[787]: Removed session 46.
Dec  2 06:00:13 np0005542249 frosty_germain[154625]: {
Dec  2 06:00:13 np0005542249 frosty_germain[154625]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:00:13 np0005542249 frosty_germain[154625]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:00:13 np0005542249 frosty_germain[154625]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:00:13 np0005542249 frosty_germain[154625]:        "osd_id": 0,
Dec  2 06:00:13 np0005542249 frosty_germain[154625]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:00:13 np0005542249 frosty_germain[154625]:        "type": "bluestore"
Dec  2 06:00:13 np0005542249 frosty_germain[154625]:    },
Dec  2 06:00:13 np0005542249 frosty_germain[154625]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:00:13 np0005542249 frosty_germain[154625]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:00:13 np0005542249 frosty_germain[154625]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:00:13 np0005542249 frosty_germain[154625]:        "osd_id": 2,
Dec  2 06:00:13 np0005542249 frosty_germain[154625]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:00:13 np0005542249 frosty_germain[154625]:        "type": "bluestore"
Dec  2 06:00:13 np0005542249 frosty_germain[154625]:    },
Dec  2 06:00:13 np0005542249 frosty_germain[154625]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:00:13 np0005542249 frosty_germain[154625]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:00:13 np0005542249 frosty_germain[154625]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:00:13 np0005542249 frosty_germain[154625]:        "osd_id": 1,
Dec  2 06:00:13 np0005542249 frosty_germain[154625]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:00:13 np0005542249 frosty_germain[154625]:        "type": "bluestore"
Dec  2 06:00:13 np0005542249 frosty_germain[154625]:    }
Dec  2 06:00:13 np0005542249 frosty_germain[154625]: }
Dec  2 06:00:13 np0005542249 systemd[1]: libpod-6c1e662c645f744e0f7ad8dafc8efef37717eb8f992de3c8828252e9bb86d833.scope: Deactivated successfully.
Dec  2 06:00:13 np0005542249 podman[154584]: 2025-12-02 11:00:13.041329118 +0000 UTC m=+1.221690913 container died 6c1e662c645f744e0f7ad8dafc8efef37717eb8f992de3c8828252e9bb86d833 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:00:13 np0005542249 systemd[1]: libpod-6c1e662c645f744e0f7ad8dafc8efef37717eb8f992de3c8828252e9bb86d833.scope: Consumed 1.064s CPU time.
Dec  2 06:00:13 np0005542249 systemd[1]: var-lib-containers-storage-overlay-5b570a2e09e2a7aeb1723e97a5ac12dac3270ea5de7ee4b55b312eb9664141c5-merged.mount: Deactivated successfully.
Dec  2 06:00:13 np0005542249 podman[154584]: 2025-12-02 11:00:13.09795914 +0000 UTC m=+1.278320935 container remove 6c1e662c645f744e0f7ad8dafc8efef37717eb8f992de3c8828252e9bb86d833 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_germain, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:00:13 np0005542249 systemd[1]: libpod-conmon-6c1e662c645f744e0f7ad8dafc8efef37717eb8f992de3c8828252e9bb86d833.scope: Deactivated successfully.
Dec  2 06:00:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:00:13 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:00:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:00:13 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:00:13 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 512c77e1-4772-499b-b1ea-8fda1bcf5580 does not exist
Dec  2 06:00:13 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev bee3177a-4ed2-4788-b68a-896277af427e does not exist
Dec  2 06:00:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:14 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:00:14 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:00:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:00:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:18 np0005542249 systemd-logind[787]: New session 48 of user zuul.
Dec  2 06:00:18 np0005542249 systemd[1]: Started Session 48 of User zuul.
Dec  2 06:00:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:19 np0005542249 python3.9[154951]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 06:00:19 np0005542249 systemd[1]: Stopping User Manager for UID 0...
Dec  2 06:00:19 np0005542249 systemd[153900]: Activating special unit Exit the Session...
Dec  2 06:00:19 np0005542249 systemd[153900]: Stopped target Main User Target.
Dec  2 06:00:19 np0005542249 systemd[153900]: Stopped target Basic System.
Dec  2 06:00:19 np0005542249 systemd[153900]: Stopped target Paths.
Dec  2 06:00:19 np0005542249 systemd[153900]: Stopped target Sockets.
Dec  2 06:00:19 np0005542249 systemd[153900]: Stopped target Timers.
Dec  2 06:00:19 np0005542249 systemd[153900]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  2 06:00:19 np0005542249 systemd[153900]: Closed D-Bus User Message Bus Socket.
Dec  2 06:00:19 np0005542249 systemd[153900]: Stopped Create User's Volatile Files and Directories.
Dec  2 06:00:19 np0005542249 systemd[153900]: Removed slice User Application Slice.
Dec  2 06:00:19 np0005542249 systemd[153900]: Reached target Shutdown.
Dec  2 06:00:19 np0005542249 systemd[153900]: Finished Exit the Session.
Dec  2 06:00:19 np0005542249 systemd[153900]: Reached target Exit the Session.
Dec  2 06:00:19 np0005542249 systemd[1]: user@0.service: Deactivated successfully.
Dec  2 06:00:19 np0005542249 systemd[1]: Stopped User Manager for UID 0.
Dec  2 06:00:19 np0005542249 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec  2 06:00:19 np0005542249 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec  2 06:00:19 np0005542249 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec  2 06:00:19 np0005542249 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec  2 06:00:19 np0005542249 systemd[1]: Removed slice User Slice of UID 0.
Dec  2 06:00:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:00:20 np0005542249 python3.9[155109]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:00:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:21 np0005542249 python3.9[155261]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:00:22 np0005542249 python3.9[155413]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:00:22 np0005542249 python3.9[155565]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:00:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:23 np0005542249 python3.9[155717]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:00:24 np0005542249 python3.9[155867]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 06:00:24 np0005542249 ceph-osd[88961]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  2 06:00:24 np0005542249 ceph-osd[88961]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 5411 writes, 23K keys, 5411 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5411 writes, 768 syncs, 7.05 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5411 writes, 23K keys, 5411 commit groups, 1.0 writes per commit group, ingest: 18.50 MB, 0.03 MB/s#012Interval WAL: 5411 writes, 768 syncs, 7.05 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55628344d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55628344d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec  2 06:00:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:00:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:25 np0005542249 python3.9[156019]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  2 06:00:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:00:26
Dec  2 06:00:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:00:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:00:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['vms', 'images', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'backups']
Dec  2 06:00:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:00:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:00:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:00:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:00:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:00:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:00:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:00:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:00:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:00:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:00:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:00:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:00:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:00:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:00:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:00:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:00:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:00:27 np0005542249 python3.9[156169]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:00:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:27 np0005542249 python3.9[156290]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764673226.3931382-86-279178208427906/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:00:28 np0005542249 python3.9[156441]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:00:29 np0005542249 python3.9[156562]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764673228.0554118-101-252167360424893/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:00:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:00:30 np0005542249 python3.9[156714]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 06:00:30 np0005542249 ceph-osd[89966]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  2 06:00:30 np0005542249 ceph-osd[89966]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 6607 writes, 27K keys, 6607 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6607 writes, 1156 syncs, 5.72 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6607 writes, 27K keys, 6607 commit groups, 1.0 writes per commit group, ingest: 19.41 MB, 0.03 MB/s#012Interval WAL: 6607 writes, 1156 syncs, 5.72 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556dc92b11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556dc92b11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Dec  2 06:00:31 np0005542249 python3.9[156798]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 06:00:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:33 np0005542249 python3.9[156951]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  2 06:00:34 np0005542249 python3.9[157104]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:00:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:00:35 np0005542249 python3.9[157225]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764673233.7221935-138-79710519844934/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:00:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:00:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:00:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:00:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:00:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:00:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:00:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:00:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:00:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:00:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:00:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:00:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:00:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:00:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:00:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:00:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:00:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:00:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:00:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:00:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:00:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:00:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:00:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:00:35 np0005542249 python3.9[157375]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:00:36 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  2 06:00:36 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 5504 writes, 23K keys, 5504 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 5504 writes, 791 syncs, 6.96 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5504 writes, 23K keys, 5504 commit groups, 1.0 writes per commit group, ingest: 18.51 MB, 0.03 MB/s
Interval WAL: 5504 writes, 791 syncs, 6.96 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c083b291f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c083b291f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec  2 06:00:36 np0005542249 python3.9[157496]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764673235.3403785-138-34516969480857/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:00:37 np0005542249 ceph-mgr[75372]: [devicehealth INFO root] Check health
Dec  2 06:00:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:37 np0005542249 python3.9[157646]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:00:38 np0005542249 python3.9[157767]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764673237.2034888-182-27140032997479/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:00:39 np0005542249 python3.9[157917]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:00:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:39 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:39Z|00025|memory|INFO|17152 kB peak resident set size after 29.8 seconds
Dec  2 06:00:39 np0005542249 ovn_controller[153849]: 2025-12-02T11:00:39Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Dec  2 06:00:39 np0005542249 podman[158012]: 2025-12-02 11:00:39.512989862 +0000 UTC m=+0.142608625 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  2 06:00:39 np0005542249 python3.9[158049]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764673238.4976614-182-46270525521003/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:00:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:00:40 np0005542249 python3.9[158212]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 06:00:41 np0005542249 python3.9[158366]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:00:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:42 np0005542249 python3.9[158518]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:00:42 np0005542249 python3.9[158596]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:00:43 np0005542249 python3.9[158748]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:00:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:43 np0005542249 python3.9[158826]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:00:44 np0005542249 python3.9[158978]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:00:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:00:45 np0005542249 python3.9[159130]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:00:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:45 np0005542249 python3.9[159208]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:00:46 np0005542249 python3.9[159360]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:00:46 np0005542249 python3.9[159438]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:00:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:47 np0005542249 python3.9[159590]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 06:00:47 np0005542249 systemd[1]: Reloading.
Dec  2 06:00:47 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:00:47 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:00:48 np0005542249 python3.9[159780]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:00:49 np0005542249 python3.9[159858]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:00:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:00:50 np0005542249 python3.9[160010]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:00:50 np0005542249 python3.9[160088]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:00:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:51 np0005542249 python3.9[160240]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 06:00:51 np0005542249 systemd[1]: Reloading.
Dec  2 06:00:51 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:00:51 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:00:51 np0005542249 systemd[1]: Starting Create netns directory...
Dec  2 06:00:51 np0005542249 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  2 06:00:51 np0005542249 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  2 06:00:51 np0005542249 systemd[1]: Finished Create netns directory.
Dec  2 06:00:52 np0005542249 python3.9[160434]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:00:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:53 np0005542249 python3.9[160586]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:00:53 np0005542249 python3.9[160709]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764673252.8725202-333-186073281809376/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:00:54 np0005542249 python3.9[160861]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:00:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:00:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:55 np0005542249 python3.9[161013]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:00:56 np0005542249 python3.9[161136]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764673255.10558-358-277638489112882/.source.json _original_basename=.r742qfh8 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:00:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:00:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:00:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:00:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:00:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:00:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:00:56 np0005542249 python3.9[161288]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:00:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:00:59 np0005542249 python3.9[161715]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec  2 06:01:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:01:00 np0005542249 python3.9[161867]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  2 06:01:01 np0005542249 python3.9[162019]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  2 06:01:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:02 np0005542249 python3[162213]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  2 06:01:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:01:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:01:11 np0005542249 podman[162292]: 2025-12-02 11:01:11.03088375 +0000 UTC m=+1.108914349 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  2 06:01:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:12 np0005542249 podman[162226]: 2025-12-02 11:01:12.746755075 +0000 UTC m=+9.720568427 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:01:12 np0005542249 podman[162377]: 2025-12-02 11:01:12.939463237 +0000 UTC m=+0.055943756 container create 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  2 06:01:12 np0005542249 podman[162377]: 2025-12-02 11:01:12.912430924 +0000 UTC m=+0.028911483 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:01:12 np0005542249 python3[162213]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:01:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:13 np0005542249 python3.9[162666]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 06:01:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec  2 06:01:13 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  2 06:01:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:01:13 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:01:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:01:13 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:01:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:01:13 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:01:13 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev b4ac20f1-c5b9-444e-aef7-1a01d79904ce does not exist
Dec  2 06:01:13 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev a9ddecb3-69a9-4f12-b6c2-dcaec28f3e32 does not exist
Dec  2 06:01:13 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 75885cd1-701d-4fd9-9a6d-a16ad07e5511 does not exist
Dec  2 06:01:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:01:13 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:01:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:01:14 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:01:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:01:14 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:01:14 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  2 06:01:14 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:01:14 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:01:14 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:01:14 np0005542249 python3.9[162950]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:01:14 np0005542249 podman[162997]: 2025-12-02 11:01:14.566090203 +0000 UTC m=+0.049042910 container create b9b8aa06a684f5b89493c9ea428e29d88a023c4ee3b82c9ca4f0d6ed67bb07d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bose, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:01:14 np0005542249 systemd[1]: Started libpod-conmon-b9b8aa06a684f5b89493c9ea428e29d88a023c4ee3b82c9ca4f0d6ed67bb07d4.scope.
Dec  2 06:01:14 np0005542249 podman[162997]: 2025-12-02 11:01:14.546202064 +0000 UTC m=+0.029154791 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:01:14 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:01:14 np0005542249 podman[162997]: 2025-12-02 11:01:14.711398731 +0000 UTC m=+0.194351458 container init b9b8aa06a684f5b89493c9ea428e29d88a023c4ee3b82c9ca4f0d6ed67bb07d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bose, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  2 06:01:14 np0005542249 podman[162997]: 2025-12-02 11:01:14.721513214 +0000 UTC m=+0.204465921 container start b9b8aa06a684f5b89493c9ea428e29d88a023c4ee3b82c9ca4f0d6ed67bb07d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bose, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  2 06:01:14 np0005542249 podman[162997]: 2025-12-02 11:01:14.724447494 +0000 UTC m=+0.207400201 container attach b9b8aa06a684f5b89493c9ea428e29d88a023c4ee3b82c9ca4f0d6ed67bb07d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bose, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  2 06:01:14 np0005542249 crazy_bose[163056]: 167 167
Dec  2 06:01:14 np0005542249 systemd[1]: libpod-b9b8aa06a684f5b89493c9ea428e29d88a023c4ee3b82c9ca4f0d6ed67bb07d4.scope: Deactivated successfully.
Dec  2 06:01:14 np0005542249 conmon[163056]: conmon b9b8aa06a684f5b89493 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b9b8aa06a684f5b89493c9ea428e29d88a023c4ee3b82c9ca4f0d6ed67bb07d4.scope/container/memory.events
Dec  2 06:01:14 np0005542249 podman[162997]: 2025-12-02 11:01:14.739084731 +0000 UTC m=+0.222037438 container died b9b8aa06a684f5b89493c9ea428e29d88a023c4ee3b82c9ca4f0d6ed67bb07d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  2 06:01:14 np0005542249 systemd[1]: var-lib-containers-storage-overlay-ea6febc9586c73e61531c28b7b6f841059c907f04c7c2ca6b3b8dd1c6f7e7bd3-merged.mount: Deactivated successfully.
Dec  2 06:01:14 np0005542249 podman[162997]: 2025-12-02 11:01:14.788975072 +0000 UTC m=+0.271927779 container remove b9b8aa06a684f5b89493c9ea428e29d88a023c4ee3b82c9ca4f0d6ed67bb07d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  2 06:01:14 np0005542249 systemd[1]: libpod-conmon-b9b8aa06a684f5b89493c9ea428e29d88a023c4ee3b82c9ca4f0d6ed67bb07d4.scope: Deactivated successfully.
Dec  2 06:01:14 np0005542249 python3.9[163087]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 06:01:14 np0005542249 podman[163107]: 2025-12-02 11:01:14.951530908 +0000 UTC m=+0.050405407 container create b8f9e84e756f006fdda128778a9378c4e6cdc3e14c856718da6facfb2a39b08c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  2 06:01:14 np0005542249 systemd[1]: Started libpod-conmon-b8f9e84e756f006fdda128778a9378c4e6cdc3e14c856718da6facfb2a39b08c.scope.
Dec  2 06:01:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:01:15 np0005542249 podman[163107]: 2025-12-02 11:01:14.925720818 +0000 UTC m=+0.024595367 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:01:15 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:01:15 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52dd01eedf8ea576e559cdcf6fe33a9d57fa8585101d222aa165da955f813b76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:01:15 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52dd01eedf8ea576e559cdcf6fe33a9d57fa8585101d222aa165da955f813b76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:01:15 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52dd01eedf8ea576e559cdcf6fe33a9d57fa8585101d222aa165da955f813b76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:01:15 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52dd01eedf8ea576e559cdcf6fe33a9d57fa8585101d222aa165da955f813b76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:01:15 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52dd01eedf8ea576e559cdcf6fe33a9d57fa8585101d222aa165da955f813b76/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:01:15 np0005542249 podman[163107]: 2025-12-02 11:01:15.043941162 +0000 UTC m=+0.142815661 container init b8f9e84e756f006fdda128778a9378c4e6cdc3e14c856718da6facfb2a39b08c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_feynman, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:01:15 np0005542249 podman[163107]: 2025-12-02 11:01:15.054466227 +0000 UTC m=+0.153340726 container start b8f9e84e756f006fdda128778a9378c4e6cdc3e14c856718da6facfb2a39b08c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Dec  2 06:01:15 np0005542249 podman[163107]: 2025-12-02 11:01:15.058432904 +0000 UTC m=+0.157307393 container attach b8f9e84e756f006fdda128778a9378c4e6cdc3e14c856718da6facfb2a39b08c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_feynman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:01:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:15 np0005542249 python3.9[163278]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764673274.997903-446-110532940943332/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:01:16 np0005542249 blissful_feynman[163146]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:01:16 np0005542249 blissful_feynman[163146]: --> relative data size: 1.0
Dec  2 06:01:16 np0005542249 blissful_feynman[163146]: --> All data devices are unavailable
Dec  2 06:01:16 np0005542249 systemd[1]: libpod-b8f9e84e756f006fdda128778a9378c4e6cdc3e14c856718da6facfb2a39b08c.scope: Deactivated successfully.
Dec  2 06:01:16 np0005542249 systemd[1]: libpod-b8f9e84e756f006fdda128778a9378c4e6cdc3e14c856718da6facfb2a39b08c.scope: Consumed 1.083s CPU time.
Dec  2 06:01:16 np0005542249 conmon[163146]: conmon b8f9e84e756f006fdda1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8f9e84e756f006fdda128778a9378c4e6cdc3e14c856718da6facfb2a39b08c.scope/container/memory.events
Dec  2 06:01:16 np0005542249 podman[163107]: 2025-12-02 11:01:16.216833463 +0000 UTC m=+1.315707962 container died b8f9e84e756f006fdda128778a9378c4e6cdc3e14c856718da6facfb2a39b08c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:01:16 np0005542249 python3.9[163363]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 06:01:16 np0005542249 systemd[1]: var-lib-containers-storage-overlay-52dd01eedf8ea576e559cdcf6fe33a9d57fa8585101d222aa165da955f813b76-merged.mount: Deactivated successfully.
Dec  2 06:01:16 np0005542249 systemd[1]: Reloading.
Dec  2 06:01:16 np0005542249 podman[163107]: 2025-12-02 11:01:16.285644857 +0000 UTC m=+1.384519396 container remove b8f9e84e756f006fdda128778a9378c4e6cdc3e14c856718da6facfb2a39b08c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_feynman, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:01:16 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:01:16 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:01:16 np0005542249 systemd[1]: libpod-conmon-b8f9e84e756f006fdda128778a9378c4e6cdc3e14c856718da6facfb2a39b08c.scope: Deactivated successfully.
Dec  2 06:01:17 np0005542249 podman[163640]: 2025-12-02 11:01:17.173897377 +0000 UTC m=+0.068395965 container create 1270ac0b2283a03f2922b85fe79dd2b4efe8d6cd77d1149cee30330279558a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  2 06:01:17 np0005542249 systemd[1]: Started libpod-conmon-1270ac0b2283a03f2922b85fe79dd2b4efe8d6cd77d1149cee30330279558a0e.scope.
Dec  2 06:01:17 np0005542249 podman[163640]: 2025-12-02 11:01:17.142145806 +0000 UTC m=+0.036644444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:01:17 np0005542249 python3.9[163600]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 06:01:17 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:01:17 np0005542249 podman[163640]: 2025-12-02 11:01:17.274108761 +0000 UTC m=+0.168607349 container init 1270ac0b2283a03f2922b85fe79dd2b4efe8d6cd77d1149cee30330279558a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  2 06:01:17 np0005542249 podman[163640]: 2025-12-02 11:01:17.283040224 +0000 UTC m=+0.177538782 container start 1270ac0b2283a03f2922b85fe79dd2b4efe8d6cd77d1149cee30330279558a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lederberg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  2 06:01:17 np0005542249 podman[163640]: 2025-12-02 11:01:17.286752834 +0000 UTC m=+0.181251392 container attach 1270ac0b2283a03f2922b85fe79dd2b4efe8d6cd77d1149cee30330279558a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lederberg, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:01:17 np0005542249 admiring_lederberg[163657]: 167 167
Dec  2 06:01:17 np0005542249 podman[163640]: 2025-12-02 11:01:17.290573388 +0000 UTC m=+0.185071956 container died 1270ac0b2283a03f2922b85fe79dd2b4efe8d6cd77d1149cee30330279558a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:01:17 np0005542249 systemd[1]: libpod-1270ac0b2283a03f2922b85fe79dd2b4efe8d6cd77d1149cee30330279558a0e.scope: Deactivated successfully.
Dec  2 06:01:17 np0005542249 systemd[1]: Reloading.
Dec  2 06:01:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:17 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:01:17 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:01:17 np0005542249 systemd[1]: var-lib-containers-storage-overlay-78a15a68b92bf61ba2cb73d5fb950e8e6ae69130b6e439548e0bd6fab80eaa0c-merged.mount: Deactivated successfully.
Dec  2 06:01:17 np0005542249 systemd[1]: Starting ovn_metadata_agent container...
Dec  2 06:01:17 np0005542249 podman[163640]: 2025-12-02 11:01:17.611990857 +0000 UTC m=+0.506489405 container remove 1270ac0b2283a03f2922b85fe79dd2b4efe8d6cd77d1149cee30330279558a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lederberg, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  2 06:01:17 np0005542249 systemd[1]: libpod-conmon-1270ac0b2283a03f2922b85fe79dd2b4efe8d6cd77d1149cee30330279558a0e.scope: Deactivated successfully.
Dec  2 06:01:17 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:01:17 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75b80620e97dc1d3fadb000494dd900f76bef2bb2173902422153c10d2f627c/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec  2 06:01:17 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75b80620e97dc1d3fadb000494dd900f76bef2bb2173902422153c10d2f627c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 06:01:17 np0005542249 systemd[1]: Started /usr/bin/podman healthcheck run 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193.
Dec  2 06:01:17 np0005542249 podman[163718]: 2025-12-02 11:01:17.798256485 +0000 UTC m=+0.153055569 container init 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  2 06:01:17 np0005542249 ovn_metadata_agent[163733]: + sudo -E kolla_set_configs
Dec  2 06:01:17 np0005542249 podman[163718]: 2025-12-02 11:01:17.833451608 +0000 UTC m=+0.188250682 container start 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:01:17 np0005542249 podman[163741]: 2025-12-02 11:01:17.838108334 +0000 UTC m=+0.075706892 container create 4505c70f6445e605c36cc8868f7781d7d83eb3e7cddddd99a30f4f081907fbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hopper, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  2 06:01:17 np0005542249 edpm-start-podman-container[163718]: ovn_metadata_agent
Dec  2 06:01:17 np0005542249 systemd[1]: Started libpod-conmon-4505c70f6445e605c36cc8868f7781d7d83eb3e7cddddd99a30f4f081907fbe1.scope.
Dec  2 06:01:17 np0005542249 podman[163741]: 2025-12-02 11:01:17.808208814 +0000 UTC m=+0.045807412 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:01:17 np0005542249 ovn_metadata_agent[163733]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  2 06:01:17 np0005542249 ovn_metadata_agent[163733]: INFO:__main__:Validating config file
Dec  2 06:01:17 np0005542249 ovn_metadata_agent[163733]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  2 06:01:17 np0005542249 ovn_metadata_agent[163733]: INFO:__main__:Copying service configuration files
Dec  2 06:01:17 np0005542249 ovn_metadata_agent[163733]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec  2 06:01:17 np0005542249 ovn_metadata_agent[163733]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec  2 06:01:17 np0005542249 ovn_metadata_agent[163733]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec  2 06:01:17 np0005542249 ovn_metadata_agent[163733]: INFO:__main__:Writing out command to execute
Dec  2 06:01:17 np0005542249 ovn_metadata_agent[163733]: INFO:__main__:Setting permission for /var/lib/neutron
Dec  2 06:01:17 np0005542249 ovn_metadata_agent[163733]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec  2 06:01:17 np0005542249 ovn_metadata_agent[163733]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec  2 06:01:17 np0005542249 ovn_metadata_agent[163733]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec  2 06:01:17 np0005542249 ovn_metadata_agent[163733]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec  2 06:01:17 np0005542249 ovn_metadata_agent[163733]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec  2 06:01:17 np0005542249 ovn_metadata_agent[163733]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec  2 06:01:17 np0005542249 ovn_metadata_agent[163733]: ++ cat /run_command
Dec  2 06:01:17 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:01:17 np0005542249 podman[163758]: 2025-12-02 11:01:17.930494197 +0000 UTC m=+0.079460904 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  2 06:01:17 np0005542249 ovn_metadata_agent[163733]: + CMD=neutron-ovn-metadata-agent
Dec  2 06:01:17 np0005542249 ovn_metadata_agent[163733]: + ARGS=
Dec  2 06:01:17 np0005542249 ovn_metadata_agent[163733]: + sudo kolla_copy_cacerts
Dec  2 06:01:17 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f0d4a7e94cb584b6e74fd0ef5f431b44d5e1228ec860600bff19694d6fdd2b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:01:17 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f0d4a7e94cb584b6e74fd0ef5f431b44d5e1228ec860600bff19694d6fdd2b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:01:17 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f0d4a7e94cb584b6e74fd0ef5f431b44d5e1228ec860600bff19694d6fdd2b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:01:17 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f0d4a7e94cb584b6e74fd0ef5f431b44d5e1228ec860600bff19694d6fdd2b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:01:17 np0005542249 ovn_metadata_agent[163733]: + [[ ! -n '' ]]
Dec  2 06:01:17 np0005542249 ovn_metadata_agent[163733]: + . kolla_extend_start
Dec  2 06:01:17 np0005542249 ovn_metadata_agent[163733]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec  2 06:01:17 np0005542249 ovn_metadata_agent[163733]: Running command: 'neutron-ovn-metadata-agent'
Dec  2 06:01:17 np0005542249 ovn_metadata_agent[163733]: + umask 0022
Dec  2 06:01:17 np0005542249 ovn_metadata_agent[163733]: + exec neutron-ovn-metadata-agent
Dec  2 06:01:17 np0005542249 podman[163741]: 2025-12-02 11:01:17.964434697 +0000 UTC m=+0.202033245 container init 4505c70f6445e605c36cc8868f7781d7d83eb3e7cddddd99a30f4f081907fbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hopper, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:01:17 np0005542249 podman[163741]: 2025-12-02 11:01:17.975056135 +0000 UTC m=+0.212654663 container start 4505c70f6445e605c36cc8868f7781d7d83eb3e7cddddd99a30f4f081907fbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hopper, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:01:17 np0005542249 podman[163741]: 2025-12-02 11:01:17.978694174 +0000 UTC m=+0.216292722 container attach 4505c70f6445e605c36cc8868f7781d7d83eb3e7cddddd99a30f4f081907fbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hopper, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  2 06:01:17 np0005542249 edpm-start-podman-container[163710]: Creating additional drop-in dependency for "ovn_metadata_agent" (301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193)
Dec  2 06:01:18 np0005542249 systemd[1]: Reloading.
Dec  2 06:01:18 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:01:18 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:01:18 np0005542249 systemd[1]: Started ovn_metadata_agent container.
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]: {
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:    "0": [
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:        {
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "devices": [
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "/dev/loop3"
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            ],
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "lv_name": "ceph_lv0",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "lv_size": "21470642176",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "name": "ceph_lv0",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "tags": {
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.cluster_name": "ceph",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.crush_device_class": "",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.encrypted": "0",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.osd_id": "0",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.type": "block",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.vdo": "0"
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            },
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "type": "block",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "vg_name": "ceph_vg0"
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:        }
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:    ],
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:    "1": [
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:        {
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "devices": [
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "/dev/loop4"
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            ],
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "lv_name": "ceph_lv1",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "lv_size": "21470642176",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "name": "ceph_lv1",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "tags": {
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.cluster_name": "ceph",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.crush_device_class": "",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.encrypted": "0",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.osd_id": "1",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.type": "block",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.vdo": "0"
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            },
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "type": "block",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "vg_name": "ceph_vg1"
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:        }
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:    ],
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:    "2": [
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:        {
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "devices": [
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "/dev/loop5"
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            ],
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "lv_name": "ceph_lv2",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "lv_size": "21470642176",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "name": "ceph_lv2",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "tags": {
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.cluster_name": "ceph",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.crush_device_class": "",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.encrypted": "0",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.osd_id": "2",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.type": "block",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:                "ceph.vdo": "0"
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            },
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "type": "block",
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:            "vg_name": "ceph_vg2"
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:        }
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]:    ]
Dec  2 06:01:18 np0005542249 ecstatic_hopper[163787]: }
Dec  2 06:01:18 np0005542249 systemd[1]: libpod-4505c70f6445e605c36cc8868f7781d7d83eb3e7cddddd99a30f4f081907fbe1.scope: Deactivated successfully.
Dec  2 06:01:18 np0005542249 conmon[163787]: conmon 4505c70f6445e605c36c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4505c70f6445e605c36cc8868f7781d7d83eb3e7cddddd99a30f4f081907fbe1.scope/container/memory.events
Dec  2 06:01:18 np0005542249 podman[163741]: 2025-12-02 11:01:18.781398554 +0000 UTC m=+1.018997092 container died 4505c70f6445e605c36cc8868f7781d7d83eb3e7cddddd99a30f4f081907fbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hopper, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  2 06:01:18 np0005542249 systemd[1]: session-48.scope: Deactivated successfully.
Dec  2 06:01:18 np0005542249 systemd[1]: session-48.scope: Consumed 58.780s CPU time.
Dec  2 06:01:18 np0005542249 systemd-logind[787]: Session 48 logged out. Waiting for processes to exit.
Dec  2 06:01:18 np0005542249 systemd-logind[787]: Removed session 48.
Dec  2 06:01:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.751 163757 INFO neutron.common.config [-] Logging enabled!#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.753 163757 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.753 163757 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.754 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.754 163757 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.754 163757 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.754 163757 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.755 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.755 163757 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.755 163757 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.755 163757 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.756 163757 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.756 163757 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.756 163757 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.756 163757 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.756 163757 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.757 163757 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.757 163757 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.757 163757 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.757 163757 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.757 163757 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.758 163757 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.758 163757 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.758 163757 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.758 163757 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.758 163757 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.759 163757 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.759 163757 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.759 163757 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.759 163757 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.759 163757 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.760 163757 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.760 163757 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.760 163757 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.760 163757 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.760 163757 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.761 163757 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.761 163757 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.761 163757 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.761 163757 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.761 163757 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.762 163757 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.762 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.762 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.762 163757 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.762 163757 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.763 163757 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.763 163757 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.763 163757 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.763 163757 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.763 163757 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.764 163757 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.764 163757 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.764 163757 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.764 163757 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.764 163757 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.764 163757 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.765 163757 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.765 163757 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.765 163757 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.765 163757 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.765 163757 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.766 163757 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.766 163757 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.766 163757 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.766 163757 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.767 163757 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.767 163757 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.767 163757 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.767 163757 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.767 163757 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.767 163757 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.768 163757 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.768 163757 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.768 163757 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.768 163757 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.769 163757 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.769 163757 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.769 163757 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.769 163757 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.769 163757 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.770 163757 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.770 163757 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.770 163757 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.770 163757 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.770 163757 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.770 163757 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.770 163757 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.771 163757 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.771 163757 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.771 163757 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.771 163757 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.771 163757 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.771 163757 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.772 163757 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.772 163757 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.772 163757 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.772 163757 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.772 163757 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.772 163757 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.772 163757 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.772 163757 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.773 163757 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.773 163757 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.773 163757 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.773 163757 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.773 163757 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.773 163757 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.774 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.774 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.774 163757 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.774 163757 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.774 163757 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.774 163757 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.775 163757 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.775 163757 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.775 163757 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.775 163757 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.775 163757 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.776 163757 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.776 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.776 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.776 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.776 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.776 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.777 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.777 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.777 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.777 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.777 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.777 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.777 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.778 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.778 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.778 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.778 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.778 163757 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.778 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.779 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.779 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.779 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.779 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.779 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.779 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.780 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.780 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.780 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.780 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.780 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.780 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.780 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.781 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.781 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.781 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.781 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.781 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.781 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.781 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.782 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.782 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.782 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.782 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.783 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.783 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.783 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.783 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.783 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.784 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.784 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.784 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.784 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.784 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.784 163757 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.785 163757 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.785 163757 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.785 163757 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.785 163757 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.785 163757 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.785 163757 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.786 163757 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.786 163757 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.786 163757 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.786 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.786 163757 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.786 163757 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.787 163757 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.787 163757 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.787 163757 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.787 163757 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.787 163757 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.787 163757 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.787 163757 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.788 163757 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.788 163757 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.788 163757 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.788 163757 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.788 163757 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.788 163757 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.789 163757 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.789 163757 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.789 163757 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.789 163757 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.789 163757 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.789 163757 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.790 163757 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.790 163757 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.790 163757 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.790 163757 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.790 163757 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.791 163757 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.791 163757 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.791 163757 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.791 163757 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.791 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.791 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.792 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.792 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.792 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.792 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.792 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.793 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.793 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.793 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.793 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.793 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.794 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.794 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.794 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.794 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.794 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.795 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.795 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.795 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.795 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.795 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.796 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.796 163757 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.796 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.796 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.796 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.797 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.797 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.797 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.797 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.797 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.797 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.798 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.798 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.798 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.798 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.798 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.798 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.799 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.799 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.799 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.799 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.799 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.799 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.800 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.800 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.800 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.800 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.801 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.801 163757 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.801 163757 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.801 163757 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.801 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.802 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.802 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.802 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.802 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.802 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.803 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.803 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.803 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.803 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.803 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.804 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.804 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.804 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.804 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.804 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.805 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.805 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.805 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.805 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.805 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.806 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.806 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.806 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.806 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.806 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.807 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.807 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.807 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.807 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.807 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.808 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.808 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.808 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.808 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.808 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.808 163757 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.808 163757 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.818 163757 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.819 163757 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.819 163757 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.819 163757 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.819 163757 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.834 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae (UUID: 4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.870 163757 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.871 163757 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec  2 06:01:19 np0005542249 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.871 163757 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.871 163757 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.875 163757 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.882 163757 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec  2 06:01:19 np0005542249 systemd[1]: var-lib-containers-storage-overlay-4f0d4a7e94cb584b6e74fd0ef5f431b44d5e1228ec860600bff19694d6fdd2b7-merged.mount: Deactivated successfully.
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.889 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], external_ids={}, name=4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae, nb_cfg_timestamp=1764673217723, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.892 163757 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f0f38e5aee0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.892 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.893 163757 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.893 163757 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.893 163757 INFO oslo_service.service [-] Starting 1 workers#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.898 163757 DEBUG oslo_service.service [-] Started child 163893 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.901 163893 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-2001862'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.902 163757 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp4q14yauh/privsep.sock']#033[00m
Dec  2 06:01:19 np0005542249 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 06:01:19 np0005542249 podman[163741]: 2025-12-02 11:01:19.917314924 +0000 UTC m=+2.154913442 container remove 4505c70f6445e605c36cc8868f7781d7d83eb3e7cddddd99a30f4f081907fbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.927 163893 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.928 163893 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.928 163893 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.931 163893 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.937 163893 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec  2 06:01:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:19.950 163893 INFO eventlet.wsgi.server [-] (163893) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Dec  2 06:01:19 np0005542249 systemd[1]: libpod-conmon-4505c70f6445e605c36cc8868f7781d7d83eb3e7cddddd99a30f4f081907fbe1.scope: Deactivated successfully.
Dec  2 06:01:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:01:20 np0005542249 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec  2 06:01:20 np0005542249 podman[164037]: 2025-12-02 11:01:20.547283984 +0000 UTC m=+0.054599581 container create 645497f619eca065f30831997a0b8ae24723a2fe2f34cc6d6a5b3336c83c4754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_meitner, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  2 06:01:20 np0005542249 systemd[1]: Started libpod-conmon-645497f619eca065f30831997a0b8ae24723a2fe2f34cc6d6a5b3336c83c4754.scope.
Dec  2 06:01:20 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:20.579 163757 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  2 06:01:20 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:20.580 163757 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp4q14yauh/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  2 06:01:20 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:20.447 164036 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  2 06:01:20 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:20.450 164036 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  2 06:01:20 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:20.452 164036 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Dec  2 06:01:20 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:20.452 164036 INFO oslo.privsep.daemon [-] privsep daemon running as pid 164036#033[00m
Dec  2 06:01:20 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:20.583 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[6fe33e05-325a-4e96-90ff-d46aff9ca136]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:01:20 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:01:20 np0005542249 podman[164037]: 2025-12-02 11:01:20.520282662 +0000 UTC m=+0.027598339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:01:20 np0005542249 podman[164037]: 2025-12-02 11:01:20.627925089 +0000 UTC m=+0.135240676 container init 645497f619eca065f30831997a0b8ae24723a2fe2f34cc6d6a5b3336c83c4754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_meitner, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:01:20 np0005542249 podman[164037]: 2025-12-02 11:01:20.636252945 +0000 UTC m=+0.143568532 container start 645497f619eca065f30831997a0b8ae24723a2fe2f34cc6d6a5b3336c83c4754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:01:20 np0005542249 podman[164037]: 2025-12-02 11:01:20.639751739 +0000 UTC m=+0.147067316 container attach 645497f619eca065f30831997a0b8ae24723a2fe2f34cc6d6a5b3336c83c4754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  2 06:01:20 np0005542249 dazzling_meitner[164051]: 167 167
Dec  2 06:01:20 np0005542249 systemd[1]: libpod-645497f619eca065f30831997a0b8ae24723a2fe2f34cc6d6a5b3336c83c4754.scope: Deactivated successfully.
Dec  2 06:01:20 np0005542249 podman[164037]: 2025-12-02 11:01:20.642110804 +0000 UTC m=+0.149426421 container died 645497f619eca065f30831997a0b8ae24723a2fe2f34cc6d6a5b3336c83c4754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_meitner, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:01:20 np0005542249 systemd[1]: var-lib-containers-storage-overlay-9d041a4c475ca0c62985c9446f623f365370cc87266bc80395309afe3618f98c-merged.mount: Deactivated successfully.
Dec  2 06:01:20 np0005542249 podman[164037]: 2025-12-02 11:01:20.689132987 +0000 UTC m=+0.196448614 container remove 645497f619eca065f30831997a0b8ae24723a2fe2f34cc6d6a5b3336c83c4754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Dec  2 06:01:20 np0005542249 systemd[1]: libpod-conmon-645497f619eca065f30831997a0b8ae24723a2fe2f34cc6d6a5b3336c83c4754.scope: Deactivated successfully.
Dec  2 06:01:20 np0005542249 podman[164079]: 2025-12-02 11:01:20.904876154 +0000 UTC m=+0.073098783 container create ee53138de583f3eb54bebbc7464c009cacdb8dd321a7049c07660f307cf2ec7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  2 06:01:20 np0005542249 systemd[1]: Started libpod-conmon-ee53138de583f3eb54bebbc7464c009cacdb8dd321a7049c07660f307cf2ec7d.scope.
Dec  2 06:01:20 np0005542249 podman[164079]: 2025-12-02 11:01:20.874379207 +0000 UTC m=+0.042601826 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:01:20 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:01:20 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4afe2d1871543bc2cf4bff5951ac7a1cebb325fc231a4f178c7eb31a8a80bc43/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:01:20 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4afe2d1871543bc2cf4bff5951ac7a1cebb325fc231a4f178c7eb31a8a80bc43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:01:20 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4afe2d1871543bc2cf4bff5951ac7a1cebb325fc231a4f178c7eb31a8a80bc43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:01:21 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4afe2d1871543bc2cf4bff5951ac7a1cebb325fc231a4f178c7eb31a8a80bc43/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:01:21 np0005542249 podman[164079]: 2025-12-02 11:01:21.010807754 +0000 UTC m=+0.179030383 container init ee53138de583f3eb54bebbc7464c009cacdb8dd321a7049c07660f307cf2ec7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_poitras, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  2 06:01:21 np0005542249 podman[164079]: 2025-12-02 11:01:21.018476542 +0000 UTC m=+0.186699131 container start ee53138de583f3eb54bebbc7464c009cacdb8dd321a7049c07660f307cf2ec7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_poitras, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:01:21 np0005542249 podman[164079]: 2025-12-02 11:01:21.022351796 +0000 UTC m=+0.190574485 container attach ee53138de583f3eb54bebbc7464c009cacdb8dd321a7049c07660f307cf2ec7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.168 164036 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.168 164036 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.168 164036 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:01:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.696 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[c60f1360-7d04-4b91-99d1-c449f251b5fe]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.699 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae, column=external_ids, values=({'neutron:ovn-metadata-id': '1e5fe02a-4fea-5f85-a64e-6b7e96691d06'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.711 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.717 163757 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.717 163757 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.717 163757 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.718 163757 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.718 163757 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.718 163757 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.718 163757 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.718 163757 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.719 163757 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.719 163757 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.719 163757 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.719 163757 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.719 163757 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.719 163757 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.719 163757 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.719 163757 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.720 163757 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.720 163757 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.720 163757 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.720 163757 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.720 163757 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.720 163757 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.720 163757 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.720 163757 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.720 163757 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.721 163757 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.721 163757 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.721 163757 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.721 163757 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.721 163757 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.721 163757 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.721 163757 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.721 163757 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.722 163757 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.722 163757 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.722 163757 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.722 163757 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.722 163757 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.722 163757 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.722 163757 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.722 163757 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.723 163757 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.723 163757 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.723 163757 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.723 163757 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.723 163757 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.723 163757 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.723 163757 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.723 163757 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.723 163757 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.723 163757 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.724 163757 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.724 163757 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.724 163757 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.724 163757 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.724 163757 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.724 163757 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.724 163757 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.724 163757 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.724 163757 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.724 163757 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.725 163757 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.725 163757 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.725 163757 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.725 163757 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.725 163757 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.725 163757 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.725 163757 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.725 163757 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.725 163757 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.726 163757 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.726 163757 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.726 163757 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.726 163757 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.726 163757 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.726 163757 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.726 163757 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.726 163757 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.726 163757 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.727 163757 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.727 163757 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.727 163757 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.727 163757 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.727 163757 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.727 163757 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.727 163757 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.727 163757 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.727 163757 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.727 163757 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.728 163757 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.728 163757 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.728 163757 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.728 163757 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.728 163757 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.728 163757 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.728 163757 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.728 163757 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.728 163757 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.728 163757 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.728 163757 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.729 163757 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.729 163757 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.729 163757 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.729 163757 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.729 163757 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.729 163757 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.729 163757 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.729 163757 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.729 163757 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.730 163757 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.730 163757 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.730 163757 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.730 163757 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.730 163757 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.730 163757 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.730 163757 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.730 163757 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.731 163757 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.731 163757 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.731 163757 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.731 163757 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.731 163757 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.731 163757 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.731 163757 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.731 163757 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.731 163757 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.732 163757 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.732 163757 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.732 163757 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.732 163757 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.732 163757 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.732 163757 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.732 163757 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.732 163757 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.732 163757 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.733 163757 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.733 163757 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.733 163757 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.733 163757 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.733 163757 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.733 163757 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.733 163757 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.733 163757 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.733 163757 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.734 163757 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.734 163757 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.734 163757 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.734 163757 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.734 163757 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.734 163757 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.734 163757 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.734 163757 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.734 163757 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.735 163757 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.735 163757 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.735 163757 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.735 163757 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.735 163757 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.735 163757 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.735 163757 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.735 163757 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.735 163757 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.735 163757 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.735 163757 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.736 163757 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.736 163757 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.736 163757 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.736 163757 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.736 163757 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.736 163757 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.736 163757 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.736 163757 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.736 163757 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.737 163757 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.737 163757 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.737 163757 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.737 163757 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.737 163757 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.737 163757 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.737 163757 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.737 163757 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.738 163757 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.738 163757 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.738 163757 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.738 163757 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.738 163757 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.738 163757 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.738 163757 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.738 163757 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.738 163757 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.739 163757 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.739 163757 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.739 163757 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.739 163757 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.739 163757 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.739 163757 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.739 163757 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.739 163757 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.739 163757 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.740 163757 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.740 163757 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.740 163757 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.740 163757 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.740 163757 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.740 163757 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.740 163757 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.740 163757 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.740 163757 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.740 163757 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.741 163757 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.741 163757 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.741 163757 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.741 163757 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.741 163757 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.741 163757 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.741 163757 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.741 163757 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.741 163757 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.741 163757 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.742 163757 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.742 163757 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.742 163757 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.742 163757 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.742 163757 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.742 163757 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.742 163757 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.742 163757 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.742 163757 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.742 163757 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.742 163757 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.743 163757 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.743 163757 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.743 163757 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.743 163757 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.743 163757 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.743 163757 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.743 163757 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.743 163757 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.743 163757 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.744 163757 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.744 163757 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.744 163757 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.744 163757 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.744 163757 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.744 163757 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.744 163757 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.744 163757 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.744 163757 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.744 163757 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.745 163757 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.745 163757 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.745 163757 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.745 163757 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.745 163757 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.745 163757 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.745 163757 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.745 163757 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.745 163757 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.745 163757 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.746 163757 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.746 163757 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.746 163757 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.746 163757 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.746 163757 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.746 163757 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.746 163757 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.746 163757 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.746 163757 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.747 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.747 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.747 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.747 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.747 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.747 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.747 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.747 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.747 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.748 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.748 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.748 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.748 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.748 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.748 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.748 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.748 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.749 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.749 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.749 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.749 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.749 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.749 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.749 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.749 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.749 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.749 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.750 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.750 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.750 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.750 163757 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.750 163757 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.750 163757 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.750 163757 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.750 163757 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:01:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:01:21.751 163757 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  2 06:01:21 np0005542249 busy_poitras[164096]: {
Dec  2 06:01:21 np0005542249 busy_poitras[164096]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:01:21 np0005542249 busy_poitras[164096]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:01:21 np0005542249 busy_poitras[164096]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:01:21 np0005542249 busy_poitras[164096]:        "osd_id": 0,
Dec  2 06:01:21 np0005542249 busy_poitras[164096]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:01:21 np0005542249 busy_poitras[164096]:        "type": "bluestore"
Dec  2 06:01:21 np0005542249 busy_poitras[164096]:    },
Dec  2 06:01:21 np0005542249 busy_poitras[164096]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:01:21 np0005542249 busy_poitras[164096]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:01:21 np0005542249 busy_poitras[164096]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:01:21 np0005542249 busy_poitras[164096]:        "osd_id": 2,
Dec  2 06:01:21 np0005542249 busy_poitras[164096]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:01:21 np0005542249 busy_poitras[164096]:        "type": "bluestore"
Dec  2 06:01:21 np0005542249 busy_poitras[164096]:    },
Dec  2 06:01:21 np0005542249 busy_poitras[164096]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:01:21 np0005542249 busy_poitras[164096]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:01:21 np0005542249 busy_poitras[164096]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:01:21 np0005542249 busy_poitras[164096]:        "osd_id": 1,
Dec  2 06:01:21 np0005542249 busy_poitras[164096]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:01:21 np0005542249 busy_poitras[164096]:        "type": "bluestore"
Dec  2 06:01:21 np0005542249 busy_poitras[164096]:    }
Dec  2 06:01:21 np0005542249 busy_poitras[164096]: }
Dec  2 06:01:21 np0005542249 systemd[1]: libpod-ee53138de583f3eb54bebbc7464c009cacdb8dd321a7049c07660f307cf2ec7d.scope: Deactivated successfully.
Dec  2 06:01:21 np0005542249 podman[164079]: 2025-12-02 11:01:21.921980834 +0000 UTC m=+1.090203423 container died ee53138de583f3eb54bebbc7464c009cacdb8dd321a7049c07660f307cf2ec7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_poitras, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  2 06:01:21 np0005542249 systemd[1]: var-lib-containers-storage-overlay-4afe2d1871543bc2cf4bff5951ac7a1cebb325fc231a4f178c7eb31a8a80bc43-merged.mount: Deactivated successfully.
Dec  2 06:01:21 np0005542249 podman[164079]: 2025-12-02 11:01:21.991787905 +0000 UTC m=+1.160010504 container remove ee53138de583f3eb54bebbc7464c009cacdb8dd321a7049c07660f307cf2ec7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_poitras, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  2 06:01:21 np0005542249 systemd[1]: libpod-conmon-ee53138de583f3eb54bebbc7464c009cacdb8dd321a7049c07660f307cf2ec7d.scope: Deactivated successfully.
Dec  2 06:01:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:01:22 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:01:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:01:22 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:01:22 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 76af4439-afbe-47bd-99ff-f3127a958b05 does not exist
Dec  2 06:01:22 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 92ba8f84-881c-4f5c-ba77-9af18460a426 does not exist
Dec  2 06:01:23 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:01:23 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:01:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:24 np0005542249 systemd-logind[787]: New session 49 of user zuul.
Dec  2 06:01:24 np0005542249 systemd[1]: Started Session 49 of User zuul.
Dec  2 06:01:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:01:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:25 np0005542249 python3.9[164345]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 06:01:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:01:26
Dec  2 06:01:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:01:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:01:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'backups', 'default.rgw.log', '.rgw.root', 'volumes', 'default.rgw.control', '.mgr', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta']
Dec  2 06:01:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:01:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:01:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:01:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:01:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:01:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:01:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:01:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:01:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:01:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:01:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:01:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:01:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:01:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:01:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:01:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:01:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:01:26 np0005542249 python3.9[164501]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:01:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:28 np0005542249 python3.9[164664]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 06:01:28 np0005542249 systemd[1]: Reloading.
Dec  2 06:01:28 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:01:28 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:01:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:29 np0005542249 python3.9[164848]: ansible-ansible.builtin.service_facts Invoked
Dec  2 06:01:29 np0005542249 network[164865]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  2 06:01:29 np0005542249 network[164866]: 'network-scripts' will be removed from distribution in near future.
Dec  2 06:01:29 np0005542249 network[164867]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  2 06:01:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:01:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:01:35 np0005542249 python3.9[165129]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 06:01:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:01:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:01:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:01:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:01:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:01:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:01:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:01:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:01:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:01:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:01:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:01:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:01:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:01:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:01:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:01:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:01:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:01:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:01:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:01:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:01:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:01:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:01:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:01:35 np0005542249 python3.9[165282]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 06:01:36 np0005542249 python3.9[165435]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 06:01:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:37 np0005542249 python3.9[165588]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 06:01:38 np0005542249 python3.9[165741]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 06:01:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:39 np0005542249 python3.9[165894]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 06:01:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:01:40 np0005542249 python3.9[166047]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 06:01:41 np0005542249 python3.9[166200]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:01:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:41 np0005542249 python3.9[166352]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:01:42 np0005542249 podman[166476]: 2025-12-02 11:01:42.575840563 +0000 UTC m=+0.126957450 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 06:01:42 np0005542249 python3.9[166524]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:01:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:43 np0005542249 python3.9[166683]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:01:44 np0005542249 python3.9[166835]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:01:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:01:45 np0005542249 python3.9[166987]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:01:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:45 np0005542249 python3.9[167139]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:01:46 np0005542249 python3.9[167291]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:01:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:47 np0005542249 python3.9[167443]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:01:48 np0005542249 podman[167567]: 2025-12-02 11:01:48.040853307 +0000 UTC m=+0.054662072 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  2 06:01:48 np0005542249 python3.9[167613]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:01:48 np0005542249 python3.9[167766]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:01:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:49 np0005542249 python3.9[167918]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:01:50 np0005542249 python3.9[168070]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:01:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:01:50 np0005542249 python3.9[168222]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:01:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:51 np0005542249 python3.9[168374]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:01:52 np0005542249 python3.9[168526]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  2 06:01:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:53 np0005542249 python3.9[168678]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 06:01:53 np0005542249 systemd[1]: Reloading.
Dec  2 06:01:53 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:01:53 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:01:54 np0005542249 python3.9[168865]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:01:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:01:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:55 np0005542249 python3.9[169018]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:01:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:01:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:01:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:01:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:01:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:01:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:01:56 np0005542249 python3.9[169171]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:01:57 np0005542249 python3.9[169324]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:01:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:01:57 np0005542249 python3.9[169477]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:01:58 np0005542249 python3.9[169630]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:01:59 np0005542249 python3.9[169783]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:01:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:02:00 np0005542249 python3.9[169936]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec  2 06:02:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:02:01 np0005542249 python3.9[170090]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  2 06:02:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:02:02 np0005542249 python3.9[170248]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  2 06:02:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Dec  2 06:02:03 np0005542249 python3.9[170408]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 06:02:04 np0005542249 python3.9[170492]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 06:02:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:02:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  2 06:02:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  2 06:02:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  2 06:02:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:02:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  2 06:02:13 np0005542249 podman[170505]: 2025-12-02 11:02:13.08407912 +0000 UTC m=+0.147499732 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  2 06:02:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  2 06:02:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:02:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Dec  2 06:02:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:02:19 np0005542249 podman[170693]: 2025-12-02 11:02:19.018814797 +0000 UTC m=+0.081738466 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec  2 06:02:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:02:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:02:19.811 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:02:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:02:19.812 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:02:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:02:19.812 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:02:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:02:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:02:23 np0005542249 podman[170893]: 2025-12-02 11:02:23.052713455 +0000 UTC m=+0.082172988 container exec cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:02:23 np0005542249 podman[170893]: 2025-12-02 11:02:23.161400888 +0000 UTC m=+0.190860421 container exec_died cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:02:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:02:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:02:23 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:02:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:02:23 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:02:24 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:02:24 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:02:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:02:24 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:02:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:02:24 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:02:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:02:24 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:02:24 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev f9d116f1-6dcf-4cf1-b569-a843ef267fcb does not exist
Dec  2 06:02:24 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 901ad2a2-cb8a-435f-91ac-d66ab5ac0bb5 does not exist
Dec  2 06:02:24 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev b3d1b376-dab8-4ffa-a838-242493b18136 does not exist
Dec  2 06:02:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:02:24 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:02:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:02:24 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:02:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:02:24 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:02:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:02:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:02:25 np0005542249 podman[171330]: 2025-12-02 11:02:25.530495244 +0000 UTC m=+0.052974851 container create 021cef80881e0ecf9470efcb69e1654be96ac453d43fde828f48684e6fd08c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bose, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  2 06:02:25 np0005542249 systemd[1]: Started libpod-conmon-021cef80881e0ecf9470efcb69e1654be96ac453d43fde828f48684e6fd08c84.scope.
Dec  2 06:02:25 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:02:25 np0005542249 podman[171330]: 2025-12-02 11:02:25.507110252 +0000 UTC m=+0.029589869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:02:25 np0005542249 podman[171330]: 2025-12-02 11:02:25.611426538 +0000 UTC m=+0.133906145 container init 021cef80881e0ecf9470efcb69e1654be96ac453d43fde828f48684e6fd08c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:02:25 np0005542249 podman[171330]: 2025-12-02 11:02:25.617947953 +0000 UTC m=+0.140427540 container start 021cef80881e0ecf9470efcb69e1654be96ac453d43fde828f48684e6fd08c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bose, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  2 06:02:25 np0005542249 podman[171330]: 2025-12-02 11:02:25.621023437 +0000 UTC m=+0.143503044 container attach 021cef80881e0ecf9470efcb69e1654be96ac453d43fde828f48684e6fd08c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bose, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  2 06:02:25 np0005542249 systemd[1]: libpod-021cef80881e0ecf9470efcb69e1654be96ac453d43fde828f48684e6fd08c84.scope: Deactivated successfully.
Dec  2 06:02:25 np0005542249 relaxed_bose[171345]: 167 167
Dec  2 06:02:25 np0005542249 podman[171330]: 2025-12-02 11:02:25.623251346 +0000 UTC m=+0.145730933 container died 021cef80881e0ecf9470efcb69e1654be96ac453d43fde828f48684e6fd08c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:02:25 np0005542249 conmon[171345]: conmon 021cef80881e0ecf9470 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-021cef80881e0ecf9470efcb69e1654be96ac453d43fde828f48684e6fd08c84.scope/container/memory.events
Dec  2 06:02:25 np0005542249 systemd[1]: var-lib-containers-storage-overlay-3c59ae4c98cd2c81ebc8826c2773205dfc29da49123b89cd1c6dbc19d09c8afd-merged.mount: Deactivated successfully.
Dec  2 06:02:25 np0005542249 podman[171330]: 2025-12-02 11:02:25.671365405 +0000 UTC m=+0.193845022 container remove 021cef80881e0ecf9470efcb69e1654be96ac453d43fde828f48684e6fd08c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:02:25 np0005542249 systemd[1]: libpod-conmon-021cef80881e0ecf9470efcb69e1654be96ac453d43fde828f48684e6fd08c84.scope: Deactivated successfully.
Dec  2 06:02:25 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:02:25 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:02:25 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:02:25 np0005542249 podman[171371]: 2025-12-02 11:02:25.873772066 +0000 UTC m=+0.044453161 container create f6e3f475bcc9a929106606f80f08d85d879895814d1722bc2c9dc5775233d7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cori, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  2 06:02:25 np0005542249 systemd[1]: Started libpod-conmon-f6e3f475bcc9a929106606f80f08d85d879895814d1722bc2c9dc5775233d7f5.scope.
Dec  2 06:02:25 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:02:25 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6c0dfb2ac0c6cba265c670c181f4ba7bea86f41b74c35865837f4afc32a1d21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:02:25 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6c0dfb2ac0c6cba265c670c181f4ba7bea86f41b74c35865837f4afc32a1d21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:02:25 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6c0dfb2ac0c6cba265c670c181f4ba7bea86f41b74c35865837f4afc32a1d21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:02:25 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6c0dfb2ac0c6cba265c670c181f4ba7bea86f41b74c35865837f4afc32a1d21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:02:25 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6c0dfb2ac0c6cba265c670c181f4ba7bea86f41b74c35865837f4afc32a1d21/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:02:25 np0005542249 podman[171371]: 2025-12-02 11:02:25.854348432 +0000 UTC m=+0.025029537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:02:25 np0005542249 podman[171371]: 2025-12-02 11:02:25.963494327 +0000 UTC m=+0.134175442 container init f6e3f475bcc9a929106606f80f08d85d879895814d1722bc2c9dc5775233d7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  2 06:02:25 np0005542249 podman[171371]: 2025-12-02 11:02:25.975664526 +0000 UTC m=+0.146345621 container start f6e3f475bcc9a929106606f80f08d85d879895814d1722bc2c9dc5775233d7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cori, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  2 06:02:25 np0005542249 podman[171371]: 2025-12-02 11:02:25.979112958 +0000 UTC m=+0.149794053 container attach f6e3f475bcc9a929106606f80f08d85d879895814d1722bc2c9dc5775233d7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cori, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  2 06:02:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:02:26
Dec  2 06:02:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:02:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:02:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', '.mgr', 'images', 'backups', 'vms']
Dec  2 06:02:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:02:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:02:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:02:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:02:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:02:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:02:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:02:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:02:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:02:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:02:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:02:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:02:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:02:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:02:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:02:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:02:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:02:27 np0005542249 blissful_cori[171388]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:02:27 np0005542249 blissful_cori[171388]: --> relative data size: 1.0
Dec  2 06:02:27 np0005542249 blissful_cori[171388]: --> All data devices are unavailable
Dec  2 06:02:27 np0005542249 systemd[1]: libpod-f6e3f475bcc9a929106606f80f08d85d879895814d1722bc2c9dc5775233d7f5.scope: Deactivated successfully.
Dec  2 06:02:27 np0005542249 systemd[1]: libpod-f6e3f475bcc9a929106606f80f08d85d879895814d1722bc2c9dc5775233d7f5.scope: Consumed 1.060s CPU time.
Dec  2 06:02:27 np0005542249 podman[171371]: 2025-12-02 11:02:27.109602523 +0000 UTC m=+1.280283678 container died f6e3f475bcc9a929106606f80f08d85d879895814d1722bc2c9dc5775233d7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:02:27 np0005542249 systemd[1]: var-lib-containers-storage-overlay-d6c0dfb2ac0c6cba265c670c181f4ba7bea86f41b74c35865837f4afc32a1d21-merged.mount: Deactivated successfully.
Dec  2 06:02:27 np0005542249 podman[171371]: 2025-12-02 11:02:27.17027645 +0000 UTC m=+1.340957545 container remove f6e3f475bcc9a929106606f80f08d85d879895814d1722bc2c9dc5775233d7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cori, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  2 06:02:27 np0005542249 systemd[1]: libpod-conmon-f6e3f475bcc9a929106606f80f08d85d879895814d1722bc2c9dc5775233d7f5.scope: Deactivated successfully.
Dec  2 06:02:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:02:27 np0005542249 podman[171570]: 2025-12-02 11:02:27.854360079 +0000 UTC m=+0.053858305 container create eb956e4d16bb848dec0f8ec14269de66d4d0ec8cf24af4ccaa70f51cba162806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_villani, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:02:27 np0005542249 systemd[1]: Started libpod-conmon-eb956e4d16bb848dec0f8ec14269de66d4d0ec8cf24af4ccaa70f51cba162806.scope.
Dec  2 06:02:27 np0005542249 podman[171570]: 2025-12-02 11:02:27.831614325 +0000 UTC m=+0.031112581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:02:27 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:02:27 np0005542249 podman[171570]: 2025-12-02 11:02:27.954980115 +0000 UTC m=+0.154478421 container init eb956e4d16bb848dec0f8ec14269de66d4d0ec8cf24af4ccaa70f51cba162806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_villani, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:02:27 np0005542249 podman[171570]: 2025-12-02 11:02:27.961315495 +0000 UTC m=+0.160813751 container start eb956e4d16bb848dec0f8ec14269de66d4d0ec8cf24af4ccaa70f51cba162806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_villani, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  2 06:02:27 np0005542249 podman[171570]: 2025-12-02 11:02:27.965168729 +0000 UTC m=+0.164667035 container attach eb956e4d16bb848dec0f8ec14269de66d4d0ec8cf24af4ccaa70f51cba162806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_villani, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec  2 06:02:27 np0005542249 laughing_villani[171586]: 167 167
Dec  2 06:02:27 np0005542249 systemd[1]: libpod-eb956e4d16bb848dec0f8ec14269de66d4d0ec8cf24af4ccaa70f51cba162806.scope: Deactivated successfully.
Dec  2 06:02:27 np0005542249 podman[171570]: 2025-12-02 11:02:27.969709781 +0000 UTC m=+0.169208087 container died eb956e4d16bb848dec0f8ec14269de66d4d0ec8cf24af4ccaa70f51cba162806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_villani, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  2 06:02:28 np0005542249 systemd[1]: var-lib-containers-storage-overlay-914852118bedccd0250e02770f5193f6d0241db93ea7c620c1c3595f94af1478-merged.mount: Deactivated successfully.
Dec  2 06:02:28 np0005542249 podman[171570]: 2025-12-02 11:02:28.022147216 +0000 UTC m=+0.221645472 container remove eb956e4d16bb848dec0f8ec14269de66d4d0ec8cf24af4ccaa70f51cba162806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_villani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:02:28 np0005542249 systemd[1]: libpod-conmon-eb956e4d16bb848dec0f8ec14269de66d4d0ec8cf24af4ccaa70f51cba162806.scope: Deactivated successfully.
Dec  2 06:02:28 np0005542249 podman[171609]: 2025-12-02 11:02:28.251953248 +0000 UTC m=+0.071047849 container create b7da4505c867f1c28e2ca80840e1c5153433378e439f20cdb27f71879a032282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:02:28 np0005542249 systemd[1]: Started libpod-conmon-b7da4505c867f1c28e2ca80840e1c5153433378e439f20cdb27f71879a032282.scope.
Dec  2 06:02:28 np0005542249 podman[171609]: 2025-12-02 11:02:28.222175364 +0000 UTC m=+0.041269995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:02:28 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:02:28 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ebd6a0385cf321d144c314a72d6a6737de0aee099ec81d1896e8580b762e3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:02:28 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ebd6a0385cf321d144c314a72d6a6737de0aee099ec81d1896e8580b762e3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:02:28 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ebd6a0385cf321d144c314a72d6a6737de0aee099ec81d1896e8580b762e3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:02:28 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ebd6a0385cf321d144c314a72d6a6737de0aee099ec81d1896e8580b762e3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:02:28 np0005542249 podman[171609]: 2025-12-02 11:02:28.353707493 +0000 UTC m=+0.172802144 container init b7da4505c867f1c28e2ca80840e1c5153433378e439f20cdb27f71879a032282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shirley, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:02:28 np0005542249 podman[171609]: 2025-12-02 11:02:28.369060437 +0000 UTC m=+0.188155028 container start b7da4505c867f1c28e2ca80840e1c5153433378e439f20cdb27f71879a032282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 06:02:28 np0005542249 podman[171609]: 2025-12-02 11:02:28.37398238 +0000 UTC m=+0.193076981 container attach b7da4505c867f1c28e2ca80840e1c5153433378e439f20cdb27f71879a032282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shirley, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:02:29 np0005542249 angry_shirley[171626]: {
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:    "0": [
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:        {
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "devices": [
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "/dev/loop3"
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            ],
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "lv_name": "ceph_lv0",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "lv_size": "21470642176",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "name": "ceph_lv0",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "tags": {
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.cluster_name": "ceph",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.crush_device_class": "",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.encrypted": "0",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.osd_id": "0",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.type": "block",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.vdo": "0"
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            },
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "type": "block",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "vg_name": "ceph_vg0"
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:        }
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:    ],
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:    "1": [
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:        {
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "devices": [
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "/dev/loop4"
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            ],
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "lv_name": "ceph_lv1",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "lv_size": "21470642176",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "name": "ceph_lv1",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "tags": {
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.cluster_name": "ceph",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.crush_device_class": "",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.encrypted": "0",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.osd_id": "1",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.type": "block",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.vdo": "0"
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            },
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "type": "block",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "vg_name": "ceph_vg1"
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:        }
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:    ],
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:    "2": [
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:        {
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "devices": [
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "/dev/loop5"
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            ],
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "lv_name": "ceph_lv2",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "lv_size": "21470642176",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "name": "ceph_lv2",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "tags": {
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.cluster_name": "ceph",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.crush_device_class": "",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.encrypted": "0",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.osd_id": "2",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.type": "block",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:                "ceph.vdo": "0"
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            },
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "type": "block",
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:            "vg_name": "ceph_vg2"
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:        }
Dec  2 06:02:29 np0005542249 angry_shirley[171626]:    ]
Dec  2 06:02:29 np0005542249 angry_shirley[171626]: }
Dec  2 06:02:29 np0005542249 systemd[1]: libpod-b7da4505c867f1c28e2ca80840e1c5153433378e439f20cdb27f71879a032282.scope: Deactivated successfully.
Dec  2 06:02:29 np0005542249 podman[171609]: 2025-12-02 11:02:29.197424359 +0000 UTC m=+1.016518940 container died b7da4505c867f1c28e2ca80840e1c5153433378e439f20cdb27f71879a032282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:02:29 np0005542249 systemd[1]: var-lib-containers-storage-overlay-c1ebd6a0385cf321d144c314a72d6a6737de0aee099ec81d1896e8580b762e3c-merged.mount: Deactivated successfully.
Dec  2 06:02:29 np0005542249 podman[171609]: 2025-12-02 11:02:29.353629384 +0000 UTC m=+1.172723955 container remove b7da4505c867f1c28e2ca80840e1c5153433378e439f20cdb27f71879a032282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shirley, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Dec  2 06:02:29 np0005542249 systemd[1]: libpod-conmon-b7da4505c867f1c28e2ca80840e1c5153433378e439f20cdb27f71879a032282.scope: Deactivated successfully.
Dec  2 06:02:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:02:30 np0005542249 podman[171786]: 2025-12-02 11:02:30.087029373 +0000 UTC m=+0.054264145 container create 08919e72ab70c397e8fb8f32f9c912bcafff2af3aff4bc436360e7aab64db999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:02:30 np0005542249 systemd[1]: Started libpod-conmon-08919e72ab70c397e8fb8f32f9c912bcafff2af3aff4bc436360e7aab64db999.scope.
Dec  2 06:02:30 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:02:30 np0005542249 podman[171786]: 2025-12-02 11:02:30.063876469 +0000 UTC m=+0.031111321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:02:30 np0005542249 podman[171786]: 2025-12-02 11:02:30.174062832 +0000 UTC m=+0.141297624 container init 08919e72ab70c397e8fb8f32f9c912bcafff2af3aff4bc436360e7aab64db999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:02:30 np0005542249 podman[171786]: 2025-12-02 11:02:30.185072109 +0000 UTC m=+0.152306891 container start 08919e72ab70c397e8fb8f32f9c912bcafff2af3aff4bc436360e7aab64db999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_meitner, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:02:30 np0005542249 podman[171786]: 2025-12-02 11:02:30.189258632 +0000 UTC m=+0.156493424 container attach 08919e72ab70c397e8fb8f32f9c912bcafff2af3aff4bc436360e7aab64db999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_meitner, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  2 06:02:30 np0005542249 naughty_meitner[171803]: 167 167
Dec  2 06:02:30 np0005542249 systemd[1]: libpod-08919e72ab70c397e8fb8f32f9c912bcafff2af3aff4bc436360e7aab64db999.scope: Deactivated successfully.
Dec  2 06:02:30 np0005542249 podman[171786]: 2025-12-02 11:02:30.193438155 +0000 UTC m=+0.160672927 container died 08919e72ab70c397e8fb8f32f9c912bcafff2af3aff4bc436360e7aab64db999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  2 06:02:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:02:30 np0005542249 systemd[1]: var-lib-containers-storage-overlay-850e37efe0c947eee9bf4204c8fb279a122d4ef54fcf99ec5a798874abc4f876-merged.mount: Deactivated successfully.
Dec  2 06:02:30 np0005542249 podman[171786]: 2025-12-02 11:02:30.23215442 +0000 UTC m=+0.199389192 container remove 08919e72ab70c397e8fb8f32f9c912bcafff2af3aff4bc436360e7aab64db999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  2 06:02:30 np0005542249 systemd[1]: libpod-conmon-08919e72ab70c397e8fb8f32f9c912bcafff2af3aff4bc436360e7aab64db999.scope: Deactivated successfully.
Dec  2 06:02:30 np0005542249 podman[171827]: 2025-12-02 11:02:30.430151412 +0000 UTC m=+0.054368908 container create 7ab68a2e938e6b3993494b0726ba7df827f4c47ce27ec521f929a47821e0923f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_black, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:02:30 np0005542249 systemd[1]: Started libpod-conmon-7ab68a2e938e6b3993494b0726ba7df827f4c47ce27ec521f929a47821e0923f.scope.
Dec  2 06:02:30 np0005542249 podman[171827]: 2025-12-02 11:02:30.405468576 +0000 UTC m=+0.029686142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:02:30 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:02:30 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50e984d7fcb1bfc745d48613b96c57cf3b5261b30249225930fc4ec05fff3bda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:02:30 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50e984d7fcb1bfc745d48613b96c57cf3b5261b30249225930fc4ec05fff3bda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:02:30 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50e984d7fcb1bfc745d48613b96c57cf3b5261b30249225930fc4ec05fff3bda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:02:30 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50e984d7fcb1bfc745d48613b96c57cf3b5261b30249225930fc4ec05fff3bda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:02:30 np0005542249 podman[171827]: 2025-12-02 11:02:30.545554886 +0000 UTC m=+0.169772462 container init 7ab68a2e938e6b3993494b0726ba7df827f4c47ce27ec521f929a47821e0923f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  2 06:02:30 np0005542249 podman[171827]: 2025-12-02 11:02:30.558905496 +0000 UTC m=+0.183122972 container start 7ab68a2e938e6b3993494b0726ba7df827f4c47ce27ec521f929a47821e0923f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:02:30 np0005542249 podman[171827]: 2025-12-02 11:02:30.562933525 +0000 UTC m=+0.187151041 container attach 7ab68a2e938e6b3993494b0726ba7df827f4c47ce27ec521f929a47821e0923f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  2 06:02:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:02:31 np0005542249 funny_black[171843]: {
Dec  2 06:02:31 np0005542249 funny_black[171843]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:02:31 np0005542249 funny_black[171843]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:02:31 np0005542249 funny_black[171843]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:02:31 np0005542249 funny_black[171843]:        "osd_id": 0,
Dec  2 06:02:31 np0005542249 funny_black[171843]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:02:31 np0005542249 funny_black[171843]:        "type": "bluestore"
Dec  2 06:02:31 np0005542249 funny_black[171843]:    },
Dec  2 06:02:31 np0005542249 funny_black[171843]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:02:31 np0005542249 funny_black[171843]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:02:31 np0005542249 funny_black[171843]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:02:31 np0005542249 funny_black[171843]:        "osd_id": 2,
Dec  2 06:02:31 np0005542249 funny_black[171843]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:02:31 np0005542249 funny_black[171843]:        "type": "bluestore"
Dec  2 06:02:31 np0005542249 funny_black[171843]:    },
Dec  2 06:02:31 np0005542249 funny_black[171843]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:02:31 np0005542249 funny_black[171843]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:02:31 np0005542249 funny_black[171843]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:02:31 np0005542249 funny_black[171843]:        "osd_id": 1,
Dec  2 06:02:31 np0005542249 funny_black[171843]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:02:31 np0005542249 funny_black[171843]:        "type": "bluestore"
Dec  2 06:02:31 np0005542249 funny_black[171843]:    }
Dec  2 06:02:31 np0005542249 funny_black[171843]: }
Dec  2 06:02:31 np0005542249 systemd[1]: libpod-7ab68a2e938e6b3993494b0726ba7df827f4c47ce27ec521f929a47821e0923f.scope: Deactivated successfully.
Dec  2 06:02:31 np0005542249 systemd[1]: libpod-7ab68a2e938e6b3993494b0726ba7df827f4c47ce27ec521f929a47821e0923f.scope: Consumed 1.106s CPU time.
Dec  2 06:02:31 np0005542249 podman[171876]: 2025-12-02 11:02:31.713431949 +0000 UTC m=+0.035832308 container died 7ab68a2e938e6b3993494b0726ba7df827f4c47ce27ec521f929a47821e0923f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  2 06:02:31 np0005542249 systemd[1]: var-lib-containers-storage-overlay-50e984d7fcb1bfc745d48613b96c57cf3b5261b30249225930fc4ec05fff3bda-merged.mount: Deactivated successfully.
Dec  2 06:02:31 np0005542249 podman[171876]: 2025-12-02 11:02:31.766279616 +0000 UTC m=+0.088679945 container remove 7ab68a2e938e6b3993494b0726ba7df827f4c47ce27ec521f929a47821e0923f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_black, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  2 06:02:31 np0005542249 systemd[1]: libpod-conmon-7ab68a2e938e6b3993494b0726ba7df827f4c47ce27ec521f929a47821e0923f.scope: Deactivated successfully.
Dec  2 06:02:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:02:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:02:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:02:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:02:31 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev e905a5a8-9695-4cd7-b4c5-362e32c8bed8 does not exist
Dec  2 06:02:31 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 202ac07e-43ad-4753-9bd9-869126c3dfc7 does not exist
Dec  2 06:02:32 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:02:32 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:02:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:02:34 np0005542249 kernel: SELinux:  Converting 2768 SID table entries...
Dec  2 06:02:34 np0005542249 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 06:02:34 np0005542249 kernel: SELinux:  policy capability open_perms=1
Dec  2 06:02:34 np0005542249 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 06:02:34 np0005542249 kernel: SELinux:  policy capability always_check_network=0
Dec  2 06:02:34 np0005542249 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 06:02:34 np0005542249 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 06:02:34 np0005542249 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 06:02:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:02:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:02:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:02:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:02:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:02:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:02:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:02:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:02:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:02:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:02:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:02:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:02:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:02:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:02:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:02:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:02:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:02:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:02:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:02:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:02:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:02:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:02:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:02:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:02:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:02:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:02:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:02:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:02:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:02:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:02:43 np0005542249 kernel: SELinux:  Converting 2768 SID table entries...
Dec  2 06:02:43 np0005542249 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Dec  2 06:02:43 np0005542249 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 06:02:43 np0005542249 kernel: SELinux:  policy capability open_perms=1
Dec  2 06:02:43 np0005542249 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 06:02:43 np0005542249 kernel: SELinux:  policy capability always_check_network=0
Dec  2 06:02:43 np0005542249 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 06:02:43 np0005542249 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 06:02:43 np0005542249 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 06:02:44 np0005542249 podman[171955]: 2025-12-02 11:02:44.051243813 +0000 UTC m=+0.123296188 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller)
Dec  2 06:02:44 np0005542249 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec  2 06:02:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:02:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:02:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:02:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:02:50 np0005542249 podman[171982]: 2025-12-02 11:02:50.094853349 +0000 UTC m=+0.061672235 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec  2 06:02:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:02:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:02:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:02:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:02:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:02:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:02:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:02:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:02:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:02:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:02:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:02:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:02:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:03:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:03:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:03:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:15 np0005542249 podman[181007]: 2025-12-02 11:03:15.05650118 +0000 UTC m=+0.128738246 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec  2 06:03:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:03:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:03:19.813 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:03:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:03:19.813 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:03:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:03:19.813 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:03:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:03:21 np0005542249 podman[184156]: 2025-12-02 11:03:21.01346703 +0000 UTC m=+0.082082844 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  2 06:03:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:03:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:03:26
Dec  2 06:03:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:03:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:03:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'images', 'backups', 'volumes', '.rgw.root']
Dec  2 06:03:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:03:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:03:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:03:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:03:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:03:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:03:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:03:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:03:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:03:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:03:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:03:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:03:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:03:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:03:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:03:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:03:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:03:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:03:28.497271) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673408497320, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2036, "num_deletes": 251, "total_data_size": 3509036, "memory_usage": 3556192, "flush_reason": "Manual Compaction"}
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673408524871, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3433916, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9777, "largest_seqno": 11812, "table_properties": {"data_size": 3424659, "index_size": 5879, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17719, "raw_average_key_size": 19, "raw_value_size": 3406354, "raw_average_value_size": 3735, "num_data_blocks": 267, "num_entries": 912, "num_filter_entries": 912, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764673176, "oldest_key_time": 1764673176, "file_creation_time": 1764673408, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 27866 microseconds, and 13642 cpu microseconds.
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:03:28.525136) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3433916 bytes OK
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:03:28.525225) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:03:28.526691) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:03:28.526713) EVENT_LOG_v1 {"time_micros": 1764673408526705, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:03:28.526737) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3500557, prev total WAL file size 3500557, number of live WAL files 2.
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:03:28.529105) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3353KB)], [26(6211KB)]
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673408529216, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9794311, "oldest_snapshot_seqno": -1}
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3732 keys, 8038492 bytes, temperature: kUnknown
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673408597306, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8038492, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8009716, "index_size": 18362, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9349, "raw_key_size": 89746, "raw_average_key_size": 24, "raw_value_size": 7938474, "raw_average_value_size": 2127, "num_data_blocks": 794, "num_entries": 3732, "num_filter_entries": 3732, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672515, "oldest_key_time": 0, "file_creation_time": 1764673408, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:03:28.597721) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8038492 bytes
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:03:28.599588) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.6 rd, 117.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 6.1 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(5.2) write-amplify(2.3) OK, records in: 4246, records dropped: 514 output_compression: NoCompression
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:03:28.599623) EVENT_LOG_v1 {"time_micros": 1764673408599606, "job": 10, "event": "compaction_finished", "compaction_time_micros": 68204, "compaction_time_cpu_micros": 37304, "output_level": 6, "num_output_files": 1, "total_output_size": 8038492, "num_input_records": 4246, "num_output_records": 3732, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673408600743, "job": 10, "event": "table_file_deletion", "file_number": 28}
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673408602754, "job": 10, "event": "table_file_deletion", "file_number": 26}
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:03:28.528868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:03:28.602812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:03:28.602820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:03:28.602825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:03:28.602829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:03:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:03:28.602833) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:03:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:03:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:03:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:03:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:03:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:03:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:03:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:03:32 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 28120c3a-8a17-49df-bf67-229d2ad1e302 does not exist
Dec  2 06:03:32 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev e84bb2b8-e36d-4dfe-a642-6d20fc14fc22 does not exist
Dec  2 06:03:32 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 6f66dd68-110d-4d8d-b882-ff87408a658a does not exist
Dec  2 06:03:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:03:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:03:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:03:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:03:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:03:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:03:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:33 np0005542249 podman[189122]: 2025-12-02 11:03:33.518974741 +0000 UTC m=+0.062196225 container create 776f92c964d7d447b9a92051dc8312fdb28a77bdae1b4c35b443e2729993188e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_almeida, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:03:33 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:03:33 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:03:33 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:03:33 np0005542249 systemd[1]: Started libpod-conmon-776f92c964d7d447b9a92051dc8312fdb28a77bdae1b4c35b443e2729993188e.scope.
Dec  2 06:03:33 np0005542249 podman[189122]: 2025-12-02 11:03:33.497025636 +0000 UTC m=+0.040247120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:03:33 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:03:33 np0005542249 podman[189122]: 2025-12-02 11:03:33.633905102 +0000 UTC m=+0.177126646 container init 776f92c964d7d447b9a92051dc8312fdb28a77bdae1b4c35b443e2729993188e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_almeida, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:03:33 np0005542249 podman[189122]: 2025-12-02 11:03:33.649465213 +0000 UTC m=+0.192686677 container start 776f92c964d7d447b9a92051dc8312fdb28a77bdae1b4c35b443e2729993188e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:03:33 np0005542249 podman[189122]: 2025-12-02 11:03:33.654220023 +0000 UTC m=+0.197441527 container attach 776f92c964d7d447b9a92051dc8312fdb28a77bdae1b4c35b443e2729993188e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:03:33 np0005542249 practical_almeida[189139]: 167 167
Dec  2 06:03:33 np0005542249 systemd[1]: libpod-776f92c964d7d447b9a92051dc8312fdb28a77bdae1b4c35b443e2729993188e.scope: Deactivated successfully.
Dec  2 06:03:33 np0005542249 podman[189122]: 2025-12-02 11:03:33.660067941 +0000 UTC m=+0.203289425 container died 776f92c964d7d447b9a92051dc8312fdb28a77bdae1b4c35b443e2729993188e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:03:33 np0005542249 systemd[1]: var-lib-containers-storage-overlay-ad2498dc0bd9a92f56268a6c8118f7325875c07a20bc3de696ab32ca35ed11bc-merged.mount: Deactivated successfully.
Dec  2 06:03:33 np0005542249 podman[189122]: 2025-12-02 11:03:33.718464122 +0000 UTC m=+0.261685616 container remove 776f92c964d7d447b9a92051dc8312fdb28a77bdae1b4c35b443e2729993188e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_almeida, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  2 06:03:33 np0005542249 systemd[1]: libpod-conmon-776f92c964d7d447b9a92051dc8312fdb28a77bdae1b4c35b443e2729993188e.scope: Deactivated successfully.
Dec  2 06:03:33 np0005542249 podman[189163]: 2025-12-02 11:03:33.917711896 +0000 UTC m=+0.056758708 container create 9ef2da7dc89a0d9386f8d18ce6a3ca874af12a97aff26c9c103d5e951752209d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  2 06:03:33 np0005542249 systemd[1]: Started libpod-conmon-9ef2da7dc89a0d9386f8d18ce6a3ca874af12a97aff26c9c103d5e951752209d.scope.
Dec  2 06:03:33 np0005542249 podman[189163]: 2025-12-02 11:03:33.899270037 +0000 UTC m=+0.038316869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:03:34 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:03:34 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514a5473303f522191c6c4c4896cee99aa0251b039252e6bb37692a4b74abd93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:03:34 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514a5473303f522191c6c4c4896cee99aa0251b039252e6bb37692a4b74abd93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:03:34 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514a5473303f522191c6c4c4896cee99aa0251b039252e6bb37692a4b74abd93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:03:34 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514a5473303f522191c6c4c4896cee99aa0251b039252e6bb37692a4b74abd93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:03:34 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514a5473303f522191c6c4c4896cee99aa0251b039252e6bb37692a4b74abd93/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:03:34 np0005542249 podman[189163]: 2025-12-02 11:03:34.04307268 +0000 UTC m=+0.182119512 container init 9ef2da7dc89a0d9386f8d18ce6a3ca874af12a97aff26c9c103d5e951752209d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:03:34 np0005542249 podman[189163]: 2025-12-02 11:03:34.055118226 +0000 UTC m=+0.194165038 container start 9ef2da7dc89a0d9386f8d18ce6a3ca874af12a97aff26c9c103d5e951752209d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_khorana, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  2 06:03:34 np0005542249 podman[189163]: 2025-12-02 11:03:34.059159326 +0000 UTC m=+0.198206148 container attach 9ef2da7dc89a0d9386f8d18ce6a3ca874af12a97aff26c9c103d5e951752209d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  2 06:03:35 np0005542249 nifty_khorana[189180]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:03:35 np0005542249 nifty_khorana[189180]: --> relative data size: 1.0
Dec  2 06:03:35 np0005542249 nifty_khorana[189180]: --> All data devices are unavailable
Dec  2 06:03:35 np0005542249 systemd[1]: libpod-9ef2da7dc89a0d9386f8d18ce6a3ca874af12a97aff26c9c103d5e951752209d.scope: Deactivated successfully.
Dec  2 06:03:35 np0005542249 systemd[1]: libpod-9ef2da7dc89a0d9386f8d18ce6a3ca874af12a97aff26c9c103d5e951752209d.scope: Consumed 1.056s CPU time.
Dec  2 06:03:35 np0005542249 podman[189163]: 2025-12-02 11:03:35.168699615 +0000 UTC m=+1.307746417 container died 9ef2da7dc89a0d9386f8d18ce6a3ca874af12a97aff26c9c103d5e951752209d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_khorana, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:03:35 np0005542249 systemd[1]: var-lib-containers-storage-overlay-514a5473303f522191c6c4c4896cee99aa0251b039252e6bb37692a4b74abd93-merged.mount: Deactivated successfully.
Dec  2 06:03:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:03:35 np0005542249 podman[189163]: 2025-12-02 11:03:35.227412674 +0000 UTC m=+1.366459476 container remove 9ef2da7dc89a0d9386f8d18ce6a3ca874af12a97aff26c9c103d5e951752209d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Dec  2 06:03:35 np0005542249 systemd[1]: libpod-conmon-9ef2da7dc89a0d9386f8d18ce6a3ca874af12a97aff26c9c103d5e951752209d.scope: Deactivated successfully.
Dec  2 06:03:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:03:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:03:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:03:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:03:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:03:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:03:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:03:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:03:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:03:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:03:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:03:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:03:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:03:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:03:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:03:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:03:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:03:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:03:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:03:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:03:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:03:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:03:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:03:36 np0005542249 podman[189365]: 2025-12-02 11:03:36.106164796 +0000 UTC m=+0.060238902 container create 4ce9b64acdb9fa0143bead1e9ba0fb88061cc3f251bbe14262a815111bf07ff4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  2 06:03:36 np0005542249 systemd[1]: Started libpod-conmon-4ce9b64acdb9fa0143bead1e9ba0fb88061cc3f251bbe14262a815111bf07ff4.scope.
Dec  2 06:03:36 np0005542249 podman[189365]: 2025-12-02 11:03:36.085491916 +0000 UTC m=+0.039566012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:03:36 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:03:36 np0005542249 podman[189365]: 2025-12-02 11:03:36.212667659 +0000 UTC m=+0.166741735 container init 4ce9b64acdb9fa0143bead1e9ba0fb88061cc3f251bbe14262a815111bf07ff4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shaw, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  2 06:03:36 np0005542249 podman[189365]: 2025-12-02 11:03:36.220840221 +0000 UTC m=+0.174914297 container start 4ce9b64acdb9fa0143bead1e9ba0fb88061cc3f251bbe14262a815111bf07ff4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  2 06:03:36 np0005542249 podman[189365]: 2025-12-02 11:03:36.224453678 +0000 UTC m=+0.178527774 container attach 4ce9b64acdb9fa0143bead1e9ba0fb88061cc3f251bbe14262a815111bf07ff4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  2 06:03:36 np0005542249 optimistic_shaw[189382]: 167 167
Dec  2 06:03:36 np0005542249 podman[189365]: 2025-12-02 11:03:36.226205566 +0000 UTC m=+0.180279642 container died 4ce9b64acdb9fa0143bead1e9ba0fb88061cc3f251bbe14262a815111bf07ff4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:03:36 np0005542249 systemd[1]: libpod-4ce9b64acdb9fa0143bead1e9ba0fb88061cc3f251bbe14262a815111bf07ff4.scope: Deactivated successfully.
Dec  2 06:03:36 np0005542249 systemd[1]: var-lib-containers-storage-overlay-e16d9f469ce1c7cbeb7b722ebabfc61ddccf586205f2a9e88abf13259e213c05-merged.mount: Deactivated successfully.
Dec  2 06:03:36 np0005542249 podman[189365]: 2025-12-02 11:03:36.261942583 +0000 UTC m=+0.216016659 container remove 4ce9b64acdb9fa0143bead1e9ba0fb88061cc3f251bbe14262a815111bf07ff4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shaw, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:03:36 np0005542249 systemd[1]: libpod-conmon-4ce9b64acdb9fa0143bead1e9ba0fb88061cc3f251bbe14262a815111bf07ff4.scope: Deactivated successfully.
Dec  2 06:03:36 np0005542249 podman[189406]: 2025-12-02 11:03:36.45943564 +0000 UTC m=+0.048003580 container create da6ee68fcf11e339de83914d42ee91e0fb3eccd3423b37c58d21a3299f9c6803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_fermi, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:03:36 np0005542249 systemd[1]: Started libpod-conmon-da6ee68fcf11e339de83914d42ee91e0fb3eccd3423b37c58d21a3299f9c6803.scope.
Dec  2 06:03:36 np0005542249 podman[189406]: 2025-12-02 11:03:36.440434126 +0000 UTC m=+0.029002086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:03:36 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:03:36 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eca1e9d9eb52f5f511c69fa8d9a2a5aac35de2ed298736e5e2112ef4e18c8e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:03:36 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eca1e9d9eb52f5f511c69fa8d9a2a5aac35de2ed298736e5e2112ef4e18c8e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:03:36 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eca1e9d9eb52f5f511c69fa8d9a2a5aac35de2ed298736e5e2112ef4e18c8e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:03:36 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eca1e9d9eb52f5f511c69fa8d9a2a5aac35de2ed298736e5e2112ef4e18c8e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:03:36 np0005542249 podman[189406]: 2025-12-02 11:03:36.596645845 +0000 UTC m=+0.185213835 container init da6ee68fcf11e339de83914d42ee91e0fb3eccd3423b37c58d21a3299f9c6803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_fermi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:03:36 np0005542249 podman[189406]: 2025-12-02 11:03:36.605232307 +0000 UTC m=+0.193800257 container start da6ee68fcf11e339de83914d42ee91e0fb3eccd3423b37c58d21a3299f9c6803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  2 06:03:36 np0005542249 podman[189406]: 2025-12-02 11:03:36.609709599 +0000 UTC m=+0.198277589 container attach da6ee68fcf11e339de83914d42ee91e0fb3eccd3423b37c58d21a3299f9c6803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_fermi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]: {
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:    "0": [
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:        {
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "devices": [
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "/dev/loop3"
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            ],
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "lv_name": "ceph_lv0",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "lv_size": "21470642176",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "name": "ceph_lv0",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "tags": {
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.cluster_name": "ceph",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.crush_device_class": "",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.encrypted": "0",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.osd_id": "0",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.type": "block",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.vdo": "0"
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            },
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "type": "block",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "vg_name": "ceph_vg0"
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:        }
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:    ],
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:    "1": [
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:        {
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "devices": [
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "/dev/loop4"
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            ],
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "lv_name": "ceph_lv1",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "lv_size": "21470642176",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "name": "ceph_lv1",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "tags": {
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.cluster_name": "ceph",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.crush_device_class": "",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.encrypted": "0",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.osd_id": "1",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.type": "block",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.vdo": "0"
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            },
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "type": "block",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "vg_name": "ceph_vg1"
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:        }
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:    ],
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:    "2": [
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:        {
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "devices": [
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "/dev/loop5"
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            ],
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "lv_name": "ceph_lv2",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "lv_size": "21470642176",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "name": "ceph_lv2",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "tags": {
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.cluster_name": "ceph",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.crush_device_class": "",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.encrypted": "0",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.osd_id": "2",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.type": "block",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:                "ceph.vdo": "0"
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            },
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "type": "block",
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:            "vg_name": "ceph_vg2"
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:        }
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]:    ]
Dec  2 06:03:37 np0005542249 thirsty_fermi[189423]: }
Dec  2 06:03:37 np0005542249 systemd[1]: libpod-da6ee68fcf11e339de83914d42ee91e0fb3eccd3423b37c58d21a3299f9c6803.scope: Deactivated successfully.
Dec  2 06:03:37 np0005542249 podman[189406]: 2025-12-02 11:03:37.371923205 +0000 UTC m=+0.960491155 container died da6ee68fcf11e339de83914d42ee91e0fb3eccd3423b37c58d21a3299f9c6803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:03:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:37 np0005542249 systemd[1]: var-lib-containers-storage-overlay-1eca1e9d9eb52f5f511c69fa8d9a2a5aac35de2ed298736e5e2112ef4e18c8e1-merged.mount: Deactivated successfully.
Dec  2 06:03:37 np0005542249 podman[189406]: 2025-12-02 11:03:37.450248735 +0000 UTC m=+1.038816695 container remove da6ee68fcf11e339de83914d42ee91e0fb3eccd3423b37c58d21a3299f9c6803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_fermi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  2 06:03:37 np0005542249 systemd[1]: libpod-conmon-da6ee68fcf11e339de83914d42ee91e0fb3eccd3423b37c58d21a3299f9c6803.scope: Deactivated successfully.
Dec  2 06:03:38 np0005542249 podman[189583]: 2025-12-02 11:03:38.189340796 +0000 UTC m=+0.047056886 container create 7a1678ff1918ee4a26459160b83838f69be8e7c2f3f23e962a822c17a49ca6b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:03:38 np0005542249 systemd[1]: Started libpod-conmon-7a1678ff1918ee4a26459160b83838f69be8e7c2f3f23e962a822c17a49ca6b0.scope.
Dec  2 06:03:38 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:03:38 np0005542249 podman[189583]: 2025-12-02 11:03:38.165024267 +0000 UTC m=+0.022740317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:03:38 np0005542249 podman[189583]: 2025-12-02 11:03:38.274396938 +0000 UTC m=+0.132113028 container init 7a1678ff1918ee4a26459160b83838f69be8e7c2f3f23e962a822c17a49ca6b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_goldstine, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  2 06:03:38 np0005542249 podman[189583]: 2025-12-02 11:03:38.282576199 +0000 UTC m=+0.140292289 container start 7a1678ff1918ee4a26459160b83838f69be8e7c2f3f23e962a822c17a49ca6b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  2 06:03:38 np0005542249 podman[189583]: 2025-12-02 11:03:38.287719289 +0000 UTC m=+0.145435379 container attach 7a1678ff1918ee4a26459160b83838f69be8e7c2f3f23e962a822c17a49ca6b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_goldstine, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  2 06:03:38 np0005542249 systemd[1]: libpod-7a1678ff1918ee4a26459160b83838f69be8e7c2f3f23e962a822c17a49ca6b0.scope: Deactivated successfully.
Dec  2 06:03:38 np0005542249 loving_goldstine[189599]: 167 167
Dec  2 06:03:38 np0005542249 conmon[189599]: conmon 7a1678ff1918ee4a2645 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7a1678ff1918ee4a26459160b83838f69be8e7c2f3f23e962a822c17a49ca6b0.scope/container/memory.events
Dec  2 06:03:38 np0005542249 podman[189583]: 2025-12-02 11:03:38.290177806 +0000 UTC m=+0.147893866 container died 7a1678ff1918ee4a26459160b83838f69be8e7c2f3f23e962a822c17a49ca6b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_goldstine, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  2 06:03:38 np0005542249 systemd[1]: var-lib-containers-storage-overlay-ecac2607901d17aa194a55925efb583e7e03a459fc659bc31dd42a29cceef0e9-merged.mount: Deactivated successfully.
Dec  2 06:03:38 np0005542249 podman[189583]: 2025-12-02 11:03:38.337120556 +0000 UTC m=+0.194836606 container remove 7a1678ff1918ee4a26459160b83838f69be8e7c2f3f23e962a822c17a49ca6b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_goldstine, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:03:38 np0005542249 systemd[1]: libpod-conmon-7a1678ff1918ee4a26459160b83838f69be8e7c2f3f23e962a822c17a49ca6b0.scope: Deactivated successfully.
Dec  2 06:03:38 np0005542249 podman[189624]: 2025-12-02 11:03:38.526788412 +0000 UTC m=+0.060237592 container create c1287c2dfb32890e3fbaafa4785e338d0ac845086619ace8fe8f05ea9c03d091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_rubin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  2 06:03:38 np0005542249 systemd[1]: Started libpod-conmon-c1287c2dfb32890e3fbaafa4785e338d0ac845086619ace8fe8f05ea9c03d091.scope.
Dec  2 06:03:38 np0005542249 podman[189624]: 2025-12-02 11:03:38.497956101 +0000 UTC m=+0.031405361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:03:38 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:03:38 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e01cd4c213e1dbcc29f8c084108aa2504babe4d9d6df215230839345ef88014/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:03:38 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e01cd4c213e1dbcc29f8c084108aa2504babe4d9d6df215230839345ef88014/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:03:38 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e01cd4c213e1dbcc29f8c084108aa2504babe4d9d6df215230839345ef88014/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:03:38 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e01cd4c213e1dbcc29f8c084108aa2504babe4d9d6df215230839345ef88014/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:03:38 np0005542249 podman[189624]: 2025-12-02 11:03:38.663945085 +0000 UTC m=+0.197394275 container init c1287c2dfb32890e3fbaafa4785e338d0ac845086619ace8fe8f05ea9c03d091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_rubin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:03:38 np0005542249 podman[189624]: 2025-12-02 11:03:38.67598883 +0000 UTC m=+0.209438050 container start c1287c2dfb32890e3fbaafa4785e338d0ac845086619ace8fe8f05ea9c03d091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_rubin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  2 06:03:38 np0005542249 podman[189624]: 2025-12-02 11:03:38.683922015 +0000 UTC m=+0.217371205 container attach c1287c2dfb32890e3fbaafa4785e338d0ac845086619ace8fe8f05ea9c03d091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_rubin, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:03:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:39 np0005542249 agitated_rubin[189641]: {
Dec  2 06:03:39 np0005542249 agitated_rubin[189641]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:03:39 np0005542249 agitated_rubin[189641]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:03:39 np0005542249 agitated_rubin[189641]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:03:39 np0005542249 agitated_rubin[189641]:        "osd_id": 0,
Dec  2 06:03:39 np0005542249 agitated_rubin[189641]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:03:39 np0005542249 agitated_rubin[189641]:        "type": "bluestore"
Dec  2 06:03:39 np0005542249 agitated_rubin[189641]:    },
Dec  2 06:03:39 np0005542249 agitated_rubin[189641]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:03:39 np0005542249 agitated_rubin[189641]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:03:39 np0005542249 agitated_rubin[189641]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:03:39 np0005542249 agitated_rubin[189641]:        "osd_id": 2,
Dec  2 06:03:39 np0005542249 agitated_rubin[189641]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:03:39 np0005542249 agitated_rubin[189641]:        "type": "bluestore"
Dec  2 06:03:39 np0005542249 agitated_rubin[189641]:    },
Dec  2 06:03:39 np0005542249 agitated_rubin[189641]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:03:39 np0005542249 agitated_rubin[189641]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:03:39 np0005542249 agitated_rubin[189641]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:03:39 np0005542249 agitated_rubin[189641]:        "osd_id": 1,
Dec  2 06:03:39 np0005542249 agitated_rubin[189641]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:03:39 np0005542249 agitated_rubin[189641]:        "type": "bluestore"
Dec  2 06:03:39 np0005542249 agitated_rubin[189641]:    }
Dec  2 06:03:39 np0005542249 agitated_rubin[189641]: }
Dec  2 06:03:39 np0005542249 systemd[1]: libpod-c1287c2dfb32890e3fbaafa4785e338d0ac845086619ace8fe8f05ea9c03d091.scope: Deactivated successfully.
Dec  2 06:03:39 np0005542249 podman[189624]: 2025-12-02 11:03:39.7567145 +0000 UTC m=+1.290163710 container died c1287c2dfb32890e3fbaafa4785e338d0ac845086619ace8fe8f05ea9c03d091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_rubin, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:03:39 np0005542249 systemd[1]: libpod-c1287c2dfb32890e3fbaafa4785e338d0ac845086619ace8fe8f05ea9c03d091.scope: Consumed 1.087s CPU time.
Dec  2 06:03:39 np0005542249 systemd[1]: var-lib-containers-storage-overlay-3e01cd4c213e1dbcc29f8c084108aa2504babe4d9d6df215230839345ef88014-merged.mount: Deactivated successfully.
Dec  2 06:03:39 np0005542249 podman[189624]: 2025-12-02 11:03:39.826956041 +0000 UTC m=+1.360405231 container remove c1287c2dfb32890e3fbaafa4785e338d0ac845086619ace8fe8f05ea9c03d091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 06:03:39 np0005542249 systemd[1]: libpod-conmon-c1287c2dfb32890e3fbaafa4785e338d0ac845086619ace8fe8f05ea9c03d091.scope: Deactivated successfully.
Dec  2 06:03:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:03:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:03:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:03:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:03:39 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 9dbe0257-e1e5-48ac-96df-2234b21bf3f8 does not exist
Dec  2 06:03:39 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 84d604be-6b96-4fbe-a266-30ed2f8edbd1 does not exist
Dec  2 06:03:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:03:40 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:03:40 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:03:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:42 np0005542249 kernel: SELinux:  Converting 2769 SID table entries...
Dec  2 06:03:42 np0005542249 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 06:03:42 np0005542249 kernel: SELinux:  policy capability open_perms=1
Dec  2 06:03:42 np0005542249 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 06:03:42 np0005542249 kernel: SELinux:  policy capability always_check_network=0
Dec  2 06:03:42 np0005542249 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 06:03:42 np0005542249 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 06:03:42 np0005542249 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 06:03:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:44 np0005542249 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Dec  2 06:03:44 np0005542249 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec  2 06:03:44 np0005542249 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Dec  2 06:03:45 np0005542249 podman[189778]: 2025-12-02 11:03:45.186068944 +0000 UTC m=+0.086110713 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  2 06:03:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:03:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:03:51 np0005542249 podman[190016]: 2025-12-02 11:03:51.231714343 +0000 UTC m=+0.106774403 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:03:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:52 np0005542249 systemd[1]: Stopping OpenSSH server daemon...
Dec  2 06:03:52 np0005542249 systemd[1]: sshd.service: Deactivated successfully.
Dec  2 06:03:52 np0005542249 systemd[1]: Stopped OpenSSH server daemon.
Dec  2 06:03:52 np0005542249 systemd[1]: sshd.service: Consumed 3.890s CPU time, read 32.0K from disk, written 12.0K to disk.
Dec  2 06:03:52 np0005542249 systemd[1]: Stopped target sshd-keygen.target.
Dec  2 06:03:52 np0005542249 systemd[1]: Stopping sshd-keygen.target...
Dec  2 06:03:52 np0005542249 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  2 06:03:52 np0005542249 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  2 06:03:52 np0005542249 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  2 06:03:52 np0005542249 systemd[1]: Reached target sshd-keygen.target.
Dec  2 06:03:52 np0005542249 systemd[1]: Starting OpenSSH server daemon...
Dec  2 06:03:52 np0005542249 systemd[1]: Started OpenSSH server daemon.
Dec  2 06:03:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:55 np0005542249 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  2 06:03:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:03:55 np0005542249 systemd[1]: Starting man-db-cache-update.service...
Dec  2 06:03:55 np0005542249 systemd[1]: Reloading.
Dec  2 06:03:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:55 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:03:55 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:03:55 np0005542249 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  2 06:03:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:03:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:03:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:03:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:03:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:03:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:03:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:03:59 np0005542249 python3.9[195436]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  2 06:03:59 np0005542249 systemd[1]: Reloading.
Dec  2 06:03:59 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:03:59 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:04:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:04:00 np0005542249 python3.9[196582]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  2 06:04:00 np0005542249 systemd[1]: Reloading.
Dec  2 06:04:01 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:04:01 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:04:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:01 np0005542249 auditd[702]: Audit daemon rotating log files
Dec  2 06:04:02 np0005542249 python3.9[197698]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  2 06:04:02 np0005542249 systemd[1]: Reloading.
Dec  2 06:04:02 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:04:02 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:04:03 np0005542249 python3.9[199012]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  2 06:04:03 np0005542249 systemd[1]: Reloading.
Dec  2 06:04:03 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:04:03 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:04:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:04 np0005542249 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  2 06:04:04 np0005542249 systemd[1]: Finished man-db-cache-update.service.
Dec  2 06:04:04 np0005542249 systemd[1]: man-db-cache-update.service: Consumed 11.455s CPU time.
Dec  2 06:04:04 np0005542249 systemd[1]: run-rc0de34426dee4bdc886453a19bd53cb2.service: Deactivated successfully.
Dec  2 06:04:04 np0005542249 python3.9[200175]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 06:04:04 np0005542249 systemd[1]: Reloading.
Dec  2 06:04:04 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:04:04 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:04:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:04:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:05 np0005542249 python3.9[200366]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 06:04:05 np0005542249 systemd[1]: Reloading.
Dec  2 06:04:06 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:04:06 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:04:07 np0005542249 python3.9[200556]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 06:04:07 np0005542249 systemd[1]: Reloading.
Dec  2 06:04:07 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:04:07 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:04:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:08 np0005542249 python3.9[200747]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 06:04:09 np0005542249 python3.9[200902]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 06:04:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:09 np0005542249 systemd[1]: Reloading.
Dec  2 06:04:09 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:04:09 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:04:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:04:10 np0005542249 python3.9[201092]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  2 06:04:10 np0005542249 systemd[1]: Reloading.
Dec  2 06:04:10 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:04:10 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:04:11 np0005542249 systemd[1]: Listening on libvirt proxy daemon socket.
Dec  2 06:04:11 np0005542249 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec  2 06:04:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:12 np0005542249 python3.9[201285]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 06:04:13 np0005542249 python3.9[201440]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 06:04:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:15 np0005542249 python3.9[201595]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 06:04:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:04:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:15 np0005542249 podman[201722]: 2025-12-02 11:04:15.761546818 +0000 UTC m=+0.113749210 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  2 06:04:16 np0005542249 python3.9[201770]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 06:04:16 np0005542249 python3.9[201931]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 06:04:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:17 np0005542249 python3.9[202086]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 06:04:18 np0005542249 python3.9[202241]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 06:04:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:19 np0005542249 python3.9[202396]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 06:04:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:04:19.814 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:04:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:04:19.815 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:04:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:04:19.815 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:04:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:04:20 np0005542249 python3.9[202551]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 06:04:21 np0005542249 podman[202706]: 2025-12-02 11:04:21.374500624 +0000 UTC m=+0.083421236 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:04:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:21 np0005542249 python3.9[202707]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 06:04:22 np0005542249 python3.9[202880]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 06:04:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:23 np0005542249 python3.9[203035]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 06:04:24 np0005542249 python3.9[203190]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 06:04:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:04:25 np0005542249 python3.9[203345]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 06:04:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:04:26
Dec  2 06:04:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:04:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:04:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['images', 'backups', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'vms', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta']
Dec  2 06:04:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:04:26 np0005542249 python3.9[203500]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:04:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:04:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:04:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:04:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:04:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:04:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:04:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:04:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:04:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:04:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:04:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:04:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:04:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:04:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:04:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:04:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:04:26 np0005542249 python3.9[203652]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:04:27 np0005542249 ceph-mgr[75372]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2127781581
Dec  2 06:04:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:27 np0005542249 python3.9[203804]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:04:28 np0005542249 python3.9[203956]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:04:29 np0005542249 python3.9[204108]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:04:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:29 np0005542249 python3.9[204260]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:04:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:04:30 np0005542249 python3.9[204412]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:04:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:31 np0005542249 python3.9[204537]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764673470.00537-554-40426842952929/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:32 np0005542249 python3.9[204689]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:04:32 np0005542249 python3.9[204814]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764673471.6371987-554-94183390238693/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:33 np0005542249 python3.9[204966]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:04:34 np0005542249 python3.9[205091]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764673473.0682824-554-95713394440488/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:34 np0005542249 python3.9[205243]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:04:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:04:35 np0005542249 python3.9[205368]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764673474.3204067-554-268451974659331/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:04:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:04:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:04:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:04:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:04:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:04:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:04:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:04:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:04:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:04:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:04:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:04:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:04:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:04:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:04:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:04:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:04:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:04:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:04:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:04:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:04:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:04:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:04:36 np0005542249 python3.9[205520]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:04:36 np0005542249 python3.9[205645]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764673475.5920453-554-81891190853456/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:37 np0005542249 python3.9[205797]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:04:38 np0005542249 python3.9[205922]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764673476.9075105-554-47662572799216/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:38 np0005542249 python3.9[206074]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:04:39 np0005542249 python3.9[206197]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764673478.2919173-554-56659679162995/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:40 np0005542249 python3.9[206349]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:04:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:04:40 np0005542249 python3.9[206588]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764673479.5701356-554-32751170086492/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:04:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:04:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:04:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:04:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:04:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:04:40 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 1f34f9e9-3f15-49fa-9fb4-96dbd609294c does not exist
Dec  2 06:04:40 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev bdaad2eb-f4d1-4eff-a343-589d2c29526e does not exist
Dec  2 06:04:40 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev f633b1f7-36b7-47c4-94e2-201165f27fb3 does not exist
Dec  2 06:04:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:04:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:04:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:04:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:04:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:04:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:04:41 np0005542249 python3.9[206855]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec  2 06:04:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:41 np0005542249 podman[206924]: 2025-12-02 11:04:41.535588962 +0000 UTC m=+0.037174400 container create 89351732d9877add99a9737c4cf40b4caf5a850c888ab03fc2f2f8ef7a6bf315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khorana, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:04:41 np0005542249 systemd[1]: Started libpod-conmon-89351732d9877add99a9737c4cf40b4caf5a850c888ab03fc2f2f8ef7a6bf315.scope.
Dec  2 06:04:41 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:04:41 np0005542249 podman[206924]: 2025-12-02 11:04:41.61353422 +0000 UTC m=+0.115119708 container init 89351732d9877add99a9737c4cf40b4caf5a850c888ab03fc2f2f8ef7a6bf315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khorana, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:04:41 np0005542249 podman[206924]: 2025-12-02 11:04:41.518310684 +0000 UTC m=+0.019896142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:04:41 np0005542249 podman[206924]: 2025-12-02 11:04:41.621571088 +0000 UTC m=+0.123156536 container start 89351732d9877add99a9737c4cf40b4caf5a850c888ab03fc2f2f8ef7a6bf315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khorana, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:04:41 np0005542249 podman[206924]: 2025-12-02 11:04:41.62567347 +0000 UTC m=+0.127258958 container attach 89351732d9877add99a9737c4cf40b4caf5a850c888ab03fc2f2f8ef7a6bf315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khorana, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  2 06:04:41 np0005542249 systemd[1]: libpod-89351732d9877add99a9737c4cf40b4caf5a850c888ab03fc2f2f8ef7a6bf315.scope: Deactivated successfully.
Dec  2 06:04:41 np0005542249 intelligent_khorana[206953]: 167 167
Dec  2 06:04:41 np0005542249 conmon[206953]: conmon 89351732d9877add99a9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-89351732d9877add99a9737c4cf40b4caf5a850c888ab03fc2f2f8ef7a6bf315.scope/container/memory.events
Dec  2 06:04:41 np0005542249 podman[206924]: 2025-12-02 11:04:41.629327898 +0000 UTC m=+0.130913336 container died 89351732d9877add99a9737c4cf40b4caf5a850c888ab03fc2f2f8ef7a6bf315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:04:41 np0005542249 systemd[1]: var-lib-containers-storage-overlay-5b48f8216af5b7e3ac6d7e98e0d048af31437b978c0577977db6da85524c7670-merged.mount: Deactivated successfully.
Dec  2 06:04:41 np0005542249 podman[206924]: 2025-12-02 11:04:41.668909544 +0000 UTC m=+0.170494992 container remove 89351732d9877add99a9737c4cf40b4caf5a850c888ab03fc2f2f8ef7a6bf315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khorana, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  2 06:04:41 np0005542249 systemd[1]: libpod-conmon-89351732d9877add99a9737c4cf40b4caf5a850c888ab03fc2f2f8ef7a6bf315.scope: Deactivated successfully.
Dec  2 06:04:41 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:04:41 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:04:41 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:04:41 np0005542249 podman[207077]: 2025-12-02 11:04:41.863504689 +0000 UTC m=+0.043207374 container create 92cedb9c209d237544054fa2b8af00c5107c9f0f45fea07cb56e0181d366606d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:04:41 np0005542249 systemd[1]: Started libpod-conmon-92cedb9c209d237544054fa2b8af00c5107c9f0f45fea07cb56e0181d366606d.scope.
Dec  2 06:04:41 np0005542249 podman[207077]: 2025-12-02 11:04:41.841099321 +0000 UTC m=+0.020802036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:04:41 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:04:41 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8dcdbf716ada9ad1853855d20cb2ed49e5110778eaf97e8547499be21b65b23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:04:41 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8dcdbf716ada9ad1853855d20cb2ed49e5110778eaf97e8547499be21b65b23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:04:41 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8dcdbf716ada9ad1853855d20cb2ed49e5110778eaf97e8547499be21b65b23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:04:41 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8dcdbf716ada9ad1853855d20cb2ed49e5110778eaf97e8547499be21b65b23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:04:41 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8dcdbf716ada9ad1853855d20cb2ed49e5110778eaf97e8547499be21b65b23/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:04:41 np0005542249 podman[207077]: 2025-12-02 11:04:41.995848163 +0000 UTC m=+0.175550878 container init 92cedb9c209d237544054fa2b8af00c5107c9f0f45fea07cb56e0181d366606d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 06:04:42 np0005542249 podman[207077]: 2025-12-02 11:04:42.003654266 +0000 UTC m=+0.183356971 container start 92cedb9c209d237544054fa2b8af00c5107c9f0f45fea07cb56e0181d366606d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keller, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:04:42 np0005542249 podman[207077]: 2025-12-02 11:04:42.007409637 +0000 UTC m=+0.187112372 container attach 92cedb9c209d237544054fa2b8af00c5107c9f0f45fea07cb56e0181d366606d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  2 06:04:42 np0005542249 python3.9[207104]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:42 np0005542249 python3.9[207263]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:43 np0005542249 clever_keller[207107]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:04:43 np0005542249 clever_keller[207107]: --> relative data size: 1.0
Dec  2 06:04:43 np0005542249 clever_keller[207107]: --> All data devices are unavailable
Dec  2 06:04:43 np0005542249 systemd[1]: libpod-92cedb9c209d237544054fa2b8af00c5107c9f0f45fea07cb56e0181d366606d.scope: Deactivated successfully.
Dec  2 06:04:43 np0005542249 podman[207410]: 2025-12-02 11:04:43.118953616 +0000 UTC m=+0.031551887 container died 92cedb9c209d237544054fa2b8af00c5107c9f0f45fea07cb56e0181d366606d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keller, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  2 06:04:43 np0005542249 systemd[1]: var-lib-containers-storage-overlay-c8dcdbf716ada9ad1853855d20cb2ed49e5110778eaf97e8547499be21b65b23-merged.mount: Deactivated successfully.
Dec  2 06:04:43 np0005542249 podman[207410]: 2025-12-02 11:04:43.187231051 +0000 UTC m=+0.099829282 container remove 92cedb9c209d237544054fa2b8af00c5107c9f0f45fea07cb56e0181d366606d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keller, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  2 06:04:43 np0005542249 systemd[1]: libpod-conmon-92cedb9c209d237544054fa2b8af00c5107c9f0f45fea07cb56e0181d366606d.scope: Deactivated successfully.
Dec  2 06:04:43 np0005542249 python3.9[207453]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:43 np0005542249 podman[207706]: 2025-12-02 11:04:43.778281264 +0000 UTC m=+0.044630224 container create 51262a0902ede3d7e513de2ef62ea246e4503addaca944925847736b08eeb989 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:04:43 np0005542249 systemd[1]: Started libpod-conmon-51262a0902ede3d7e513de2ef62ea246e4503addaca944925847736b08eeb989.scope.
Dec  2 06:04:43 np0005542249 podman[207706]: 2025-12-02 11:04:43.757838189 +0000 UTC m=+0.024187139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:04:43 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:04:43 np0005542249 podman[207706]: 2025-12-02 11:04:43.874588059 +0000 UTC m=+0.140936989 container init 51262a0902ede3d7e513de2ef62ea246e4503addaca944925847736b08eeb989 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mclaren, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  2 06:04:43 np0005542249 podman[207706]: 2025-12-02 11:04:43.885579228 +0000 UTC m=+0.151928168 container start 51262a0902ede3d7e513de2ef62ea246e4503addaca944925847736b08eeb989 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mclaren, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  2 06:04:43 np0005542249 podman[207706]: 2025-12-02 11:04:43.889268478 +0000 UTC m=+0.155617408 container attach 51262a0902ede3d7e513de2ef62ea246e4503addaca944925847736b08eeb989 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:04:43 np0005542249 zen_mclaren[207761]: 167 167
Dec  2 06:04:43 np0005542249 systemd[1]: libpod-51262a0902ede3d7e513de2ef62ea246e4503addaca944925847736b08eeb989.scope: Deactivated successfully.
Dec  2 06:04:43 np0005542249 conmon[207761]: conmon 51262a0902ede3d7e513 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-51262a0902ede3d7e513de2ef62ea246e4503addaca944925847736b08eeb989.scope/container/memory.events
Dec  2 06:04:43 np0005542249 podman[207706]: 2025-12-02 11:04:43.897417409 +0000 UTC m=+0.163766379 container died 51262a0902ede3d7e513de2ef62ea246e4503addaca944925847736b08eeb989 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mclaren, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  2 06:04:43 np0005542249 systemd[1]: var-lib-containers-storage-overlay-a0bae5ea2514ae1a0e6a12292cdca169fbeeb45f5ac0526812fb69cb1f4ff50e-merged.mount: Deactivated successfully.
Dec  2 06:04:43 np0005542249 podman[207706]: 2025-12-02 11:04:43.939353688 +0000 UTC m=+0.205702628 container remove 51262a0902ede3d7e513de2ef62ea246e4503addaca944925847736b08eeb989 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mclaren, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:04:43 np0005542249 systemd[1]: libpod-conmon-51262a0902ede3d7e513de2ef62ea246e4503addaca944925847736b08eeb989.scope: Deactivated successfully.
Dec  2 06:04:44 np0005542249 python3.9[207763]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:44 np0005542249 podman[207807]: 2025-12-02 11:04:44.177418894 +0000 UTC m=+0.058988153 container create dc0317bbf120ea2ee37b6974dfc07c5ab40af7548a5c8f6a9c8b191beb26107c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_poincare, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  2 06:04:44 np0005542249 systemd[1]: Started libpod-conmon-dc0317bbf120ea2ee37b6974dfc07c5ab40af7548a5c8f6a9c8b191beb26107c.scope.
Dec  2 06:04:44 np0005542249 podman[207807]: 2025-12-02 11:04:44.152776315 +0000 UTC m=+0.034345584 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:04:44 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:04:44 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f19702941909f1e74493568393d9ba53bec407ca1147d91dcb6ded11f1ba4e7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:04:44 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f19702941909f1e74493568393d9ba53bec407ca1147d91dcb6ded11f1ba4e7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:04:44 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f19702941909f1e74493568393d9ba53bec407ca1147d91dcb6ded11f1ba4e7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:04:44 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f19702941909f1e74493568393d9ba53bec407ca1147d91dcb6ded11f1ba4e7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:04:44 np0005542249 podman[207807]: 2025-12-02 11:04:44.285635333 +0000 UTC m=+0.167204572 container init dc0317bbf120ea2ee37b6974dfc07c5ab40af7548a5c8f6a9c8b191beb26107c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_poincare, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  2 06:04:44 np0005542249 podman[207807]: 2025-12-02 11:04:44.297388743 +0000 UTC m=+0.178957952 container start dc0317bbf120ea2ee37b6974dfc07c5ab40af7548a5c8f6a9c8b191beb26107c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_poincare, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:04:44 np0005542249 podman[207807]: 2025-12-02 11:04:44.301040191 +0000 UTC m=+0.182609420 container attach dc0317bbf120ea2ee37b6974dfc07c5ab40af7548a5c8f6a9c8b191beb26107c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:04:44 np0005542249 python3.9[207958]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]: {
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:    "0": [
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:        {
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "devices": [
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "/dev/loop3"
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            ],
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "lv_name": "ceph_lv0",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "lv_size": "21470642176",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "name": "ceph_lv0",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "tags": {
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.cluster_name": "ceph",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.crush_device_class": "",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.encrypted": "0",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.osd_id": "0",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.type": "block",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.vdo": "0"
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            },
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "type": "block",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "vg_name": "ceph_vg0"
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:        }
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:    ],
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:    "1": [
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:        {
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "devices": [
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "/dev/loop4"
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            ],
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "lv_name": "ceph_lv1",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "lv_size": "21470642176",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "name": "ceph_lv1",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "tags": {
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.cluster_name": "ceph",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.crush_device_class": "",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.encrypted": "0",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.osd_id": "1",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.type": "block",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.vdo": "0"
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            },
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "type": "block",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "vg_name": "ceph_vg1"
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:        }
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:    ],
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:    "2": [
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:        {
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "devices": [
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "/dev/loop5"
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            ],
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "lv_name": "ceph_lv2",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "lv_size": "21470642176",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "name": "ceph_lv2",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "tags": {
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.cluster_name": "ceph",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.crush_device_class": "",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.encrypted": "0",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.osd_id": "2",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.type": "block",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:                "ceph.vdo": "0"
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            },
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "type": "block",
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:            "vg_name": "ceph_vg2"
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:        }
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]:    ]
Dec  2 06:04:45 np0005542249 thirsty_poincare[207854]: }
Dec  2 06:04:45 np0005542249 systemd[1]: libpod-dc0317bbf120ea2ee37b6974dfc07c5ab40af7548a5c8f6a9c8b191beb26107c.scope: Deactivated successfully.
Dec  2 06:04:45 np0005542249 podman[207807]: 2025-12-02 11:04:45.164193845 +0000 UTC m=+1.045763104 container died dc0317bbf120ea2ee37b6974dfc07c5ab40af7548a5c8f6a9c8b191beb26107c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_poincare, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  2 06:04:45 np0005542249 systemd[1]: var-lib-containers-storage-overlay-f19702941909f1e74493568393d9ba53bec407ca1147d91dcb6ded11f1ba4e7c-merged.mount: Deactivated successfully.
Dec  2 06:04:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:04:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:45 np0005542249 python3.9[208125]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:45 np0005542249 podman[207807]: 2025-12-02 11:04:45.534568073 +0000 UTC m=+1.416137302 container remove dc0317bbf120ea2ee37b6974dfc07c5ab40af7548a5c8f6a9c8b191beb26107c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Dec  2 06:04:45 np0005542249 systemd[1]: libpod-conmon-dc0317bbf120ea2ee37b6974dfc07c5ab40af7548a5c8f6a9c8b191beb26107c.scope: Deactivated successfully.
Dec  2 06:04:45 np0005542249 podman[208307]: 2025-12-02 11:04:45.961063617 +0000 UTC m=+0.110314937 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Dec  2 06:04:46 np0005542249 python3.9[208399]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:46 np0005542249 podman[208441]: 2025-12-02 11:04:46.240161107 +0000 UTC m=+0.046725620 container create e370bf5409d312148db3cae1af18b27a32beb4cf537c4bc3c19d11a9c3327f04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_murdock, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  2 06:04:46 np0005542249 systemd[1]: Started libpod-conmon-e370bf5409d312148db3cae1af18b27a32beb4cf537c4bc3c19d11a9c3327f04.scope.
Dec  2 06:04:46 np0005542249 podman[208441]: 2025-12-02 11:04:46.217270686 +0000 UTC m=+0.023835249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:04:46 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:04:46 np0005542249 podman[208441]: 2025-12-02 11:04:46.349752904 +0000 UTC m=+0.156317467 container init e370bf5409d312148db3cae1af18b27a32beb4cf537c4bc3c19d11a9c3327f04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  2 06:04:46 np0005542249 podman[208441]: 2025-12-02 11:04:46.358843931 +0000 UTC m=+0.165408464 container start e370bf5409d312148db3cae1af18b27a32beb4cf537c4bc3c19d11a9c3327f04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_murdock, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:04:46 np0005542249 podman[208441]: 2025-12-02 11:04:46.362033787 +0000 UTC m=+0.168598300 container attach e370bf5409d312148db3cae1af18b27a32beb4cf537c4bc3c19d11a9c3327f04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  2 06:04:46 np0005542249 systemd[1]: libpod-e370bf5409d312148db3cae1af18b27a32beb4cf537c4bc3c19d11a9c3327f04.scope: Deactivated successfully.
Dec  2 06:04:46 np0005542249 modest_murdock[208481]: 167 167
Dec  2 06:04:46 np0005542249 conmon[208481]: conmon e370bf5409d312148db3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e370bf5409d312148db3cae1af18b27a32beb4cf537c4bc3c19d11a9c3327f04.scope/container/memory.events
Dec  2 06:04:46 np0005542249 podman[208441]: 2025-12-02 11:04:46.366657553 +0000 UTC m=+0.173222076 container died e370bf5409d312148db3cae1af18b27a32beb4cf537c4bc3c19d11a9c3327f04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_murdock, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  2 06:04:46 np0005542249 systemd[1]: var-lib-containers-storage-overlay-671895794b605628232aeaf12de46c9bd93456faf210627f51080877880c127c-merged.mount: Deactivated successfully.
Dec  2 06:04:46 np0005542249 podman[208441]: 2025-12-02 11:04:46.40998065 +0000 UTC m=+0.216545183 container remove e370bf5409d312148db3cae1af18b27a32beb4cf537c4bc3c19d11a9c3327f04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_murdock, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  2 06:04:46 np0005542249 systemd[1]: libpod-conmon-e370bf5409d312148db3cae1af18b27a32beb4cf537c4bc3c19d11a9c3327f04.scope: Deactivated successfully.
Dec  2 06:04:46 np0005542249 podman[208591]: 2025-12-02 11:04:46.606391144 +0000 UTC m=+0.053665548 container create ab05f481be41d422c5d02f698f3b839564ad456de06eebe0d522871316f0f91d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  2 06:04:46 np0005542249 systemd[1]: Started libpod-conmon-ab05f481be41d422c5d02f698f3b839564ad456de06eebe0d522871316f0f91d.scope.
Dec  2 06:04:46 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:04:46 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ad18018c2209625a82b5240a475a1bef91e6ae66624e64a489f5f6875722b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:04:46 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ad18018c2209625a82b5240a475a1bef91e6ae66624e64a489f5f6875722b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:04:46 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ad18018c2209625a82b5240a475a1bef91e6ae66624e64a489f5f6875722b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:04:46 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ad18018c2209625a82b5240a475a1bef91e6ae66624e64a489f5f6875722b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:04:46 np0005542249 podman[208591]: 2025-12-02 11:04:46.583653947 +0000 UTC m=+0.030928341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:04:46 np0005542249 podman[208591]: 2025-12-02 11:04:46.694370444 +0000 UTC m=+0.141644868 container init ab05f481be41d422c5d02f698f3b839564ad456de06eebe0d522871316f0f91d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:04:46 np0005542249 podman[208591]: 2025-12-02 11:04:46.707558292 +0000 UTC m=+0.154832676 container start ab05f481be41d422c5d02f698f3b839564ad456de06eebe0d522871316f0f91d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:04:46 np0005542249 podman[208591]: 2025-12-02 11:04:46.711681844 +0000 UTC m=+0.158956238 container attach ab05f481be41d422c5d02f698f3b839564ad456de06eebe0d522871316f0f91d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  2 06:04:46 np0005542249 python3.9[208649]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:47 np0005542249 python3.9[208804]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:47 np0005542249 crazy_ramanujan[208647]: {
Dec  2 06:04:47 np0005542249 crazy_ramanujan[208647]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:04:47 np0005542249 crazy_ramanujan[208647]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:04:47 np0005542249 crazy_ramanujan[208647]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:04:47 np0005542249 crazy_ramanujan[208647]:        "osd_id": 0,
Dec  2 06:04:47 np0005542249 crazy_ramanujan[208647]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:04:47 np0005542249 crazy_ramanujan[208647]:        "type": "bluestore"
Dec  2 06:04:47 np0005542249 crazy_ramanujan[208647]:    },
Dec  2 06:04:47 np0005542249 crazy_ramanujan[208647]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:04:47 np0005542249 crazy_ramanujan[208647]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:04:47 np0005542249 crazy_ramanujan[208647]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:04:47 np0005542249 crazy_ramanujan[208647]:        "osd_id": 2,
Dec  2 06:04:47 np0005542249 crazy_ramanujan[208647]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:04:47 np0005542249 crazy_ramanujan[208647]:        "type": "bluestore"
Dec  2 06:04:47 np0005542249 crazy_ramanujan[208647]:    },
Dec  2 06:04:47 np0005542249 crazy_ramanujan[208647]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:04:47 np0005542249 crazy_ramanujan[208647]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:04:47 np0005542249 crazy_ramanujan[208647]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:04:47 np0005542249 crazy_ramanujan[208647]:        "osd_id": 1,
Dec  2 06:04:47 np0005542249 crazy_ramanujan[208647]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:04:47 np0005542249 crazy_ramanujan[208647]:        "type": "bluestore"
Dec  2 06:04:47 np0005542249 crazy_ramanujan[208647]:    }
Dec  2 06:04:47 np0005542249 crazy_ramanujan[208647]: }
Dec  2 06:04:47 np0005542249 systemd[1]: libpod-ab05f481be41d422c5d02f698f3b839564ad456de06eebe0d522871316f0f91d.scope: Deactivated successfully.
Dec  2 06:04:47 np0005542249 systemd[1]: libpod-ab05f481be41d422c5d02f698f3b839564ad456de06eebe0d522871316f0f91d.scope: Consumed 1.014s CPU time.
Dec  2 06:04:47 np0005542249 conmon[208647]: conmon ab05f481be41d422c5d0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ab05f481be41d422c5d02f698f3b839564ad456de06eebe0d522871316f0f91d.scope/container/memory.events
Dec  2 06:04:47 np0005542249 podman[208591]: 2025-12-02 11:04:47.714764748 +0000 UTC m=+1.162039132 container died ab05f481be41d422c5d02f698f3b839564ad456de06eebe0d522871316f0f91d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  2 06:04:47 np0005542249 systemd[1]: var-lib-containers-storage-overlay-68ad18018c2209625a82b5240a475a1bef91e6ae66624e64a489f5f6875722b2-merged.mount: Deactivated successfully.
Dec  2 06:04:47 np0005542249 podman[208591]: 2025-12-02 11:04:47.795394607 +0000 UTC m=+1.242668981 container remove ab05f481be41d422c5d02f698f3b839564ad456de06eebe0d522871316f0f91d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  2 06:04:47 np0005542249 systemd[1]: libpod-conmon-ab05f481be41d422c5d02f698f3b839564ad456de06eebe0d522871316f0f91d.scope: Deactivated successfully.
Dec  2 06:04:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:04:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:04:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:04:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:04:47 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 8f707d9f-7218-4da0-860b-2f9fe51b811c does not exist
Dec  2 06:04:47 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 8b36f6b3-10b4-405f-873c-77a42c5afb6a does not exist
Dec  2 06:04:48 np0005542249 python3.9[209046]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:48 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:04:48 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:04:48 np0005542249 python3.9[209200]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:49 np0005542249 python3.9[209352]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:04:50 np0005542249 python3.9[209504]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:51 np0005542249 python3.9[209656]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:51 np0005542249 podman[209780]: 2025-12-02 11:04:51.817465435 +0000 UTC m=+0.113307808 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:04:52 np0005542249 python3.9[209827]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:04:52 np0005542249 python3.9[209950]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673491.39171-775-107235250577621/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:53 np0005542249 python3.9[210102]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:04:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:53 np0005542249 python3.9[210225]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673492.7774575-775-142567273721027/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:54 np0005542249 python3.9[210377]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:04:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:04:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:55 np0005542249 python3.9[210500]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673494.0854843-775-222923818416738/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:56 np0005542249 python3.9[210652]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:04:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:04:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:04:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:04:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:04:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:04:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:04:56 np0005542249 python3.9[210775]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673495.7591224-775-206362168207862/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:57 np0005542249 python3.9[210927]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:04:58 np0005542249 python3.9[211050]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673497.1508493-775-127751918459667/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:04:59 np0005542249 python3.9[211202]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:04:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:04:59 np0005542249 python3.9[211325]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673498.5526297-775-11747752625205/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:05:00 np0005542249 python3.9[211477]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:05:01 np0005542249 python3.9[211600]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673499.9526908-775-93607858152932/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:01 np0005542249 python3.9[211752]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:05:02 np0005542249 python3.9[211875]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673501.1614642-775-34893770642749/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:02 np0005542249 python3.9[212027]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:05:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:03 np0005542249 python3.9[212150]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673502.414896-775-228163286088748/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:04 np0005542249 python3.9[212302]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:05:04 np0005542249 python3.9[212425]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673503.7270296-775-177960215357669/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:05:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:05 np0005542249 python3.9[212577]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:05:06 np0005542249 python3.9[212700]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673505.0546603-775-143679310525893/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:06 np0005542249 python3.9[212852]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:05:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:07 np0005542249 python3.9[212975]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673506.4320219-775-109399469938112/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:08 np0005542249 python3.9[213127]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:05:09 np0005542249 python3.9[213250]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673507.8004303-775-278730022192888/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:09 np0005542249 python3.9[213402]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:05:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:05:10 np0005542249 python3.9[213525]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673509.2615798-775-213671443141334/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:11 np0005542249 python3.9[213675]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:05:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:12 np0005542249 python3.9[213830]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec  2 06:05:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:13 np0005542249 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec  2 06:05:14 np0005542249 python3.9[213986]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:14 np0005542249 python3.9[214138]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:05:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:15 np0005542249 python3.9[214290]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:16 np0005542249 podman[214442]: 2025-12-02 11:05:16.208333237 +0000 UTC m=+0.127117646 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  2 06:05:16 np0005542249 python3.9[214443]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:16 np0005542249 python3.9[214620]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:17 np0005542249 python3.9[214772]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:18 np0005542249 python3.9[214924]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:19 np0005542249 python3.9[215076]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:05:19.815 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:05:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:05:19.815 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:05:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:05:19.815 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:05:19 np0005542249 python3.9[215228]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:05:20 np0005542249 python3.9[215380]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:21 np0005542249 python3.9[215532]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 06:05:21 np0005542249 systemd[1]: Reloading.
Dec  2 06:05:21 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:05:21 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:05:21 np0005542249 systemd[1]: Starting libvirt logging daemon socket...
Dec  2 06:05:21 np0005542249 systemd[1]: Listening on libvirt logging daemon socket.
Dec  2 06:05:21 np0005542249 systemd[1]: Starting libvirt logging daemon admin socket...
Dec  2 06:05:21 np0005542249 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec  2 06:05:21 np0005542249 systemd[1]: Starting libvirt logging daemon...
Dec  2 06:05:22 np0005542249 podman[215569]: 2025-12-02 11:05:22.010315926 +0000 UTC m=+0.082027254 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  2 06:05:22 np0005542249 systemd[1]: Started libvirt logging daemon.
Dec  2 06:05:22 np0005542249 python3.9[215743]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 06:05:22 np0005542249 systemd[1]: Reloading.
Dec  2 06:05:23 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:05:23 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:05:23 np0005542249 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec  2 06:05:23 np0005542249 systemd[1]: Starting libvirt nodedev daemon socket...
Dec  2 06:05:23 np0005542249 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec  2 06:05:23 np0005542249 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec  2 06:05:23 np0005542249 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec  2 06:05:23 np0005542249 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec  2 06:05:23 np0005542249 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec  2 06:05:23 np0005542249 systemd[1]: Starting libvirt nodedev daemon...
Dec  2 06:05:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:23 np0005542249 systemd[1]: Started libvirt nodedev daemon.
Dec  2 06:05:23 np0005542249 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec  2 06:05:23 np0005542249 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec  2 06:05:23 np0005542249 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec  2 06:05:24 np0005542249 python3.9[215968]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 06:05:24 np0005542249 systemd[1]: Reloading.
Dec  2 06:05:24 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:05:24 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:05:24 np0005542249 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec  2 06:05:24 np0005542249 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec  2 06:05:24 np0005542249 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec  2 06:05:24 np0005542249 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec  2 06:05:24 np0005542249 systemd[1]: Starting libvirt proxy daemon...
Dec  2 06:05:24 np0005542249 systemd[1]: Started libvirt proxy daemon.
Dec  2 06:05:24 np0005542249 setroubleshoot[215780]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l e28cab28-7c27-4fc7-b28a-2eb602a77993
Dec  2 06:05:24 np0005542249 setroubleshoot[215780]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Dec  2 06:05:24 np0005542249 setroubleshoot[215780]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l e28cab28-7c27-4fc7-b28a-2eb602a77993
Dec  2 06:05:24 np0005542249 setroubleshoot[215780]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Dec  2 06:05:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:05:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:25 np0005542249 python3.9[216182]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 06:05:25 np0005542249 systemd[1]: Reloading.
Dec  2 06:05:25 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:05:25 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:05:26 np0005542249 systemd[1]: Listening on libvirt locking daemon socket.
Dec  2 06:05:26 np0005542249 systemd[1]: Starting libvirt QEMU daemon socket...
Dec  2 06:05:26 np0005542249 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec  2 06:05:26 np0005542249 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec  2 06:05:26 np0005542249 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec  2 06:05:26 np0005542249 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec  2 06:05:26 np0005542249 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec  2 06:05:26 np0005542249 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec  2 06:05:26 np0005542249 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec  2 06:05:26 np0005542249 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec  2 06:05:26 np0005542249 systemd[1]: Starting libvirt QEMU daemon...
Dec  2 06:05:26 np0005542249 systemd[1]: Started libvirt QEMU daemon.
Dec  2 06:05:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:05:26
Dec  2 06:05:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:05:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:05:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', '.mgr', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'vms']
Dec  2 06:05:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:05:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:05:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:05:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:05:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:05:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:05:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:05:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:05:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:05:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:05:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:05:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:05:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:05:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:05:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:05:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:05:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:05:27 np0005542249 python3.9[216397]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 06:05:27 np0005542249 systemd[1]: Reloading.
Dec  2 06:05:27 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:05:27 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:05:27 np0005542249 systemd[1]: Starting libvirt secret daemon socket...
Dec  2 06:05:27 np0005542249 systemd[1]: Listening on libvirt secret daemon socket.
Dec  2 06:05:27 np0005542249 systemd[1]: Starting libvirt secret daemon admin socket...
Dec  2 06:05:27 np0005542249 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec  2 06:05:27 np0005542249 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec  2 06:05:27 np0005542249 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec  2 06:05:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:27 np0005542249 systemd[1]: Starting libvirt secret daemon...
Dec  2 06:05:27 np0005542249 systemd[1]: Started libvirt secret daemon.
Dec  2 06:05:28 np0005542249 python3.9[216609]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:29 np0005542249 python3.9[216761]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  2 06:05:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:29 np0005542249 python3.9[216913]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:05:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:05:30 np0005542249 python3.9[217067]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  2 06:05:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:31 np0005542249 python3.9[217217]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:05:32 np0005542249 python3.9[217338]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764673531.0026875-1133-270636734430248/.source.xml follow=False _original_basename=secret.xml.j2 checksum=b9fc9d19250c9e1e865d71a5f2675a4cad99a152 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:32 np0005542249 python3.9[217490]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 95bc4eaa-1a14-59bf-acf2-4b3da055547d#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:05:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:33 np0005542249 python3.9[217652]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:34 np0005542249 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec  2 06:05:34 np0005542249 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec  2 06:05:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:05:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:05:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:05:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:05:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:05:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:05:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:05:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:05:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:05:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:05:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:05:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:05:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:05:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:05:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:05:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:05:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:05:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:05:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:05:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:05:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:05:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:05:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:05:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:05:36 np0005542249 python3.9[218115]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:36 np0005542249 python3.9[218267]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:05:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:37 np0005542249 python3.9[218390]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764673536.3919785-1188-38962894416326/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:38 np0005542249 python3.9[218542]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:39 np0005542249 python3.9[218694]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:05:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:39 np0005542249 python3.9[218772]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:05:40 np0005542249 python3.9[218924]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:05:41 np0005542249 python3.9[219002]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.kn2orsly recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:41 np0005542249 python3.9[219154]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:05:42 np0005542249 python3.9[219232]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:43 np0005542249 python3.9[219384]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:05:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:44 np0005542249 python3[219537]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  2 06:05:44 np0005542249 python3.9[219689]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:05:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:05:45 np0005542249 python3.9[219767]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:46 np0005542249 python3.9[219919]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:05:46 np0005542249 podman[219969]: 2025-12-02 11:05:46.638189903 +0000 UTC m=+0.182652521 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  2 06:05:46 np0005542249 python3.9[220008]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:47 np0005542249 python3.9[220173]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:05:48 np0005542249 python3.9[220251]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:48 np0005542249 python3.9[220515]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:05:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:05:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:05:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:05:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:05:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:05:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:05:48 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev c2d5a9f6-e424-4659-8cfb-083c3c816ab3 does not exist
Dec  2 06:05:48 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev d0e79c3b-66d2-4366-9ed9-55f777273ba6 does not exist
Dec  2 06:05:48 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev a0a3c0e2-f476-4a81-af21-4e9a23b928d7 does not exist
Dec  2 06:05:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:05:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:05:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:05:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:05:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:05:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:05:49 np0005542249 python3.9[220659]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:49 np0005542249 podman[220819]: 2025-12-02 11:05:49.568464291 +0000 UTC m=+0.040898709 container create eb004bc9a4baadbaa5021f16fa335bac036f632ebaf96a7e3fc0b95af4c61558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:05:49 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:05:49 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:05:49 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:05:49 np0005542249 systemd[1]: Started libpod-conmon-eb004bc9a4baadbaa5021f16fa335bac036f632ebaf96a7e3fc0b95af4c61558.scope.
Dec  2 06:05:49 np0005542249 podman[220819]: 2025-12-02 11:05:49.550228127 +0000 UTC m=+0.022662555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:05:49 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:05:49 np0005542249 podman[220819]: 2025-12-02 11:05:49.681723611 +0000 UTC m=+0.154158109 container init eb004bc9a4baadbaa5021f16fa335bac036f632ebaf96a7e3fc0b95af4c61558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  2 06:05:49 np0005542249 podman[220819]: 2025-12-02 11:05:49.695182836 +0000 UTC m=+0.167617294 container start eb004bc9a4baadbaa5021f16fa335bac036f632ebaf96a7e3fc0b95af4c61558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  2 06:05:49 np0005542249 podman[220819]: 2025-12-02 11:05:49.700739626 +0000 UTC m=+0.173174084 container attach eb004bc9a4baadbaa5021f16fa335bac036f632ebaf96a7e3fc0b95af4c61558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:05:49 np0005542249 competent_carver[220844]: 167 167
Dec  2 06:05:49 np0005542249 systemd[1]: libpod-eb004bc9a4baadbaa5021f16fa335bac036f632ebaf96a7e3fc0b95af4c61558.scope: Deactivated successfully.
Dec  2 06:05:49 np0005542249 podman[220819]: 2025-12-02 11:05:49.706085301 +0000 UTC m=+0.178519789 container died eb004bc9a4baadbaa5021f16fa335bac036f632ebaf96a7e3fc0b95af4c61558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  2 06:05:49 np0005542249 systemd[1]: var-lib-containers-storage-overlay-9f629433295b61d0b35d444f42994c59e673b7087d5286e0d4d323ffa5636775-merged.mount: Deactivated successfully.
Dec  2 06:05:49 np0005542249 podman[220819]: 2025-12-02 11:05:49.761703248 +0000 UTC m=+0.234137696 container remove eb004bc9a4baadbaa5021f16fa335bac036f632ebaf96a7e3fc0b95af4c61558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 06:05:49 np0005542249 systemd[1]: libpod-conmon-eb004bc9a4baadbaa5021f16fa335bac036f632ebaf96a7e3fc0b95af4c61558.scope: Deactivated successfully.
Dec  2 06:05:49 np0005542249 podman[220939]: 2025-12-02 11:05:49.944192684 +0000 UTC m=+0.054743075 container create 5226d6b6017b12113ca0850249baf528a6168fb032f00bfc1147f60709bcc189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cray, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  2 06:05:50 np0005542249 systemd[1]: Started libpod-conmon-5226d6b6017b12113ca0850249baf528a6168fb032f00bfc1147f60709bcc189.scope.
Dec  2 06:05:50 np0005542249 podman[220939]: 2025-12-02 11:05:49.920608525 +0000 UTC m=+0.031158956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:05:50 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:05:50 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/025f83d5e02bad1d21ad44c24f67ab3b8be4f9a2e8e4a642be3bdc969216c934/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:05:50 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/025f83d5e02bad1d21ad44c24f67ab3b8be4f9a2e8e4a642be3bdc969216c934/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:05:50 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/025f83d5e02bad1d21ad44c24f67ab3b8be4f9a2e8e4a642be3bdc969216c934/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:05:50 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/025f83d5e02bad1d21ad44c24f67ab3b8be4f9a2e8e4a642be3bdc969216c934/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:05:50 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/025f83d5e02bad1d21ad44c24f67ab3b8be4f9a2e8e4a642be3bdc969216c934/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:05:50 np0005542249 podman[220939]: 2025-12-02 11:05:50.061235175 +0000 UTC m=+0.171785586 container init 5226d6b6017b12113ca0850249baf528a6168fb032f00bfc1147f60709bcc189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cray, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Dec  2 06:05:50 np0005542249 podman[220939]: 2025-12-02 11:05:50.074871685 +0000 UTC m=+0.185422076 container start 5226d6b6017b12113ca0850249baf528a6168fb032f00bfc1147f60709bcc189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cray, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  2 06:05:50 np0005542249 podman[220939]: 2025-12-02 11:05:50.07840416 +0000 UTC m=+0.188954551 container attach 5226d6b6017b12113ca0850249baf528a6168fb032f00bfc1147f60709bcc189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cray, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  2 06:05:50 np0005542249 python3.9[220955]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:05:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:05:50 np0005542249 python3.9[221090]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764673549.4567375-1313-126703338767454/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:51 np0005542249 admiring_cray[220961]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:05:51 np0005542249 admiring_cray[220961]: --> relative data size: 1.0
Dec  2 06:05:51 np0005542249 admiring_cray[220961]: --> All data devices are unavailable
Dec  2 06:05:51 np0005542249 systemd[1]: libpod-5226d6b6017b12113ca0850249baf528a6168fb032f00bfc1147f60709bcc189.scope: Deactivated successfully.
Dec  2 06:05:51 np0005542249 systemd[1]: libpod-5226d6b6017b12113ca0850249baf528a6168fb032f00bfc1147f60709bcc189.scope: Consumed 1.105s CPU time.
Dec  2 06:05:51 np0005542249 podman[220939]: 2025-12-02 11:05:51.249549139 +0000 UTC m=+1.360099580 container died 5226d6b6017b12113ca0850249baf528a6168fb032f00bfc1147f60709bcc189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cray, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:05:51 np0005542249 systemd[1]: var-lib-containers-storage-overlay-025f83d5e02bad1d21ad44c24f67ab3b8be4f9a2e8e4a642be3bdc969216c934-merged.mount: Deactivated successfully.
Dec  2 06:05:51 np0005542249 podman[220939]: 2025-12-02 11:05:51.32414872 +0000 UTC m=+1.434699151 container remove 5226d6b6017b12113ca0850249baf528a6168fb032f00bfc1147f60709bcc189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cray, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:05:51 np0005542249 systemd[1]: libpod-conmon-5226d6b6017b12113ca0850249baf528a6168fb032f00bfc1147f60709bcc189.scope: Deactivated successfully.
Dec  2 06:05:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:51 np0005542249 python3.9[221267]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:52 np0005542249 podman[221539]: 2025-12-02 11:05:52.066430506 +0000 UTC m=+0.064321634 container create 66f3fbcc96c6bfd4c20f0b2cbe6cbffad6505812b0a660c14da05fcf73397f70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  2 06:05:52 np0005542249 systemd[1]: Started libpod-conmon-66f3fbcc96c6bfd4c20f0b2cbe6cbffad6505812b0a660c14da05fcf73397f70.scope.
Dec  2 06:05:52 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:05:52 np0005542249 podman[221539]: 2025-12-02 11:05:52.038271263 +0000 UTC m=+0.036162421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:05:52 np0005542249 podman[221539]: 2025-12-02 11:05:52.154600915 +0000 UTC m=+0.152492073 container init 66f3fbcc96c6bfd4c20f0b2cbe6cbffad6505812b0a660c14da05fcf73397f70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sammet, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:05:52 np0005542249 podman[221539]: 2025-12-02 11:05:52.168939085 +0000 UTC m=+0.166830163 container start 66f3fbcc96c6bfd4c20f0b2cbe6cbffad6505812b0a660c14da05fcf73397f70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:05:52 np0005542249 thirsty_sammet[221587]: 167 167
Dec  2 06:05:52 np0005542249 podman[221539]: 2025-12-02 11:05:52.173750395 +0000 UTC m=+0.171641513 container attach 66f3fbcc96c6bfd4c20f0b2cbe6cbffad6505812b0a660c14da05fcf73397f70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sammet, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:05:52 np0005542249 systemd[1]: libpod-66f3fbcc96c6bfd4c20f0b2cbe6cbffad6505812b0a660c14da05fcf73397f70.scope: Deactivated successfully.
Dec  2 06:05:52 np0005542249 podman[221539]: 2025-12-02 11:05:52.175172503 +0000 UTC m=+0.173063591 container died 66f3fbcc96c6bfd4c20f0b2cbe6cbffad6505812b0a660c14da05fcf73397f70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  2 06:05:52 np0005542249 podman[221582]: 2025-12-02 11:05:52.186655954 +0000 UTC m=+0.081803077 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  2 06:05:52 np0005542249 systemd[1]: var-lib-containers-storage-overlay-038ea8fd720c9493ecc47089667cc83a86bce1b34bc4025d2919b6f9ca7b68d0-merged.mount: Deactivated successfully.
Dec  2 06:05:52 np0005542249 podman[221539]: 2025-12-02 11:05:52.217578872 +0000 UTC m=+0.215469950 container remove 66f3fbcc96c6bfd4c20f0b2cbe6cbffad6505812b0a660c14da05fcf73397f70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sammet, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:05:52 np0005542249 systemd[1]: libpod-conmon-66f3fbcc96c6bfd4c20f0b2cbe6cbffad6505812b0a660c14da05fcf73397f70.scope: Deactivated successfully.
Dec  2 06:05:52 np0005542249 python3.9[221594]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:05:52 np0005542249 podman[221631]: 2025-12-02 11:05:52.475258395 +0000 UTC m=+0.069629167 container create a9ce3d8f547d78de16917a38389f678c5987f677a441735c356432f6df299324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_brattain, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:05:52 np0005542249 systemd[1]: Started libpod-conmon-a9ce3d8f547d78de16917a38389f678c5987f677a441735c356432f6df299324.scope.
Dec  2 06:05:52 np0005542249 podman[221631]: 2025-12-02 11:05:52.441152891 +0000 UTC m=+0.035523753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:05:52 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:05:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6b5b7698eb6331a642ea8887e5442dafffde141a26623a0908fa37adcd9766/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:05:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6b5b7698eb6331a642ea8887e5442dafffde141a26623a0908fa37adcd9766/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:05:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6b5b7698eb6331a642ea8887e5442dafffde141a26623a0908fa37adcd9766/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:05:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6b5b7698eb6331a642ea8887e5442dafffde141a26623a0908fa37adcd9766/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:05:52 np0005542249 podman[221631]: 2025-12-02 11:05:52.584901747 +0000 UTC m=+0.179272599 container init a9ce3d8f547d78de16917a38389f678c5987f677a441735c356432f6df299324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  2 06:05:52 np0005542249 podman[221631]: 2025-12-02 11:05:52.598462903 +0000 UTC m=+0.192833725 container start a9ce3d8f547d78de16917a38389f678c5987f677a441735c356432f6df299324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_brattain, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  2 06:05:52 np0005542249 podman[221631]: 2025-12-02 11:05:52.603929782 +0000 UTC m=+0.198300654 container attach a9ce3d8f547d78de16917a38389f678c5987f677a441735c356432f6df299324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]: {
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:    "0": [
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:        {
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "devices": [
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "/dev/loop3"
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            ],
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "lv_name": "ceph_lv0",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "lv_size": "21470642176",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "name": "ceph_lv0",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "tags": {
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.cluster_name": "ceph",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.crush_device_class": "",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.encrypted": "0",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.osd_id": "0",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.type": "block",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.vdo": "0"
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            },
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "type": "block",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "vg_name": "ceph_vg0"
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:        }
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:    ],
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:    "1": [
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:        {
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "devices": [
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "/dev/loop4"
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            ],
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "lv_name": "ceph_lv1",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "lv_size": "21470642176",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "name": "ceph_lv1",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "tags": {
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.cluster_name": "ceph",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.crush_device_class": "",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.encrypted": "0",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.osd_id": "1",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.type": "block",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.vdo": "0"
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            },
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "type": "block",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "vg_name": "ceph_vg1"
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:        }
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:    ],
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:    "2": [
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:        {
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "devices": [
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "/dev/loop5"
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            ],
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "lv_name": "ceph_lv2",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "lv_size": "21470642176",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "name": "ceph_lv2",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "tags": {
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.cluster_name": "ceph",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.crush_device_class": "",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.encrypted": "0",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.osd_id": "2",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.type": "block",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:                "ceph.vdo": "0"
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            },
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "type": "block",
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:            "vg_name": "ceph_vg2"
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:        }
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]:    ]
Dec  2 06:05:53 np0005542249 nifty_brattain[221671]: }
Dec  2 06:05:53 np0005542249 systemd[1]: libpod-a9ce3d8f547d78de16917a38389f678c5987f677a441735c356432f6df299324.scope: Deactivated successfully.
Dec  2 06:05:53 np0005542249 podman[221631]: 2025-12-02 11:05:53.387176898 +0000 UTC m=+0.981547690 container died a9ce3d8f547d78de16917a38389f678c5987f677a441735c356432f6df299324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_brattain, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:05:53 np0005542249 systemd[1]: var-lib-containers-storage-overlay-bc6b5b7698eb6331a642ea8887e5442dafffde141a26623a0908fa37adcd9766-merged.mount: Deactivated successfully.
Dec  2 06:05:53 np0005542249 python3.9[221803]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:53 np0005542249 podman[221631]: 2025-12-02 11:05:53.453494915 +0000 UTC m=+1.047865697 container remove a9ce3d8f547d78de16917a38389f678c5987f677a441735c356432f6df299324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_brattain, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  2 06:05:53 np0005542249 systemd[1]: libpod-conmon-a9ce3d8f547d78de16917a38389f678c5987f677a441735c356432f6df299324.scope: Deactivated successfully.
Dec  2 06:05:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:54 np0005542249 podman[222113]: 2025-12-02 11:05:54.164533464 +0000 UTC m=+0.044822896 container create 47747f7bce34e5225bd9295a88bdf72cb314309c6e862ddb235a61bb78d3907b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  2 06:05:54 np0005542249 systemd[1]: Started libpod-conmon-47747f7bce34e5225bd9295a88bdf72cb314309c6e862ddb235a61bb78d3907b.scope.
Dec  2 06:05:54 np0005542249 podman[222113]: 2025-12-02 11:05:54.143265798 +0000 UTC m=+0.023555210 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:05:54 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:05:54 np0005542249 podman[222113]: 2025-12-02 11:05:54.26293674 +0000 UTC m=+0.143226152 container init 47747f7bce34e5225bd9295a88bdf72cb314309c6e862ddb235a61bb78d3907b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hellman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  2 06:05:54 np0005542249 podman[222113]: 2025-12-02 11:05:54.271689177 +0000 UTC m=+0.151978589 container start 47747f7bce34e5225bd9295a88bdf72cb314309c6e862ddb235a61bb78d3907b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  2 06:05:54 np0005542249 podman[222113]: 2025-12-02 11:05:54.275122081 +0000 UTC m=+0.155411493 container attach 47747f7bce34e5225bd9295a88bdf72cb314309c6e862ddb235a61bb78d3907b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hellman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  2 06:05:54 np0005542249 unruffled_hellman[222129]: 167 167
Dec  2 06:05:54 np0005542249 systemd[1]: libpod-47747f7bce34e5225bd9295a88bdf72cb314309c6e862ddb235a61bb78d3907b.scope: Deactivated successfully.
Dec  2 06:05:54 np0005542249 podman[222113]: 2025-12-02 11:05:54.283550679 +0000 UTC m=+0.163840091 container died 47747f7bce34e5225bd9295a88bdf72cb314309c6e862ddb235a61bb78d3907b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  2 06:05:54 np0005542249 python3.9[222112]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:05:54 np0005542249 systemd[1]: var-lib-containers-storage-overlay-c76424bc8409ec3e0a6f124c6b97e54cd13d37fb7d44970068217367ba833efc-merged.mount: Deactivated successfully.
Dec  2 06:05:54 np0005542249 podman[222113]: 2025-12-02 11:05:54.324758905 +0000 UTC m=+0.205048317 container remove 47747f7bce34e5225bd9295a88bdf72cb314309c6e862ddb235a61bb78d3907b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hellman, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  2 06:05:54 np0005542249 systemd[1]: libpod-conmon-47747f7bce34e5225bd9295a88bdf72cb314309c6e862ddb235a61bb78d3907b.scope: Deactivated successfully.
Dec  2 06:05:54 np0005542249 podman[222178]: 2025-12-02 11:05:54.489594512 +0000 UTC m=+0.043901440 container create 6a113acebcc7c91027421bf0424155abd3cc81723ec3d15372bc5688eb32e9f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:05:54 np0005542249 systemd[1]: Started libpod-conmon-6a113acebcc7c91027421bf0424155abd3cc81723ec3d15372bc5688eb32e9f0.scope.
Dec  2 06:05:54 np0005542249 podman[222178]: 2025-12-02 11:05:54.469243341 +0000 UTC m=+0.023550319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:05:54 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:05:54 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda36839c2724c2298f26d95a4086464df5959faafbbf98d109ba157e8d37ed5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:05:54 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda36839c2724c2298f26d95a4086464df5959faafbbf98d109ba157e8d37ed5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:05:54 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda36839c2724c2298f26d95a4086464df5959faafbbf98d109ba157e8d37ed5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:05:54 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda36839c2724c2298f26d95a4086464df5959faafbbf98d109ba157e8d37ed5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:05:54 np0005542249 podman[222178]: 2025-12-02 11:05:54.59835788 +0000 UTC m=+0.152664828 container init 6a113acebcc7c91027421bf0424155abd3cc81723ec3d15372bc5688eb32e9f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bartik, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  2 06:05:54 np0005542249 podman[222178]: 2025-12-02 11:05:54.605389041 +0000 UTC m=+0.159695989 container start 6a113acebcc7c91027421bf0424155abd3cc81723ec3d15372bc5688eb32e9f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  2 06:05:54 np0005542249 podman[222178]: 2025-12-02 11:05:54.611041294 +0000 UTC m=+0.165348222 container attach 6a113acebcc7c91027421bf0424155abd3cc81723ec3d15372bc5688eb32e9f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  2 06:05:55 np0005542249 python3.9[222327]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 06:05:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:05:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:55 np0005542249 recursing_bartik[222217]: {
Dec  2 06:05:55 np0005542249 recursing_bartik[222217]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:05:55 np0005542249 recursing_bartik[222217]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:05:55 np0005542249 recursing_bartik[222217]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:05:55 np0005542249 recursing_bartik[222217]:        "osd_id": 0,
Dec  2 06:05:55 np0005542249 recursing_bartik[222217]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:05:55 np0005542249 recursing_bartik[222217]:        "type": "bluestore"
Dec  2 06:05:55 np0005542249 recursing_bartik[222217]:    },
Dec  2 06:05:55 np0005542249 recursing_bartik[222217]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:05:55 np0005542249 recursing_bartik[222217]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:05:55 np0005542249 recursing_bartik[222217]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:05:55 np0005542249 recursing_bartik[222217]:        "osd_id": 2,
Dec  2 06:05:55 np0005542249 recursing_bartik[222217]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:05:55 np0005542249 recursing_bartik[222217]:        "type": "bluestore"
Dec  2 06:05:55 np0005542249 recursing_bartik[222217]:    },
Dec  2 06:05:55 np0005542249 recursing_bartik[222217]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:05:55 np0005542249 recursing_bartik[222217]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:05:55 np0005542249 recursing_bartik[222217]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:05:55 np0005542249 recursing_bartik[222217]:        "osd_id": 1,
Dec  2 06:05:55 np0005542249 recursing_bartik[222217]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:05:55 np0005542249 recursing_bartik[222217]:        "type": "bluestore"
Dec  2 06:05:55 np0005542249 recursing_bartik[222217]:    }
Dec  2 06:05:55 np0005542249 recursing_bartik[222217]: }
Dec  2 06:05:55 np0005542249 systemd[1]: libpod-6a113acebcc7c91027421bf0424155abd3cc81723ec3d15372bc5688eb32e9f0.scope: Deactivated successfully.
Dec  2 06:05:55 np0005542249 podman[222178]: 2025-12-02 11:05:55.685915362 +0000 UTC m=+1.240222280 container died 6a113acebcc7c91027421bf0424155abd3cc81723ec3d15372bc5688eb32e9f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bartik, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  2 06:05:55 np0005542249 systemd[1]: libpod-6a113acebcc7c91027421bf0424155abd3cc81723ec3d15372bc5688eb32e9f0.scope: Consumed 1.084s CPU time.
Dec  2 06:05:55 np0005542249 systemd[1]: var-lib-containers-storage-overlay-dda36839c2724c2298f26d95a4086464df5959faafbbf98d109ba157e8d37ed5-merged.mount: Deactivated successfully.
Dec  2 06:05:55 np0005542249 podman[222178]: 2025-12-02 11:05:55.842126685 +0000 UTC m=+1.396433653 container remove 6a113acebcc7c91027421bf0424155abd3cc81723ec3d15372bc5688eb32e9f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  2 06:05:55 np0005542249 systemd[1]: libpod-conmon-6a113acebcc7c91027421bf0424155abd3cc81723ec3d15372bc5688eb32e9f0.scope: Deactivated successfully.
Dec  2 06:05:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:05:55 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:05:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:05:55 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:05:55 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev d8928ff0-fad3-467d-a4e5-6891fafe6540 does not exist
Dec  2 06:05:55 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 725dc6b3-1523-4939-a542-13d3d8cf7da0 does not exist
Dec  2 06:05:55 np0005542249 python3.9[222520]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:05:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:05:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:05:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:05:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:05:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:05:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:05:56 np0005542249 python3.9[222725]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:56 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:05:56 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:05:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:57 np0005542249 python3.9[222877]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:05:58 np0005542249 python3.9[223000]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764673556.9553776-1385-76334673858735/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:05:58 np0005542249 python3.9[223152]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:05:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:05:59 np0005542249 python3.9[223275]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764673558.401539-1400-231753307441921/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:06:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:06:00 np0005542249 python3.9[223427]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:06:00 np0005542249 python3.9[223550]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764673559.720271-1415-269479220067855/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:06:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:01 np0005542249 python3.9[223702]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 06:06:01 np0005542249 systemd[1]: Reloading.
Dec  2 06:06:01 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:06:01 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:06:02 np0005542249 systemd[1]: Reached target edpm_libvirt.target.
Dec  2 06:06:03 np0005542249 python3.9[223893]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  2 06:06:03 np0005542249 systemd[1]: Reloading.
Dec  2 06:06:03 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:06:03 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:06:03 np0005542249 systemd[1]: Reloading.
Dec  2 06:06:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:03 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:06:03 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:06:04 np0005542249 systemd[1]: session-49.scope: Deactivated successfully.
Dec  2 06:06:04 np0005542249 systemd[1]: session-49.scope: Consumed 3min 54.902s CPU time.
Dec  2 06:06:04 np0005542249 systemd-logind[787]: Session 49 logged out. Waiting for processes to exit.
Dec  2 06:06:04 np0005542249 systemd-logind[787]: Removed session 49.
Dec  2 06:06:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:06:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:10 np0005542249 systemd-logind[787]: New session 50 of user zuul.
Dec  2 06:06:10 np0005542249 systemd[1]: Started Session 50 of User zuul.
Dec  2 06:06:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:06:11 np0005542249 python3.9[224144]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 06:06:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:12 np0005542249 python3.9[224298]: ansible-ansible.builtin.service_facts Invoked
Dec  2 06:06:12 np0005542249 network[224315]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  2 06:06:12 np0005542249 network[224316]: 'network-scripts' will be removed from distribution in near future.
Dec  2 06:06:12 np0005542249 network[224317]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  2 06:06:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:06:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:16 np0005542249 podman[224404]: 2025-12-02 11:06:16.890565978 +0000 UTC m=+0.175291816 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  2 06:06:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:19 np0005542249 python3.9[224615]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 06:06:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:06:19.816 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:06:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:06:19.817 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:06:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:06:19.817 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:06:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:06:20 np0005542249 python3.9[224699]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 06:06:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:22 np0005542249 podman[224701]: 2025-12-02 11:06:22.98999242 +0000 UTC m=+0.068592168 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  2 06:06:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:06:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:06:26
Dec  2 06:06:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:06:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:06:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['images', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'backups', '.rgw.root', 'cephfs.cephfs.data']
Dec  2 06:06:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:06:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:06:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:06:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:06:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:06:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:06:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:06:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:06:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:06:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:06:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:06:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:06:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:06:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:06:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:06:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:06:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:06:27 np0005542249 python3.9[224873]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 06:06:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:28 np0005542249 python3.9[225025]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:06:29 np0005542249 python3.9[225178]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 06:06:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:29 np0005542249 python3.9[225330]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:06:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:06:30 np0005542249 python3.9[225483]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:06:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:31 np0005542249 python3.9[225606]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764673590.2333834-95-215732457817741/.source.iscsi _original_basename=.5r9hlp48 follow=False checksum=ffddfaffd52acc2ee5fc592a320709a3ba076a90 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:06:32 np0005542249 python3.9[225758]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:06:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:33 np0005542249 python3.9[225910]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:06:34 np0005542249 python3.9[226062]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 06:06:35 np0005542249 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec  2 06:06:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:06:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:06:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:06:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:06:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:06:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:06:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:06:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:06:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:06:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:06:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:06:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:06:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:06:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:06:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:06:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:06:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:06:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:06:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:06:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:06:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:06:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:06:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:06:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:06:35 np0005542249 python3.9[226218]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 06:06:35 np0005542249 systemd[1]: Reloading.
Dec  2 06:06:36 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:06:36 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:06:36 np0005542249 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  2 06:06:36 np0005542249 systemd[1]: Starting Open-iSCSI...
Dec  2 06:06:36 np0005542249 kernel: Loading iSCSI transport class v2.0-870.
Dec  2 06:06:36 np0005542249 systemd[1]: Started Open-iSCSI.
Dec  2 06:06:36 np0005542249 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec  2 06:06:36 np0005542249 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Dec  2 06:06:37 np0005542249 python3.9[226419]: ansible-ansible.builtin.service_facts Invoked
Dec  2 06:06:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:37 np0005542249 network[226436]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  2 06:06:37 np0005542249 network[226437]: 'network-scripts' will be removed from distribution in near future.
Dec  2 06:06:37 np0005542249 network[226438]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  2 06:06:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:06:40.254196) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673600254321, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1701, "num_deletes": 250, "total_data_size": 2835838, "memory_usage": 2877592, "flush_reason": "Manual Compaction"}
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673600270042, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1606352, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11813, "largest_seqno": 13513, "table_properties": {"data_size": 1600690, "index_size": 2801, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14068, "raw_average_key_size": 20, "raw_value_size": 1588312, "raw_average_value_size": 2269, "num_data_blocks": 129, "num_entries": 700, "num_filter_entries": 700, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764673409, "oldest_key_time": 1764673409, "file_creation_time": 1764673600, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 15882 microseconds, and 6033 cpu microseconds.
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:06:40.270098) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1606352 bytes OK
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:06:40.270124) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:06:40.272338) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:06:40.272355) EVENT_LOG_v1 {"time_micros": 1764673600272349, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:06:40.272380) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2828577, prev total WAL file size 2828577, number of live WAL files 2.
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:06:40.273557) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1568KB)], [29(7850KB)]
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673600273651, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9644844, "oldest_snapshot_seqno": -1}
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4010 keys, 7528393 bytes, temperature: kUnknown
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673600353356, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7528393, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7499765, "index_size": 17510, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 95631, "raw_average_key_size": 23, "raw_value_size": 7425592, "raw_average_value_size": 1851, "num_data_blocks": 761, "num_entries": 4010, "num_filter_entries": 4010, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672515, "oldest_key_time": 0, "file_creation_time": 1764673600, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:06:40.353790) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7528393 bytes
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:06:40.355311) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.8 rd, 94.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.7 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(10.7) write-amplify(4.7) OK, records in: 4432, records dropped: 422 output_compression: NoCompression
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:06:40.355328) EVENT_LOG_v1 {"time_micros": 1764673600355319, "job": 12, "event": "compaction_finished", "compaction_time_micros": 79831, "compaction_time_cpu_micros": 38829, "output_level": 6, "num_output_files": 1, "total_output_size": 7528393, "num_input_records": 4432, "num_output_records": 4010, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673600355697, "job": 12, "event": "table_file_deletion", "file_number": 31}
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673600356868, "job": 12, "event": "table_file_deletion", "file_number": 29}
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:06:40.273455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:06:40.357124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:06:40.357133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:06:40.357137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:06:40.357143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:06:40 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:06:40.357146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:06:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:43 np0005542249 python3.9[226710]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  2 06:06:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:44 np0005542249 python3.9[226862]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec  2 06:06:45 np0005542249 python3.9[227018]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:06:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:06:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:45 np0005542249 python3.9[227141]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764673604.5762818-172-47755272139403/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:06:46 np0005542249 python3.9[227293]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:06:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:47 np0005542249 podman[227417]: 2025-12-02 11:06:47.698825042 +0000 UTC m=+0.123750071 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  2 06:06:47 np0005542249 python3.9[227462]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 06:06:47 np0005542249 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  2 06:06:47 np0005542249 systemd[1]: Stopped Load Kernel Modules.
Dec  2 06:06:47 np0005542249 systemd[1]: Stopping Load Kernel Modules...
Dec  2 06:06:47 np0005542249 systemd[1]: Starting Load Kernel Modules...
Dec  2 06:06:48 np0005542249 systemd[1]: Finished Load Kernel Modules.
Dec  2 06:06:48 np0005542249 python3.9[227628]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:06:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:49 np0005542249 python3.9[227781]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 06:06:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:06:50 np0005542249 python3.9[227933]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 06:06:51 np0005542249 python3.9[228085]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:06:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:51 np0005542249 python3.9[228208]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764673610.7988462-230-81664791983869/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:06:52 np0005542249 python3.9[228360]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:06:53 np0005542249 podman[228485]: 2025-12-02 11:06:53.302585976 +0000 UTC m=+0.065822493 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  2 06:06:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:53 np0005542249 python3.9[228532]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:06:54 np0005542249 python3.9[228684]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:06:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:06:55 np0005542249 python3.9[228836]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:06:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:56 np0005542249 python3.9[228988]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:06:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:06:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:06:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:06:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:06:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:06:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:06:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:06:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:06:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:06:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:06:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:06:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:06:56 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 3585ec76-9dac-47c5-9866-28890736a310 does not exist
Dec  2 06:06:56 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev a9b7cd78-2c34-4839-b46b-7457126b957b does not exist
Dec  2 06:06:56 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 36b300c7-a89e-49dc-9f90-be0a7368dc32 does not exist
Dec  2 06:06:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:06:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:06:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:06:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:06:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:06:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:06:56 np0005542249 python3.9[229259]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:06:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:57 np0005542249 podman[229565]: 2025-12-02 11:06:57.627786296 +0000 UTC m=+0.058625068 container create 53763e7836184be509d668b7748ec682025b6639adc8326f16f94e41d207c216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_sutherland, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  2 06:06:57 np0005542249 systemd[1]: Started libpod-conmon-53763e7836184be509d668b7748ec682025b6639adc8326f16f94e41d207c216.scope.
Dec  2 06:06:57 np0005542249 podman[229565]: 2025-12-02 11:06:57.603989282 +0000 UTC m=+0.034828134 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:06:57 np0005542249 python3.9[229552]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:06:57 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:06:57 np0005542249 podman[229565]: 2025-12-02 11:06:57.732616874 +0000 UTC m=+0.163455656 container init 53763e7836184be509d668b7748ec682025b6639adc8326f16f94e41d207c216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_sutherland, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:06:57 np0005542249 podman[229565]: 2025-12-02 11:06:57.742572523 +0000 UTC m=+0.173411295 container start 53763e7836184be509d668b7748ec682025b6639adc8326f16f94e41d207c216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_sutherland, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:06:57 np0005542249 podman[229565]: 2025-12-02 11:06:57.74613625 +0000 UTC m=+0.176975032 container attach 53763e7836184be509d668b7748ec682025b6639adc8326f16f94e41d207c216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_sutherland, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:06:57 np0005542249 gracious_sutherland[229583]: 167 167
Dec  2 06:06:57 np0005542249 systemd[1]: libpod-53763e7836184be509d668b7748ec682025b6639adc8326f16f94e41d207c216.scope: Deactivated successfully.
Dec  2 06:06:57 np0005542249 conmon[229583]: conmon 53763e7836184be509d6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-53763e7836184be509d668b7748ec682025b6639adc8326f16f94e41d207c216.scope/container/memory.events
Dec  2 06:06:57 np0005542249 podman[229565]: 2025-12-02 11:06:57.75130563 +0000 UTC m=+0.182144402 container died 53763e7836184be509d668b7748ec682025b6639adc8326f16f94e41d207c216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  2 06:06:57 np0005542249 systemd[1]: var-lib-containers-storage-overlay-d562cb22eee1e43eea2f0041717a3fd1d4d5522c0dd659b254bbb6f8a9033d39-merged.mount: Deactivated successfully.
Dec  2 06:06:57 np0005542249 podman[229565]: 2025-12-02 11:06:57.796123873 +0000 UTC m=+0.226962645 container remove 53763e7836184be509d668b7748ec682025b6639adc8326f16f94e41d207c216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_sutherland, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:06:57 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:06:57 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:06:57 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:06:57 np0005542249 systemd[1]: libpod-conmon-53763e7836184be509d668b7748ec682025b6639adc8326f16f94e41d207c216.scope: Deactivated successfully.
Dec  2 06:06:57 np0005542249 podman[229674]: 2025-12-02 11:06:57.979903629 +0000 UTC m=+0.050340164 container create aa10392af02bd754cc56d0957ae82540510307875a72b2dee5b46a0e9e9b2c4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  2 06:06:58 np0005542249 systemd[1]: Started libpod-conmon-aa10392af02bd754cc56d0957ae82540510307875a72b2dee5b46a0e9e9b2c4e.scope.
Dec  2 06:06:58 np0005542249 podman[229674]: 2025-12-02 11:06:57.961597212 +0000 UTC m=+0.032033767 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:06:58 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:06:58 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9c34c13414d16c66314f891765e6f7598626194378374e02ab8208a002df061/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:06:58 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9c34c13414d16c66314f891765e6f7598626194378374e02ab8208a002df061/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:06:58 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9c34c13414d16c66314f891765e6f7598626194378374e02ab8208a002df061/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:06:58 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9c34c13414d16c66314f891765e6f7598626194378374e02ab8208a002df061/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:06:58 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9c34c13414d16c66314f891765e6f7598626194378374e02ab8208a002df061/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:06:58 np0005542249 podman[229674]: 2025-12-02 11:06:58.08781543 +0000 UTC m=+0.158252025 container init aa10392af02bd754cc56d0957ae82540510307875a72b2dee5b46a0e9e9b2c4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  2 06:06:58 np0005542249 podman[229674]: 2025-12-02 11:06:58.095360124 +0000 UTC m=+0.165796679 container start aa10392af02bd754cc56d0957ae82540510307875a72b2dee5b46a0e9e9b2c4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:06:58 np0005542249 podman[229674]: 2025-12-02 11:06:58.099285161 +0000 UTC m=+0.169721716 container attach aa10392af02bd754cc56d0957ae82540510307875a72b2dee5b46a0e9e9b2c4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ritchie, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:06:58 np0005542249 python3.9[229780]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:06:59 np0005542249 python3.9[229934]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 06:06:59 np0005542249 cranky_ritchie[229727]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:06:59 np0005542249 cranky_ritchie[229727]: --> relative data size: 1.0
Dec  2 06:06:59 np0005542249 cranky_ritchie[229727]: --> All data devices are unavailable
Dec  2 06:06:59 np0005542249 systemd[1]: libpod-aa10392af02bd754cc56d0957ae82540510307875a72b2dee5b46a0e9e9b2c4e.scope: Deactivated successfully.
Dec  2 06:06:59 np0005542249 podman[229674]: 2025-12-02 11:06:59.270429365 +0000 UTC m=+1.340865970 container died aa10392af02bd754cc56d0957ae82540510307875a72b2dee5b46a0e9e9b2c4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ritchie, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:06:59 np0005542249 systemd[1]: libpod-aa10392af02bd754cc56d0957ae82540510307875a72b2dee5b46a0e9e9b2c4e.scope: Consumed 1.108s CPU time.
Dec  2 06:06:59 np0005542249 systemd[1]: var-lib-containers-storage-overlay-e9c34c13414d16c66314f891765e6f7598626194378374e02ab8208a002df061-merged.mount: Deactivated successfully.
Dec  2 06:06:59 np0005542249 podman[229674]: 2025-12-02 11:06:59.344311745 +0000 UTC m=+1.414748280 container remove aa10392af02bd754cc56d0957ae82540510307875a72b2dee5b46a0e9e9b2c4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ritchie, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  2 06:06:59 np0005542249 systemd[1]: libpod-conmon-aa10392af02bd754cc56d0957ae82540510307875a72b2dee5b46a0e9e9b2c4e.scope: Deactivated successfully.
Dec  2 06:06:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:06:59 np0005542249 python3.9[230197]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:07:00 np0005542249 podman[230288]: 2025-12-02 11:07:00.092892101 +0000 UTC m=+0.062056801 container create 789ec8ce77d2d254df23f4f0567339ae7ad0e89fbec3eb70f327affa3512ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:07:00 np0005542249 systemd[1]: Started libpod-conmon-789ec8ce77d2d254df23f4f0567339ae7ad0e89fbec3eb70f327affa3512ff33.scope.
Dec  2 06:07:00 np0005542249 podman[230288]: 2025-12-02 11:07:00.066539978 +0000 UTC m=+0.035704688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:07:00 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:07:00 np0005542249 podman[230288]: 2025-12-02 11:07:00.221672117 +0000 UTC m=+0.190836887 container init 789ec8ce77d2d254df23f4f0567339ae7ad0e89fbec3eb70f327affa3512ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:07:00 np0005542249 podman[230288]: 2025-12-02 11:07:00.231514064 +0000 UTC m=+0.200678764 container start 789ec8ce77d2d254df23f4f0567339ae7ad0e89fbec3eb70f327affa3512ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  2 06:07:00 np0005542249 podman[230288]: 2025-12-02 11:07:00.236362135 +0000 UTC m=+0.205526835 container attach 789ec8ce77d2d254df23f4f0567339ae7ad0e89fbec3eb70f327affa3512ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_gauss, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:07:00 np0005542249 thirsty_gauss[230328]: 167 167
Dec  2 06:07:00 np0005542249 systemd[1]: libpod-789ec8ce77d2d254df23f4f0567339ae7ad0e89fbec3eb70f327affa3512ff33.scope: Deactivated successfully.
Dec  2 06:07:00 np0005542249 podman[230288]: 2025-12-02 11:07:00.241372161 +0000 UTC m=+0.210536861 container died 789ec8ce77d2d254df23f4f0567339ae7ad0e89fbec3eb70f327affa3512ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:07:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:07:00 np0005542249 systemd[1]: var-lib-containers-storage-overlay-60ef5126cd0be6b18edb4d03b3ce61d17dc578ad548f5ff5912261f7738407fd-merged.mount: Deactivated successfully.
Dec  2 06:07:00 np0005542249 podman[230288]: 2025-12-02 11:07:00.285793483 +0000 UTC m=+0.254958153 container remove 789ec8ce77d2d254df23f4f0567339ae7ad0e89fbec3eb70f327affa3512ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 06:07:00 np0005542249 systemd[1]: libpod-conmon-789ec8ce77d2d254df23f4f0567339ae7ad0e89fbec3eb70f327affa3512ff33.scope: Deactivated successfully.
Dec  2 06:07:00 np0005542249 podman[230454]: 2025-12-02 11:07:00.501782261 +0000 UTC m=+0.050609851 container create 2a288ea634d2000c4666a3888c1e79661e9411fb8be311d4c5efa0baa2defa69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:07:00 np0005542249 systemd[1]: Started libpod-conmon-2a288ea634d2000c4666a3888c1e79661e9411fb8be311d4c5efa0baa2defa69.scope.
Dec  2 06:07:00 np0005542249 podman[230454]: 2025-12-02 11:07:00.479714733 +0000 UTC m=+0.028542363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:07:00 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:07:00 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457517e68d1d88bb5736293cd64bd743ee91bce6047253542bdb2ba677d4f050/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:07:00 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457517e68d1d88bb5736293cd64bd743ee91bce6047253542bdb2ba677d4f050/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:07:00 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457517e68d1d88bb5736293cd64bd743ee91bce6047253542bdb2ba677d4f050/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:07:00 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457517e68d1d88bb5736293cd64bd743ee91bce6047253542bdb2ba677d4f050/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:07:00 np0005542249 podman[230454]: 2025-12-02 11:07:00.593998567 +0000 UTC m=+0.142826247 container init 2a288ea634d2000c4666a3888c1e79661e9411fb8be311d4c5efa0baa2defa69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kare, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Dec  2 06:07:00 np0005542249 podman[230454]: 2025-12-02 11:07:00.609374393 +0000 UTC m=+0.158201983 container start 2a288ea634d2000c4666a3888c1e79661e9411fb8be311d4c5efa0baa2defa69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kare, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:07:00 np0005542249 podman[230454]: 2025-12-02 11:07:00.61367721 +0000 UTC m=+0.162504880 container attach 2a288ea634d2000c4666a3888c1e79661e9411fb8be311d4c5efa0baa2defa69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:07:00 np0005542249 python3.9[230463]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:07:01 np0005542249 python3.9[230630]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:07:01 np0005542249 loving_kare[230474]: {
Dec  2 06:07:01 np0005542249 loving_kare[230474]:    "0": [
Dec  2 06:07:01 np0005542249 loving_kare[230474]:        {
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "devices": [
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "/dev/loop3"
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            ],
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "lv_name": "ceph_lv0",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "lv_size": "21470642176",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "name": "ceph_lv0",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "tags": {
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.cluster_name": "ceph",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.crush_device_class": "",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.encrypted": "0",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.osd_id": "0",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.type": "block",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.vdo": "0"
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            },
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "type": "block",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "vg_name": "ceph_vg0"
Dec  2 06:07:01 np0005542249 loving_kare[230474]:        }
Dec  2 06:07:01 np0005542249 loving_kare[230474]:    ],
Dec  2 06:07:01 np0005542249 loving_kare[230474]:    "1": [
Dec  2 06:07:01 np0005542249 loving_kare[230474]:        {
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "devices": [
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "/dev/loop4"
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            ],
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "lv_name": "ceph_lv1",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "lv_size": "21470642176",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "name": "ceph_lv1",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "tags": {
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.cluster_name": "ceph",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.crush_device_class": "",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.encrypted": "0",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.osd_id": "1",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.type": "block",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.vdo": "0"
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            },
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "type": "block",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "vg_name": "ceph_vg1"
Dec  2 06:07:01 np0005542249 loving_kare[230474]:        }
Dec  2 06:07:01 np0005542249 loving_kare[230474]:    ],
Dec  2 06:07:01 np0005542249 loving_kare[230474]:    "2": [
Dec  2 06:07:01 np0005542249 loving_kare[230474]:        {
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "devices": [
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "/dev/loop5"
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            ],
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "lv_name": "ceph_lv2",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "lv_size": "21470642176",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "name": "ceph_lv2",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "tags": {
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.cluster_name": "ceph",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.crush_device_class": "",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.encrypted": "0",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.osd_id": "2",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.type": "block",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:                "ceph.vdo": "0"
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            },
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "type": "block",
Dec  2 06:07:01 np0005542249 loving_kare[230474]:            "vg_name": "ceph_vg2"
Dec  2 06:07:01 np0005542249 loving_kare[230474]:        }
Dec  2 06:07:01 np0005542249 loving_kare[230474]:    ]
Dec  2 06:07:01 np0005542249 loving_kare[230474]: }
Dec  2 06:07:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:01 np0005542249 systemd[1]: libpod-2a288ea634d2000c4666a3888c1e79661e9411fb8be311d4c5efa0baa2defa69.scope: Deactivated successfully.
Dec  2 06:07:01 np0005542249 podman[230454]: 2025-12-02 11:07:01.526375487 +0000 UTC m=+1.075203087 container died 2a288ea634d2000c4666a3888c1e79661e9411fb8be311d4c5efa0baa2defa69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  2 06:07:01 np0005542249 systemd[1]: var-lib-containers-storage-overlay-457517e68d1d88bb5736293cd64bd743ee91bce6047253542bdb2ba677d4f050-merged.mount: Deactivated successfully.
Dec  2 06:07:01 np0005542249 podman[230454]: 2025-12-02 11:07:01.591980333 +0000 UTC m=+1.140807913 container remove 2a288ea634d2000c4666a3888c1e79661e9411fb8be311d4c5efa0baa2defa69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  2 06:07:01 np0005542249 systemd[1]: libpod-conmon-2a288ea634d2000c4666a3888c1e79661e9411fb8be311d4c5efa0baa2defa69.scope: Deactivated successfully.
Dec  2 06:07:02 np0005542249 python3.9[230797]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:07:02 np0005542249 podman[230918]: 2025-12-02 11:07:02.437030611 +0000 UTC m=+0.063696236 container create a38623237974958ef2870b088a995f68df776c82518a81fd9d853ee5d4817e51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  2 06:07:02 np0005542249 systemd[1]: Started libpod-conmon-a38623237974958ef2870b088a995f68df776c82518a81fd9d853ee5d4817e51.scope.
Dec  2 06:07:02 np0005542249 podman[230918]: 2025-12-02 11:07:02.418725845 +0000 UTC m=+0.045391490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:07:02 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:07:02 np0005542249 podman[230918]: 2025-12-02 11:07:02.544919482 +0000 UTC m=+0.171585127 container init a38623237974958ef2870b088a995f68df776c82518a81fd9d853ee5d4817e51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  2 06:07:02 np0005542249 podman[230918]: 2025-12-02 11:07:02.557196194 +0000 UTC m=+0.183861829 container start a38623237974958ef2870b088a995f68df776c82518a81fd9d853ee5d4817e51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brahmagupta, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  2 06:07:02 np0005542249 podman[230918]: 2025-12-02 11:07:02.561273105 +0000 UTC m=+0.187938770 container attach a38623237974958ef2870b088a995f68df776c82518a81fd9d853ee5d4817e51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brahmagupta, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:07:02 np0005542249 kind_brahmagupta[230983]: 167 167
Dec  2 06:07:02 np0005542249 systemd[1]: libpod-a38623237974958ef2870b088a995f68df776c82518a81fd9d853ee5d4817e51.scope: Deactivated successfully.
Dec  2 06:07:02 np0005542249 podman[230918]: 2025-12-02 11:07:02.565141958 +0000 UTC m=+0.191807573 container died a38623237974958ef2870b088a995f68df776c82518a81fd9d853ee5d4817e51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  2 06:07:02 np0005542249 systemd[1]: var-lib-containers-storage-overlay-de0039c764e4491ff93a925d495db3711391d6d472813c06b04010ed203e2938-merged.mount: Deactivated successfully.
Dec  2 06:07:02 np0005542249 podman[230918]: 2025-12-02 11:07:02.614295359 +0000 UTC m=+0.240960974 container remove a38623237974958ef2870b088a995f68df776c82518a81fd9d853ee5d4817e51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Dec  2 06:07:02 np0005542249 systemd[1]: libpod-conmon-a38623237974958ef2870b088a995f68df776c82518a81fd9d853ee5d4817e51.scope: Deactivated successfully.
Dec  2 06:07:02 np0005542249 podman[231058]: 2025-12-02 11:07:02.818304892 +0000 UTC m=+0.056025537 container create 315d89f15139e1b607ead8b8f5771dcc1195ca7fef969727214b18196721d6ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  2 06:07:02 np0005542249 systemd[1]: Started libpod-conmon-315d89f15139e1b607ead8b8f5771dcc1195ca7fef969727214b18196721d6ee.scope.
Dec  2 06:07:02 np0005542249 python3.9[231052]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:07:02 np0005542249 podman[231058]: 2025-12-02 11:07:02.791107456 +0000 UTC m=+0.028828181 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:07:02 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:07:02 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bae65f3f06cd37e5b53e1e56059bc4b78ab3f49ad05def9474f01782f9b1d9d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:07:02 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bae65f3f06cd37e5b53e1e56059bc4b78ab3f49ad05def9474f01782f9b1d9d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:07:02 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bae65f3f06cd37e5b53e1e56059bc4b78ab3f49ad05def9474f01782f9b1d9d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:07:02 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bae65f3f06cd37e5b53e1e56059bc4b78ab3f49ad05def9474f01782f9b1d9d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:07:02 np0005542249 podman[231058]: 2025-12-02 11:07:02.926207323 +0000 UTC m=+0.163927978 container init 315d89f15139e1b607ead8b8f5771dcc1195ca7fef969727214b18196721d6ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  2 06:07:02 np0005542249 podman[231058]: 2025-12-02 11:07:02.943828201 +0000 UTC m=+0.181548876 container start 315d89f15139e1b607ead8b8f5771dcc1195ca7fef969727214b18196721d6ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mirzakhani, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  2 06:07:02 np0005542249 podman[231058]: 2025-12-02 11:07:02.948774424 +0000 UTC m=+0.186495059 container attach 315d89f15139e1b607ead8b8f5771dcc1195ca7fef969727214b18196721d6ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:07:03 np0005542249 python3.9[231157]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:07:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:03 np0005542249 mystifying_mirzakhani[231075]: {
Dec  2 06:07:03 np0005542249 mystifying_mirzakhani[231075]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:07:03 np0005542249 mystifying_mirzakhani[231075]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:07:03 np0005542249 mystifying_mirzakhani[231075]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:07:03 np0005542249 mystifying_mirzakhani[231075]:        "osd_id": 0,
Dec  2 06:07:03 np0005542249 mystifying_mirzakhani[231075]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:07:03 np0005542249 mystifying_mirzakhani[231075]:        "type": "bluestore"
Dec  2 06:07:03 np0005542249 mystifying_mirzakhani[231075]:    },
Dec  2 06:07:03 np0005542249 mystifying_mirzakhani[231075]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:07:03 np0005542249 mystifying_mirzakhani[231075]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:07:03 np0005542249 mystifying_mirzakhani[231075]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:07:03 np0005542249 mystifying_mirzakhani[231075]:        "osd_id": 2,
Dec  2 06:07:03 np0005542249 mystifying_mirzakhani[231075]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:07:03 np0005542249 mystifying_mirzakhani[231075]:        "type": "bluestore"
Dec  2 06:07:03 np0005542249 mystifying_mirzakhani[231075]:    },
Dec  2 06:07:03 np0005542249 mystifying_mirzakhani[231075]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:07:03 np0005542249 mystifying_mirzakhani[231075]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:07:03 np0005542249 mystifying_mirzakhani[231075]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:07:03 np0005542249 mystifying_mirzakhani[231075]:        "osd_id": 1,
Dec  2 06:07:03 np0005542249 mystifying_mirzakhani[231075]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:07:03 np0005542249 mystifying_mirzakhani[231075]:        "type": "bluestore"
Dec  2 06:07:03 np0005542249 mystifying_mirzakhani[231075]:    }
Dec  2 06:07:03 np0005542249 mystifying_mirzakhani[231075]: }
Dec  2 06:07:04 np0005542249 systemd[1]: libpod-315d89f15139e1b607ead8b8f5771dcc1195ca7fef969727214b18196721d6ee.scope: Deactivated successfully.
Dec  2 06:07:04 np0005542249 podman[231058]: 2025-12-02 11:07:04.02096311 +0000 UTC m=+1.258683755 container died 315d89f15139e1b607ead8b8f5771dcc1195ca7fef969727214b18196721d6ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mirzakhani, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:07:04 np0005542249 systemd[1]: libpod-315d89f15139e1b607ead8b8f5771dcc1195ca7fef969727214b18196721d6ee.scope: Consumed 1.079s CPU time.
Dec  2 06:07:04 np0005542249 systemd[1]: var-lib-containers-storage-overlay-bae65f3f06cd37e5b53e1e56059bc4b78ab3f49ad05def9474f01782f9b1d9d5-merged.mount: Deactivated successfully.
Dec  2 06:07:04 np0005542249 podman[231058]: 2025-12-02 11:07:04.091916912 +0000 UTC m=+1.329637557 container remove 315d89f15139e1b607ead8b8f5771dcc1195ca7fef969727214b18196721d6ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec  2 06:07:04 np0005542249 systemd[1]: libpod-conmon-315d89f15139e1b607ead8b8f5771dcc1195ca7fef969727214b18196721d6ee.scope: Deactivated successfully.
Dec  2 06:07:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:07:04 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:07:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:07:04 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:07:04 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 6fa9c722-42bc-40c0-9734-e3fd6829f537 does not exist
Dec  2 06:07:04 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 2673e043-bbef-4117-bb43-35c6cc27b88d does not exist
Dec  2 06:07:04 np0005542249 python3.9[231374]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:07:04 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:07:04 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:07:05 np0005542249 python3.9[231553]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:07:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:07:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:05 np0005542249 python3.9[231631]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:07:06 np0005542249 python3.9[231783]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:07:07 np0005542249 python3.9[231861]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:07:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:07 np0005542249 python3.9[232013]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 06:07:07 np0005542249 systemd[1]: Reloading.
Dec  2 06:07:07 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:07:07 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:07:09 np0005542249 python3.9[232202]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:07:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:09 np0005542249 python3.9[232280]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:07:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:07:10 np0005542249 python3.9[232432]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:07:11 np0005542249 python3.9[232510]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:07:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:11 np0005542249 python3.9[232662]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 06:07:11 np0005542249 systemd[1]: Reloading.
Dec  2 06:07:12 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:07:12 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:07:12 np0005542249 systemd[1]: Starting Create netns directory...
Dec  2 06:07:12 np0005542249 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  2 06:07:12 np0005542249 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  2 06:07:12 np0005542249 systemd[1]: Finished Create netns directory.
Dec  2 06:07:13 np0005542249 python3.9[232855]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:07:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:14 np0005542249 python3.9[233007]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:07:14 np0005542249 python3.9[233130]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764673633.6800225-437-75973712157108/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:07:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:07:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:15 np0005542249 python3.9[233282]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:07:16 np0005542249 python3.9[233434]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:07:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:17 np0005542249 python3.9[233557]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764673636.2692876-462-28468014577493/.source.json _original_basename=.sozcnf0p follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:07:18 np0005542249 podman[233610]: 2025-12-02 11:07:18.069087738 +0000 UTC m=+0.132571559 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:07:18 np0005542249 python3.9[233736]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:07:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:07:19.817 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:07:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:07:19.818 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:07:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:07:19.818 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:07:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:07:21 np0005542249 python3.9[234163]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec  2 06:07:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:22 np0005542249 python3.9[234315]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  2 06:07:23 np0005542249 python3.9[234467]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  2 06:07:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:23 np0005542249 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec  2 06:07:23 np0005542249 podman[234483]: 2025-12-02 11:07:23.616670842 +0000 UTC m=+0.078014544 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  2 06:07:24 np0005542249 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  2 06:07:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:07:25 np0005542249 python3[234665]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  2 06:07:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:07:26
Dec  2 06:07:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:07:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:07:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['.mgr', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', '.rgw.root', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'vms']
Dec  2 06:07:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:07:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:07:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:07:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:07:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:07:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:07:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:07:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:07:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:07:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:07:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:07:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:07:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:07:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:07:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:07:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:07:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:07:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:27 np0005542249 podman[234678]: 2025-12-02 11:07:27.56243048 +0000 UTC m=+2.084033350 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  2 06:07:27 np0005542249 podman[234734]: 2025-12-02 11:07:27.704988059 +0000 UTC m=+0.046327626 container create 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:07:27 np0005542249 podman[234734]: 2025-12-02 11:07:27.681908764 +0000 UTC m=+0.023248331 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  2 06:07:27 np0005542249 python3[234665]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  2 06:07:28 np0005542249 python3.9[234924]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 06:07:29 np0005542249 python3.9[235078]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:07:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:29 np0005542249 python3.9[235154]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 06:07:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:07:30 np0005542249 python3.9[235305]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764673649.940417-550-105212591298128/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:07:31 np0005542249 python3.9[235381]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 06:07:31 np0005542249 systemd[1]: Reloading.
Dec  2 06:07:31 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:07:31 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:07:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:32 np0005542249 python3.9[235492]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 06:07:32 np0005542249 systemd[1]: Reloading.
Dec  2 06:07:32 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:07:32 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:07:32 np0005542249 systemd[1]: Starting multipathd container...
Dec  2 06:07:32 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:07:32 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb26ad42649584d932fbce8ec5889bfb52368abb708e3f7b7feb373072c4fc1/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  2 06:07:32 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb26ad42649584d932fbce8ec5889bfb52368abb708e3f7b7feb373072c4fc1/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  2 06:07:33 np0005542249 systemd[1]: Started /usr/bin/podman healthcheck run 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193.
Dec  2 06:07:33 np0005542249 podman[235531]: 2025-12-02 11:07:33.036978566 +0000 UTC m=+0.163195060 container init 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Dec  2 06:07:33 np0005542249 multipathd[235547]: + sudo -E kolla_set_configs
Dec  2 06:07:33 np0005542249 podman[235531]: 2025-12-02 11:07:33.069158117 +0000 UTC m=+0.195374571 container start 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd)
Dec  2 06:07:33 np0005542249 podman[235531]: multipathd
Dec  2 06:07:33 np0005542249 systemd[1]: Started multipathd container.
Dec  2 06:07:33 np0005542249 multipathd[235547]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  2 06:07:33 np0005542249 multipathd[235547]: INFO:__main__:Validating config file
Dec  2 06:07:33 np0005542249 multipathd[235547]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  2 06:07:33 np0005542249 multipathd[235547]: INFO:__main__:Writing out command to execute
Dec  2 06:07:33 np0005542249 multipathd[235547]: ++ cat /run_command
Dec  2 06:07:33 np0005542249 multipathd[235547]: + CMD='/usr/sbin/multipathd -d'
Dec  2 06:07:33 np0005542249 multipathd[235547]: + ARGS=
Dec  2 06:07:33 np0005542249 multipathd[235547]: + sudo kolla_copy_cacerts
Dec  2 06:07:33 np0005542249 podman[235553]: 2025-12-02 11:07:33.17120774 +0000 UTC m=+0.082408132 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:07:33 np0005542249 systemd[1]: 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193-60ed5b7beef98132.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 06:07:33 np0005542249 systemd[1]: 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193-60ed5b7beef98132.service: Failed with result 'exit-code'.
Dec  2 06:07:33 np0005542249 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 06:07:33 np0005542249 multipathd[235547]: + [[ ! -n '' ]]
Dec  2 06:07:33 np0005542249 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 06:07:33 np0005542249 multipathd[235547]: + . kolla_extend_start
Dec  2 06:07:33 np0005542249 multipathd[235547]: Running command: '/usr/sbin/multipathd -d'
Dec  2 06:07:33 np0005542249 multipathd[235547]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  2 06:07:33 np0005542249 multipathd[235547]: + umask 0022
Dec  2 06:07:33 np0005542249 multipathd[235547]: + exec /usr/sbin/multipathd -d
Dec  2 06:07:33 np0005542249 multipathd[235547]: 3846.177137 | --------start up--------
Dec  2 06:07:33 np0005542249 multipathd[235547]: 3846.177160 | read /etc/multipath.conf
Dec  2 06:07:33 np0005542249 multipathd[235547]: 3846.182765 | path checkers start up
Dec  2 06:07:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:33 np0005542249 python3.9[235738]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 06:07:34 np0005542249 python3.9[235892]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:07:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:07:35 np0005542249 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  2 06:07:35 np0005542249 systemd[1]: virtqemud.service: Deactivated successfully.
Dec  2 06:07:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:07:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:07:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:07:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:07:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:07:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:07:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:07:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:07:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:07:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:07:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:07:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:07:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:07:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:07:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:07:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:07:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:07:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:07:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:07:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:07:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:07:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:07:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:07:35 np0005542249 python3.9[236059]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 06:07:35 np0005542249 systemd[1]: Stopping multipathd container...
Dec  2 06:07:36 np0005542249 multipathd[235547]: 3849.037467 | exit (signal)
Dec  2 06:07:36 np0005542249 multipathd[235547]: 3849.038538 | --------shut down-------
Dec  2 06:07:36 np0005542249 systemd[1]: libpod-130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193.scope: Deactivated successfully.
Dec  2 06:07:36 np0005542249 podman[236063]: 2025-12-02 11:07:36.104348754 +0000 UTC m=+0.101473408 container died 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  2 06:07:36 np0005542249 systemd[1]: 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193-60ed5b7beef98132.timer: Deactivated successfully.
Dec  2 06:07:36 np0005542249 systemd[1]: Stopped /usr/bin/podman healthcheck run 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193.
Dec  2 06:07:36 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193-userdata-shm.mount: Deactivated successfully.
Dec  2 06:07:36 np0005542249 systemd[1]: var-lib-containers-storage-overlay-4fb26ad42649584d932fbce8ec5889bfb52368abb708e3f7b7feb373072c4fc1-merged.mount: Deactivated successfully.
Dec  2 06:07:36 np0005542249 podman[236063]: 2025-12-02 11:07:36.332931133 +0000 UTC m=+0.330055787 container cleanup 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3)
Dec  2 06:07:36 np0005542249 podman[236063]: multipathd
Dec  2 06:07:36 np0005542249 podman[236093]: multipathd
Dec  2 06:07:36 np0005542249 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec  2 06:07:36 np0005542249 systemd[1]: Stopped multipathd container.
Dec  2 06:07:36 np0005542249 systemd[1]: Starting multipathd container...
Dec  2 06:07:36 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:07:36 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb26ad42649584d932fbce8ec5889bfb52368abb708e3f7b7feb373072c4fc1/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  2 06:07:36 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb26ad42649584d932fbce8ec5889bfb52368abb708e3f7b7feb373072c4fc1/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  2 06:07:36 np0005542249 systemd[1]: Started /usr/bin/podman healthcheck run 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193.
Dec  2 06:07:36 np0005542249 podman[236106]: 2025-12-02 11:07:36.59399366 +0000 UTC m=+0.128442369 container init 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd)
Dec  2 06:07:36 np0005542249 multipathd[236122]: + sudo -E kolla_set_configs
Dec  2 06:07:36 np0005542249 podman[236106]: 2025-12-02 11:07:36.633070578 +0000 UTC m=+0.167519267 container start 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:07:36 np0005542249 podman[236106]: multipathd
Dec  2 06:07:36 np0005542249 systemd[1]: Started multipathd container.
Dec  2 06:07:36 np0005542249 multipathd[236122]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  2 06:07:36 np0005542249 multipathd[236122]: INFO:__main__:Validating config file
Dec  2 06:07:36 np0005542249 multipathd[236122]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  2 06:07:36 np0005542249 multipathd[236122]: INFO:__main__:Writing out command to execute
Dec  2 06:07:36 np0005542249 multipathd[236122]: ++ cat /run_command
Dec  2 06:07:36 np0005542249 multipathd[236122]: + CMD='/usr/sbin/multipathd -d'
Dec  2 06:07:36 np0005542249 multipathd[236122]: + ARGS=
Dec  2 06:07:36 np0005542249 multipathd[236122]: + sudo kolla_copy_cacerts
Dec  2 06:07:36 np0005542249 multipathd[236122]: + [[ ! -n '' ]]
Dec  2 06:07:36 np0005542249 multipathd[236122]: + . kolla_extend_start
Dec  2 06:07:36 np0005542249 multipathd[236122]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  2 06:07:36 np0005542249 multipathd[236122]: Running command: '/usr/sbin/multipathd -d'
Dec  2 06:07:36 np0005542249 multipathd[236122]: + umask 0022
Dec  2 06:07:36 np0005542249 multipathd[236122]: + exec /usr/sbin/multipathd -d
Dec  2 06:07:36 np0005542249 multipathd[236122]: 3849.709578 | --------start up--------
Dec  2 06:07:36 np0005542249 multipathd[236122]: 3849.709601 | read /etc/multipath.conf
Dec  2 06:07:36 np0005542249 multipathd[236122]: 3849.717984 | path checkers start up
Dec  2 06:07:36 np0005542249 podman[236129]: 2025-12-02 11:07:36.744621648 +0000 UTC m=+0.101983712 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  2 06:07:36 np0005542249 systemd[1]: 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193-746b1df9bc05c1b6.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 06:07:36 np0005542249 systemd[1]: 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193-746b1df9bc05c1b6.service: Failed with result 'exit-code'.
Dec  2 06:07:37 np0005542249 python3.9[236313]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:07:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:38 np0005542249 python3.9[236465]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  2 06:07:39 np0005542249 python3.9[236617]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec  2 06:07:39 np0005542249 kernel: Key type psk registered
Dec  2 06:07:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:40 np0005542249 python3.9[236778]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:07:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:07:41 np0005542249 python3.9[236901]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764673659.6767867-630-194225244971501/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:07:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:41 np0005542249 python3.9[237053]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:07:42 np0005542249 python3.9[237205]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 06:07:42 np0005542249 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  2 06:07:42 np0005542249 systemd[1]: Stopped Load Kernel Modules.
Dec  2 06:07:42 np0005542249 systemd[1]: Stopping Load Kernel Modules...
Dec  2 06:07:42 np0005542249 systemd[1]: Starting Load Kernel Modules...
Dec  2 06:07:42 np0005542249 systemd[1]: Finished Load Kernel Modules.
Dec  2 06:07:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:43 np0005542249 python3.9[237361]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 06:07:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:07:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:48 np0005542249 systemd[1]: Reloading.
Dec  2 06:07:49 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:07:49 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:07:49 np0005542249 podman[237368]: 2025-12-02 11:07:49.06492658 +0000 UTC m=+0.140137265 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 06:07:49 np0005542249 systemd[1]: Reloading.
Dec  2 06:07:49 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:07:49 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:07:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:49 np0005542249 systemd-logind[787]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  2 06:07:49 np0005542249 systemd-logind[787]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  2 06:07:49 np0005542249 lvm[237504]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  2 06:07:49 np0005542249 lvm[237504]: VG ceph_vg1 finished
Dec  2 06:07:49 np0005542249 lvm[237503]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  2 06:07:49 np0005542249 lvm[237503]: VG ceph_vg0 finished
Dec  2 06:07:49 np0005542249 lvm[237506]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  2 06:07:49 np0005542249 lvm[237506]: VG ceph_vg2 finished
Dec  2 06:07:50 np0005542249 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  2 06:07:50 np0005542249 systemd[1]: Starting man-db-cache-update.service...
Dec  2 06:07:50 np0005542249 systemd[1]: Reloading.
Dec  2 06:07:50 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:07:50 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:07:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:07:50 np0005542249 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  2 06:07:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:51 np0005542249 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  2 06:07:51 np0005542249 systemd[1]: Finished man-db-cache-update.service.
Dec  2 06:07:51 np0005542249 systemd[1]: man-db-cache-update.service: Consumed 1.956s CPU time.
Dec  2 06:07:51 np0005542249 systemd[1]: run-ra8d591317d8744ff9772f44b14fa3d35.service: Deactivated successfully.
Dec  2 06:07:51 np0005542249 python3.9[238810]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 06:07:52 np0005542249 iscsid[226258]: iscsid shutting down.
Dec  2 06:07:52 np0005542249 systemd[1]: Stopping Open-iSCSI...
Dec  2 06:07:52 np0005542249 systemd[1]: iscsid.service: Deactivated successfully.
Dec  2 06:07:52 np0005542249 systemd[1]: Stopped Open-iSCSI.
Dec  2 06:07:52 np0005542249 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  2 06:07:52 np0005542249 systemd[1]: Starting Open-iSCSI...
Dec  2 06:07:52 np0005542249 systemd[1]: Started Open-iSCSI.
Dec  2 06:07:52 np0005542249 python3.9[239002]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 06:07:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:53 np0005542249 podman[239130]: 2025-12-02 11:07:53.843793082 +0000 UTC m=+0.083354277 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:07:54 np0005542249 python3.9[239175]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:07:55 np0005542249 python3.9[239327]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 06:07:55 np0005542249 systemd[1]: Reloading.
Dec  2 06:07:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:07:55 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:07:55 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:07:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:07:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:07:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:07:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:07:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:07:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:07:56 np0005542249 python3.9[239512]: ansible-ansible.builtin.service_facts Invoked
Dec  2 06:07:56 np0005542249 network[239529]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  2 06:07:56 np0005542249 network[239530]: 'network-scripts' will be removed from distribution in near future.
Dec  2 06:07:56 np0005542249 network[239531]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  2 06:07:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:07:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:08:01 np0005542249 python3.9[239806]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 06:08:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:02 np0005542249 python3.9[239959]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 06:08:03 np0005542249 python3.9[240112]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 06:08:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:03 np0005542249 python3.9[240265]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 06:08:04 np0005542249 python3.9[240472]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 06:08:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:08:05 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:08:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:08:05 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:08:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:08:05 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:08:05 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev e195bf6c-65fd-45c0-a1eb-7535e9a42d44 does not exist
Dec  2 06:08:05 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev a1a22669-399a-4e90-897a-68c038e7f492 does not exist
Dec  2 06:08:05 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev e0e80c92-cb64-4d20-9de7-f73629c4b3e1 does not exist
Dec  2 06:08:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:08:05 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:08:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:08:05 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:08:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:08:05 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:08:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:08:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:05 np0005542249 python3.9[240751]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 06:08:05 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:08:05 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:08:05 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:08:05 np0005542249 podman[240877]: 2025-12-02 11:08:05.796844242 +0000 UTC m=+0.046118830 container create 14cef4f04c37c32d3bc77daa3ddde5552209650195079bc97c6ed743a48f5beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_keller, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Dec  2 06:08:05 np0005542249 systemd[1]: Started libpod-conmon-14cef4f04c37c32d3bc77daa3ddde5552209650195079bc97c6ed743a48f5beb.scope.
Dec  2 06:08:05 np0005542249 podman[240877]: 2025-12-02 11:08:05.773574703 +0000 UTC m=+0.022849311 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:08:05 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:08:05 np0005542249 podman[240877]: 2025-12-02 11:08:05.919336529 +0000 UTC m=+0.168611137 container init 14cef4f04c37c32d3bc77daa3ddde5552209650195079bc97c6ed743a48f5beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_keller, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  2 06:08:05 np0005542249 podman[240877]: 2025-12-02 11:08:05.92973748 +0000 UTC m=+0.179012068 container start 14cef4f04c37c32d3bc77daa3ddde5552209650195079bc97c6ed743a48f5beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_keller, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  2 06:08:05 np0005542249 podman[240877]: 2025-12-02 11:08:05.933907933 +0000 UTC m=+0.183182521 container attach 14cef4f04c37c32d3bc77daa3ddde5552209650195079bc97c6ed743a48f5beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_keller, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  2 06:08:05 np0005542249 elegant_keller[240941]: 167 167
Dec  2 06:08:05 np0005542249 systemd[1]: libpod-14cef4f04c37c32d3bc77daa3ddde5552209650195079bc97c6ed743a48f5beb.scope: Deactivated successfully.
Dec  2 06:08:05 np0005542249 podman[240877]: 2025-12-02 11:08:05.938989761 +0000 UTC m=+0.188264349 container died 14cef4f04c37c32d3bc77daa3ddde5552209650195079bc97c6ed743a48f5beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_keller, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:08:05 np0005542249 systemd[1]: var-lib-containers-storage-overlay-cdc7ba2b92488a17a215b338f4798bbaf07baf117e417682b38a01f48bb9b609-merged.mount: Deactivated successfully.
Dec  2 06:08:05 np0005542249 podman[240877]: 2025-12-02 11:08:05.997553806 +0000 UTC m=+0.246828404 container remove 14cef4f04c37c32d3bc77daa3ddde5552209650195079bc97c6ed743a48f5beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  2 06:08:06 np0005542249 systemd[1]: libpod-conmon-14cef4f04c37c32d3bc77daa3ddde5552209650195079bc97c6ed743a48f5beb.scope: Deactivated successfully.
Dec  2 06:08:06 np0005542249 podman[241036]: 2025-12-02 11:08:06.158969366 +0000 UTC m=+0.044676371 container create 52f6f936ba67f0f58f44939bcebe30c2d9a991ef1a36581d5a572af2a252f80c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pascal, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  2 06:08:06 np0005542249 systemd[1]: Started libpod-conmon-52f6f936ba67f0f58f44939bcebe30c2d9a991ef1a36581d5a572af2a252f80c.scope.
Dec  2 06:08:06 np0005542249 podman[241036]: 2025-12-02 11:08:06.139421006 +0000 UTC m=+0.025128031 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:08:06 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:08:06 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32b3771d587f2defc0b6b58e6f10351721691c2e80f091b5e9e59e2019ab709b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:08:06 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32b3771d587f2defc0b6b58e6f10351721691c2e80f091b5e9e59e2019ab709b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:08:06 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32b3771d587f2defc0b6b58e6f10351721691c2e80f091b5e9e59e2019ab709b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:08:06 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32b3771d587f2defc0b6b58e6f10351721691c2e80f091b5e9e59e2019ab709b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:08:06 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32b3771d587f2defc0b6b58e6f10351721691c2e80f091b5e9e59e2019ab709b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:08:06 np0005542249 podman[241036]: 2025-12-02 11:08:06.277761552 +0000 UTC m=+0.163468607 container init 52f6f936ba67f0f58f44939bcebe30c2d9a991ef1a36581d5a572af2a252f80c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pascal, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:08:06 np0005542249 podman[241036]: 2025-12-02 11:08:06.285624034 +0000 UTC m=+0.171331039 container start 52f6f936ba67f0f58f44939bcebe30c2d9a991ef1a36581d5a572af2a252f80c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pascal, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:08:06 np0005542249 podman[241036]: 2025-12-02 11:08:06.289733065 +0000 UTC m=+0.175440080 container attach 52f6f936ba67f0f58f44939bcebe30c2d9a991ef1a36581d5a572af2a252f80c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pascal, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Dec  2 06:08:06 np0005542249 python3.9[241030]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 06:08:06 np0005542249 podman[241182]: 2025-12-02 11:08:06.978219324 +0000 UTC m=+0.085039513 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2)
Dec  2 06:08:07 np0005542249 python3.9[241232]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 06:08:07 np0005542249 adoring_pascal[241053]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:08:07 np0005542249 adoring_pascal[241053]: --> relative data size: 1.0
Dec  2 06:08:07 np0005542249 adoring_pascal[241053]: --> All data devices are unavailable
Dec  2 06:08:07 np0005542249 systemd[1]: libpod-52f6f936ba67f0f58f44939bcebe30c2d9a991ef1a36581d5a572af2a252f80c.scope: Deactivated successfully.
Dec  2 06:08:07 np0005542249 podman[241036]: 2025-12-02 11:08:07.338039945 +0000 UTC m=+1.223746950 container died 52f6f936ba67f0f58f44939bcebe30c2d9a991ef1a36581d5a572af2a252f80c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pascal, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:08:07 np0005542249 systemd[1]: var-lib-containers-storage-overlay-32b3771d587f2defc0b6b58e6f10351721691c2e80f091b5e9e59e2019ab709b-merged.mount: Deactivated successfully.
Dec  2 06:08:07 np0005542249 podman[241036]: 2025-12-02 11:08:07.393315511 +0000 UTC m=+1.279022506 container remove 52f6f936ba67f0f58f44939bcebe30c2d9a991ef1a36581d5a572af2a252f80c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pascal, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  2 06:08:07 np0005542249 systemd[1]: libpod-conmon-52f6f936ba67f0f58f44939bcebe30c2d9a991ef1a36581d5a572af2a252f80c.scope: Deactivated successfully.
Dec  2 06:08:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:07 np0005542249 podman[241559]: 2025-12-02 11:08:07.993354376 +0000 UTC m=+0.049073570 container create e53c89aad355faac87111ce163a3aa21db5b94bff2ae8bfe74957bfe694dd295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_brown, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  2 06:08:08 np0005542249 systemd[1]: Started libpod-conmon-e53c89aad355faac87111ce163a3aa21db5b94bff2ae8bfe74957bfe694dd295.scope.
Dec  2 06:08:08 np0005542249 python3.9[241541]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:08:08 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:08:08 np0005542249 podman[241559]: 2025-12-02 11:08:07.971089653 +0000 UTC m=+0.026808937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:08:08 np0005542249 podman[241559]: 2025-12-02 11:08:08.07073658 +0000 UTC m=+0.126455834 container init e53c89aad355faac87111ce163a3aa21db5b94bff2ae8bfe74957bfe694dd295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_brown, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  2 06:08:08 np0005542249 podman[241559]: 2025-12-02 11:08:08.07665245 +0000 UTC m=+0.132371654 container start e53c89aad355faac87111ce163a3aa21db5b94bff2ae8bfe74957bfe694dd295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_brown, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  2 06:08:08 np0005542249 podman[241559]: 2025-12-02 11:08:08.08033613 +0000 UTC m=+0.136055384 container attach e53c89aad355faac87111ce163a3aa21db5b94bff2ae8bfe74957bfe694dd295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_brown, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:08:08 np0005542249 cool_brown[241575]: 167 167
Dec  2 06:08:08 np0005542249 systemd[1]: libpod-e53c89aad355faac87111ce163a3aa21db5b94bff2ae8bfe74957bfe694dd295.scope: Deactivated successfully.
Dec  2 06:08:08 np0005542249 podman[241559]: 2025-12-02 11:08:08.082503629 +0000 UTC m=+0.138222883 container died e53c89aad355faac87111ce163a3aa21db5b94bff2ae8bfe74957bfe694dd295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  2 06:08:08 np0005542249 systemd[1]: var-lib-containers-storage-overlay-4d1b601e69ec4c9157658df75e4294b3809e2b7062afce13262559c2d394eefe-merged.mount: Deactivated successfully.
Dec  2 06:08:08 np0005542249 podman[241559]: 2025-12-02 11:08:08.130086818 +0000 UTC m=+0.185806012 container remove e53c89aad355faac87111ce163a3aa21db5b94bff2ae8bfe74957bfe694dd295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_brown, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  2 06:08:08 np0005542249 systemd[1]: libpod-conmon-e53c89aad355faac87111ce163a3aa21db5b94bff2ae8bfe74957bfe694dd295.scope: Deactivated successfully.
Dec  2 06:08:08 np0005542249 podman[241673]: 2025-12-02 11:08:08.31634697 +0000 UTC m=+0.047705452 container create 332a96db1c0075874815924326025c631611f56a07ddfd3e109d9714e94bd8f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  2 06:08:08 np0005542249 systemd[1]: Started libpod-conmon-332a96db1c0075874815924326025c631611f56a07ddfd3e109d9714e94bd8f3.scope.
Dec  2 06:08:08 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:08:08 np0005542249 podman[241673]: 2025-12-02 11:08:08.298799814 +0000 UTC m=+0.030158306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:08:08 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec85df6017d55a0f01f879ab47dcb44bd8ec84538bbaa59fe9fb3c17b50b0b87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:08:08 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec85df6017d55a0f01f879ab47dcb44bd8ec84538bbaa59fe9fb3c17b50b0b87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:08:08 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec85df6017d55a0f01f879ab47dcb44bd8ec84538bbaa59fe9fb3c17b50b0b87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:08:08 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec85df6017d55a0f01f879ab47dcb44bd8ec84538bbaa59fe9fb3c17b50b0b87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:08:08 np0005542249 podman[241673]: 2025-12-02 11:08:08.410498449 +0000 UTC m=+0.141856941 container init 332a96db1c0075874815924326025c631611f56a07ddfd3e109d9714e94bd8f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jennings, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  2 06:08:08 np0005542249 podman[241673]: 2025-12-02 11:08:08.419506592 +0000 UTC m=+0.150865084 container start 332a96db1c0075874815924326025c631611f56a07ddfd3e109d9714e94bd8f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jennings, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:08:08 np0005542249 podman[241673]: 2025-12-02 11:08:08.424431336 +0000 UTC m=+0.155789828 container attach 332a96db1c0075874815924326025c631611f56a07ddfd3e109d9714e94bd8f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jennings, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:08:08 np0005542249 python3.9[241770]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]: {
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:    "0": [
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:        {
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "devices": [
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "/dev/loop3"
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            ],
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "lv_name": "ceph_lv0",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "lv_size": "21470642176",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "name": "ceph_lv0",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "tags": {
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.cluster_name": "ceph",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.crush_device_class": "",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.encrypted": "0",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.osd_id": "0",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.type": "block",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.vdo": "0"
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            },
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "type": "block",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "vg_name": "ceph_vg0"
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:        }
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:    ],
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:    "1": [
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:        {
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "devices": [
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "/dev/loop4"
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            ],
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "lv_name": "ceph_lv1",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "lv_size": "21470642176",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "name": "ceph_lv1",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "tags": {
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.cluster_name": "ceph",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.crush_device_class": "",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.encrypted": "0",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.osd_id": "1",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.type": "block",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.vdo": "0"
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            },
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "type": "block",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "vg_name": "ceph_vg1"
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:        }
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:    ],
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:    "2": [
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:        {
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "devices": [
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "/dev/loop5"
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            ],
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "lv_name": "ceph_lv2",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "lv_size": "21470642176",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "name": "ceph_lv2",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "tags": {
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.cluster_name": "ceph",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.crush_device_class": "",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.encrypted": "0",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.osd_id": "2",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.type": "block",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:                "ceph.vdo": "0"
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            },
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "type": "block",
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:            "vg_name": "ceph_vg2"
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:        }
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]:    ]
Dec  2 06:08:09 np0005542249 mystifying_jennings[241713]: }
Dec  2 06:08:09 np0005542249 systemd[1]: libpod-332a96db1c0075874815924326025c631611f56a07ddfd3e109d9714e94bd8f3.scope: Deactivated successfully.
Dec  2 06:08:09 np0005542249 podman[241673]: 2025-12-02 11:08:09.216341324 +0000 UTC m=+0.947699826 container died 332a96db1c0075874815924326025c631611f56a07ddfd3e109d9714e94bd8f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  2 06:08:09 np0005542249 systemd[1]: var-lib-containers-storage-overlay-ec85df6017d55a0f01f879ab47dcb44bd8ec84538bbaa59fe9fb3c17b50b0b87-merged.mount: Deactivated successfully.
Dec  2 06:08:09 np0005542249 podman[241673]: 2025-12-02 11:08:09.288291602 +0000 UTC m=+1.019650084 container remove 332a96db1c0075874815924326025c631611f56a07ddfd3e109d9714e94bd8f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:08:09 np0005542249 systemd[1]: libpod-conmon-332a96db1c0075874815924326025c631611f56a07ddfd3e109d9714e94bd8f3.scope: Deactivated successfully.
Dec  2 06:08:09 np0005542249 python3.9[241926]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:08:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:09 np0005542249 podman[242230]: 2025-12-02 11:08:09.900364042 +0000 UTC m=+0.038285327 container create 4be3c1c61dc94b2a9bddb7ce6c499fefb19676ca0b5e1fc9361a510e4e0a9ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_carson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec  2 06:08:09 np0005542249 systemd[1]: Started libpod-conmon-4be3c1c61dc94b2a9bddb7ce6c499fefb19676ca0b5e1fc9361a510e4e0a9ba2.scope.
Dec  2 06:08:09 np0005542249 podman[242230]: 2025-12-02 11:08:09.883972268 +0000 UTC m=+0.021893573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:08:09 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:08:10 np0005542249 podman[242230]: 2025-12-02 11:08:10.00002875 +0000 UTC m=+0.137950045 container init 4be3c1c61dc94b2a9bddb7ce6c499fefb19676ca0b5e1fc9361a510e4e0a9ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  2 06:08:10 np0005542249 podman[242230]: 2025-12-02 11:08:10.006872025 +0000 UTC m=+0.144793310 container start 4be3c1c61dc94b2a9bddb7ce6c499fefb19676ca0b5e1fc9361a510e4e0a9ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  2 06:08:10 np0005542249 podman[242230]: 2025-12-02 11:08:10.010682779 +0000 UTC m=+0.148604084 container attach 4be3c1c61dc94b2a9bddb7ce6c499fefb19676ca0b5e1fc9361a510e4e0a9ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_carson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  2 06:08:10 np0005542249 python3.9[242227]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:08:10 np0005542249 zealous_carson[242246]: 167 167
Dec  2 06:08:10 np0005542249 systemd[1]: libpod-4be3c1c61dc94b2a9bddb7ce6c499fefb19676ca0b5e1fc9361a510e4e0a9ba2.scope: Deactivated successfully.
Dec  2 06:08:10 np0005542249 conmon[242246]: conmon 4be3c1c61dc94b2a9bdd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4be3c1c61dc94b2a9bddb7ce6c499fefb19676ca0b5e1fc9361a510e4e0a9ba2.scope/container/memory.events
Dec  2 06:08:10 np0005542249 podman[242230]: 2025-12-02 11:08:10.015389586 +0000 UTC m=+0.153310871 container died 4be3c1c61dc94b2a9bddb7ce6c499fefb19676ca0b5e1fc9361a510e4e0a9ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_carson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:08:10 np0005542249 systemd[1]: var-lib-containers-storage-overlay-b202a5f3df639ea42676317cf84f7766921e81234aa8412af197e6082a334e56-merged.mount: Deactivated successfully.
Dec  2 06:08:10 np0005542249 podman[242230]: 2025-12-02 11:08:10.057045784 +0000 UTC m=+0.194967069 container remove 4be3c1c61dc94b2a9bddb7ce6c499fefb19676ca0b5e1fc9361a510e4e0a9ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:08:10 np0005542249 systemd[1]: libpod-conmon-4be3c1c61dc94b2a9bddb7ce6c499fefb19676ca0b5e1fc9361a510e4e0a9ba2.scope: Deactivated successfully.
Dec  2 06:08:10 np0005542249 podman[242318]: 2025-12-02 11:08:10.218885654 +0000 UTC m=+0.044903086 container create ae9537ae02d77f756812af0edf33e90738e6789546c2c256ffde004ecf5cdfb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_fermat, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:08:10 np0005542249 systemd[1]: Started libpod-conmon-ae9537ae02d77f756812af0edf33e90738e6789546c2c256ffde004ecf5cdfb9.scope.
Dec  2 06:08:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:08:10 np0005542249 podman[242318]: 2025-12-02 11:08:10.197070114 +0000 UTC m=+0.023087566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:08:10 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:08:10 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f7d6c846c8a8f6e263bd4d07ab0c36ce0a47d6ebf8f8126f0c29cc1a35e5a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:08:10 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f7d6c846c8a8f6e263bd4d07ab0c36ce0a47d6ebf8f8126f0c29cc1a35e5a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:08:10 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f7d6c846c8a8f6e263bd4d07ab0c36ce0a47d6ebf8f8126f0c29cc1a35e5a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:08:10 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f7d6c846c8a8f6e263bd4d07ab0c36ce0a47d6ebf8f8126f0c29cc1a35e5a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:08:10 np0005542249 podman[242318]: 2025-12-02 11:08:10.313234619 +0000 UTC m=+0.139252081 container init ae9537ae02d77f756812af0edf33e90738e6789546c2c256ffde004ecf5cdfb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:08:10 np0005542249 podman[242318]: 2025-12-02 11:08:10.322111359 +0000 UTC m=+0.148128791 container start ae9537ae02d77f756812af0edf33e90738e6789546c2c256ffde004ecf5cdfb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  2 06:08:10 np0005542249 podman[242318]: 2025-12-02 11:08:10.326763916 +0000 UTC m=+0.152781348 container attach ae9537ae02d77f756812af0edf33e90738e6789546c2c256ffde004ecf5cdfb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_fermat, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:08:10 np0005542249 python3.9[242443]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:08:11 np0005542249 hungry_fermat[242371]: {
Dec  2 06:08:11 np0005542249 python3.9[242603]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:08:11 np0005542249 hungry_fermat[242371]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:08:11 np0005542249 hungry_fermat[242371]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:08:11 np0005542249 hungry_fermat[242371]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:08:11 np0005542249 hungry_fermat[242371]:        "osd_id": 0,
Dec  2 06:08:11 np0005542249 hungry_fermat[242371]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:08:11 np0005542249 hungry_fermat[242371]:        "type": "bluestore"
Dec  2 06:08:11 np0005542249 hungry_fermat[242371]:    },
Dec  2 06:08:11 np0005542249 hungry_fermat[242371]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:08:11 np0005542249 hungry_fermat[242371]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:08:11 np0005542249 hungry_fermat[242371]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:08:11 np0005542249 hungry_fermat[242371]:        "osd_id": 2,
Dec  2 06:08:11 np0005542249 hungry_fermat[242371]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:08:11 np0005542249 hungry_fermat[242371]:        "type": "bluestore"
Dec  2 06:08:11 np0005542249 hungry_fermat[242371]:    },
Dec  2 06:08:11 np0005542249 hungry_fermat[242371]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:08:11 np0005542249 hungry_fermat[242371]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:08:11 np0005542249 hungry_fermat[242371]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:08:11 np0005542249 hungry_fermat[242371]:        "osd_id": 1,
Dec  2 06:08:11 np0005542249 hungry_fermat[242371]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:08:11 np0005542249 hungry_fermat[242371]:        "type": "bluestore"
Dec  2 06:08:11 np0005542249 hungry_fermat[242371]:    }
Dec  2 06:08:11 np0005542249 hungry_fermat[242371]: }
Dec  2 06:08:11 np0005542249 systemd[1]: libpod-ae9537ae02d77f756812af0edf33e90738e6789546c2c256ffde004ecf5cdfb9.scope: Deactivated successfully.
Dec  2 06:08:11 np0005542249 podman[242628]: 2025-12-02 11:08:11.356453241 +0000 UTC m=+0.023529078 container died ae9537ae02d77f756812af0edf33e90738e6789546c2c256ffde004ecf5cdfb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_fermat, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:08:11 np0005542249 systemd[1]: var-lib-containers-storage-overlay-94f7d6c846c8a8f6e263bd4d07ab0c36ce0a47d6ebf8f8126f0c29cc1a35e5a6-merged.mount: Deactivated successfully.
Dec  2 06:08:11 np0005542249 podman[242628]: 2025-12-02 11:08:11.413913486 +0000 UTC m=+0.080989303 container remove ae9537ae02d77f756812af0edf33e90738e6789546c2c256ffde004ecf5cdfb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  2 06:08:11 np0005542249 systemd[1]: libpod-conmon-ae9537ae02d77f756812af0edf33e90738e6789546c2c256ffde004ecf5cdfb9.scope: Deactivated successfully.
Dec  2 06:08:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:08:11 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:08:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:08:11 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:08:11 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 06e13aae-022d-4bcb-bccc-bfc7712944ce does not exist
Dec  2 06:08:11 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 0a25d9b1-690f-4b34-9a7d-8b13faef7d24 does not exist
Dec  2 06:08:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:11 np0005542249 python3.9[242840]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:08:12 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:08:12 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:08:12 np0005542249 python3.9[242992]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:08:13 np0005542249 python3.9[243144]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:08:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:13 np0005542249 python3.9[243296]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:08:14 np0005542249 python3.9[243448]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:08:15 np0005542249 python3.9[243600]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:08:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:08:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:15 np0005542249 python3.9[243752]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:08:16 np0005542249 python3.9[243904]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:08:17 np0005542249 python3.9[244056]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:08:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:17 np0005542249 python3.9[244208]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:08:18 np0005542249 python3.9[244360]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:08:19 np0005542249 python3.9[244512]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  2 06:08:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:08:19.819 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:08:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:08:19.821 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:08:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:08:19.821 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:08:20 np0005542249 podman[244636]: 2025-12-02 11:08:20.051345684 +0000 UTC m=+0.123588138 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  2 06:08:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:08:20 np0005542249 python3.9[244680]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 06:08:20 np0005542249 systemd[1]: Reloading.
Dec  2 06:08:20 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:08:20 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:08:21 np0005542249 python3.9[244876]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:08:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:22 np0005542249 python3.9[245029]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:08:22 np0005542249 python3.9[245182]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:08:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:23 np0005542249 python3.9[245335]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:08:24 np0005542249 podman[245388]: 2025-12-02 11:08:24.023712358 +0000 UTC m=+0.091943342 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  2 06:08:24 np0005542249 python3.9[245508]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:08:25 np0005542249 python3.9[245661]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:08:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:08:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:25 np0005542249 python3.9[245814]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:08:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:08:26
Dec  2 06:08:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:08:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:08:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'images', 'backups', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'vms']
Dec  2 06:08:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:08:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:08:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:08:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:08:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:08:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:08:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:08:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:08:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:08:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:08:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:08:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:08:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:08:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:08:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:08:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:08:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:08:26 np0005542249 python3.9[245967]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 06:08:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:28 np0005542249 python3.9[246120]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:08:28 np0005542249 python3.9[246272]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:08:29 np0005542249 python3.9[246424]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:08:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:30 np0005542249 python3.9[246576]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:08:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:08:30 np0005542249 python3.9[246728]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:08:31 np0005542249 python3.9[246880]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:08:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:31 np0005542249 python3.9[247032]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:08:32 np0005542249 python3.9[247184]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:08:33 np0005542249 python3.9[247336]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:08:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:33 np0005542249 python3.9[247488]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:08:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:08:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:08:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:08:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:08:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:08:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:08:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:08:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:08:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:08:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:08:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:08:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:08:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:08:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:08:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:08:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:08:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:08:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:08:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:08:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:08:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:08:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:08:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:08:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:08:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:37 np0005542249 podman[247513]: 2025-12-02 11:08:37.991835108 +0000 UTC m=+0.071951729 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  2 06:08:39 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  2 06:08:39 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3285 writes, 14K keys, 3285 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3285 writes, 3285 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1290 writes, 5589 keys, 1290 commit groups, 1.0 writes per commit group, ingest: 8.51 MB, 0.01 MB/s#012Interval WAL: 1290 writes, 1290 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    119.2      0.12              0.04         6    0.020       0      0       0.0       0.0#012  L6      1/0    7.18 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.4    144.5    120.1      0.29              0.13         5    0.058     19K   2185       0.0       0.0#012 Sum      1/0    7.18 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.4    102.2    119.8      0.41              0.17        11    0.037     19K   2185       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6    106.8    109.2      0.24              0.12         6    0.041     12K   1450       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    144.5    120.1      0.29              0.13         5    0.058     19K   2185       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    121.5      0.12              0.04         5    0.023       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.014, interval 0.006#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.05 GB write, 0.04 MB/s write, 0.04 GB read, 0.03 MB/s read, 0.4 seconds#012Interval compaction: 0.03 GB write, 0.04 MB/s write, 0.03 GB read, 0.04 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560e2b4e71f0#2 capacity: 308.00 MB usage: 1.39 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 7.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(82,1.20 MB,0.388505%) FilterBlock(12,63.48 KB,0.0201287%) IndexBlock(12,130.14 KB,0.0412631%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  2 06:08:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:39 np0005542249 python3.9[247662]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec  2 06:08:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:08:40 np0005542249 python3.9[247815]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  2 06:08:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:41 np0005542249 python3.9[247973]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  2 06:08:42 np0005542249 systemd-logind[787]: New session 51 of user zuul.
Dec  2 06:08:43 np0005542249 systemd[1]: Started Session 51 of User zuul.
Dec  2 06:08:43 np0005542249 systemd[1]: session-51.scope: Deactivated successfully.
Dec  2 06:08:43 np0005542249 systemd-logind[787]: Session 51 logged out. Waiting for processes to exit.
Dec  2 06:08:43 np0005542249 systemd-logind[787]: Removed session 51.
Dec  2 06:08:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:43 np0005542249 python3.9[248159]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:08:44 np0005542249 python3.9[248280]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764673723.3291337-1249-153828489438172/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:08:45 np0005542249 python3.9[248430]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:08:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:08:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:45 np0005542249 python3.9[248506]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:08:46 np0005542249 python3.9[248656]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:08:46 np0005542249 python3.9[248777]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764673725.7849154-1249-143098407538927/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:08:47 np0005542249 python3.9[248927]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:08:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:48 np0005542249 python3.9[249048]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764673726.9876003-1249-2859599685067/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:08:48 np0005542249 python3.9[249198]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:08:49 np0005542249 python3.9[249319]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764673728.2297752-1249-195210646955120/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:08:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:49 np0005542249 python3.9[249469]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:08:50.283499) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673730283590, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1498, "num_deletes": 506, "total_data_size": 1933927, "memory_usage": 1972400, "flush_reason": "Manual Compaction"}
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673730295939, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1904822, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13514, "largest_seqno": 15011, "table_properties": {"data_size": 1898240, "index_size": 3273, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 15931, "raw_average_key_size": 18, "raw_value_size": 1883203, "raw_average_value_size": 2142, "num_data_blocks": 150, "num_entries": 879, "num_filter_entries": 879, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764673600, "oldest_key_time": 1764673600, "file_creation_time": 1764673730, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 12520 microseconds, and 5465 cpu microseconds.
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:08:50.296035) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1904822 bytes OK
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:08:50.296057) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:08:50.298740) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:08:50.298769) EVENT_LOG_v1 {"time_micros": 1764673730298761, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:08:50.298792) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1926295, prev total WAL file size 1926295, number of live WAL files 2.
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:08:50.299670) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1860KB)], [32(7351KB)]
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673730299752, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9433215, "oldest_snapshot_seqno": -1}
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3864 keys, 7470559 bytes, temperature: kUnknown
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673730351186, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7470559, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7442589, "index_size": 17236, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9669, "raw_key_size": 94617, "raw_average_key_size": 24, "raw_value_size": 7370425, "raw_average_value_size": 1907, "num_data_blocks": 731, "num_entries": 3864, "num_filter_entries": 3864, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672515, "oldest_key_time": 0, "file_creation_time": 1764673730, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:08:50.351804) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7470559 bytes
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:08:50.353462) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.0 rd, 144.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 7.2 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(8.9) write-amplify(3.9) OK, records in: 4889, records dropped: 1025 output_compression: NoCompression
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:08:50.353485) EVENT_LOG_v1 {"time_micros": 1764673730353474, "job": 14, "event": "compaction_finished", "compaction_time_micros": 51827, "compaction_time_cpu_micros": 17126, "output_level": 6, "num_output_files": 1, "total_output_size": 7470559, "num_input_records": 4889, "num_output_records": 3864, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673730354145, "job": 14, "event": "table_file_deletion", "file_number": 34}
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673730356245, "job": 14, "event": "table_file_deletion", "file_number": 32}
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:08:50.299552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:08:50.356506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:08:50.356513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:08:50.356515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:08:50.356517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:08:50 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:08:50.356520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:08:50 np0005542249 podman[249564]: 2025-12-02 11:08:50.528483016 +0000 UTC m=+0.194074508 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  2 06:08:50 np0005542249 python3.9[249601]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764673729.4566305-1249-86236490776418/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:08:51 np0005542249 python3.9[249767]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:08:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:52 np0005542249 python3.9[249919]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:08:52 np0005542249 python3.9[250071]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 06:08:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:53 np0005542249 python3.9[250223]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:08:54 np0005542249 podman[250318]: 2025-12-02 11:08:54.256084269 +0000 UTC m=+0.061869476 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec  2 06:08:54 np0005542249 python3.9[250366]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764673733.2300184-1356-155241825878513/.source _original_basename=.x2w9ku2p follow=False checksum=1de94b6154917bb02955f22af0db43aee2d94ff8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec  2 06:08:55 np0005542249 python3.9[250518]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 06:08:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:08:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:55 np0005542249 python3.9[250670]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:08:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:08:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:08:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:08:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:08:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:08:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:08:56 np0005542249 python3.9[250791]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764673735.408013-1382-13647594107866/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:08:57 np0005542249 python3.9[250941]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 06:08:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:57 np0005542249 python3.9[251062]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764673736.7027297-1397-111013365784356/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 06:08:58 np0005542249 python3.9[251214]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec  2 06:08:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:08:59 np0005542249 python3.9[251366]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  2 06:09:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:09:00 np0005542249 python3[251518]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec  2 06:09:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:09:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:09:11 np0005542249 podman[251591]: 2025-12-02 11:09:11.097672708 +0000 UTC m=+2.173742337 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  2 06:09:11 np0005542249 podman[251532]: 2025-12-02 11:09:11.165709181 +0000 UTC m=+10.514102928 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  2 06:09:11 np0005542249 podman[251631]: 2025-12-02 11:09:11.383177391 +0000 UTC m=+0.082635879 container create 473189829e7117a1d1eab4a762a38a4795dfa3e1dd1d4d965c36c0097a74ceba (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute_init, managed_by=edpm_ansible, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 06:09:11 np0005542249 podman[251631]: 2025-12-02 11:09:11.344959406 +0000 UTC m=+0.044417954 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  2 06:09:11 np0005542249 python3[251518]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec  2 06:09:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:12 np0005542249 python3.9[251923]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 06:09:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:09:12 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:09:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:09:12 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:09:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:09:12 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:09:12 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev ad3a279f-b064-4976-b719-f413d7a84590 does not exist
Dec  2 06:09:12 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev f3d27165-9500-4d35-beba-d44d0b876dc0 does not exist
Dec  2 06:09:12 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev e3a82b14-8423-4e95-83ce-ec757e39f323 does not exist
Dec  2 06:09:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:09:12 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:09:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:09:12 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:09:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:09:12 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:09:13 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:09:13 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:09:13 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:09:13 np0005542249 python3.9[252225]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec  2 06:09:13 np0005542249 podman[252245]: 2025-12-02 11:09:13.301866619 +0000 UTC m=+0.067820807 container create 0183e15f1948b813773ddd0a3ffb519ff7f5813423f9f39d513564cd0166621e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hellman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:09:13 np0005542249 systemd[1]: Started libpod-conmon-0183e15f1948b813773ddd0a3ffb519ff7f5813423f9f39d513564cd0166621e.scope.
Dec  2 06:09:13 np0005542249 podman[252245]: 2025-12-02 11:09:13.272274178 +0000 UTC m=+0.038228396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:09:13 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:09:13 np0005542249 podman[252245]: 2025-12-02 11:09:13.399305169 +0000 UTC m=+0.165259347 container init 0183e15f1948b813773ddd0a3ffb519ff7f5813423f9f39d513564cd0166621e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:09:13 np0005542249 podman[252245]: 2025-12-02 11:09:13.412801314 +0000 UTC m=+0.178755492 container start 0183e15f1948b813773ddd0a3ffb519ff7f5813423f9f39d513564cd0166621e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:09:13 np0005542249 podman[252245]: 2025-12-02 11:09:13.416851444 +0000 UTC m=+0.182805632 container attach 0183e15f1948b813773ddd0a3ffb519ff7f5813423f9f39d513564cd0166621e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  2 06:09:13 np0005542249 compassionate_hellman[252267]: 167 167
Dec  2 06:09:13 np0005542249 systemd[1]: libpod-0183e15f1948b813773ddd0a3ffb519ff7f5813423f9f39d513564cd0166621e.scope: Deactivated successfully.
Dec  2 06:09:13 np0005542249 podman[252245]: 2025-12-02 11:09:13.420092382 +0000 UTC m=+0.186046550 container died 0183e15f1948b813773ddd0a3ffb519ff7f5813423f9f39d513564cd0166621e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Dec  2 06:09:13 np0005542249 systemd[1]: var-lib-containers-storage-overlay-2e63cf6c52d9b481a700d1447b7a3c1de6b4e26e55dcc8ff99a93494193180b5-merged.mount: Deactivated successfully.
Dec  2 06:09:13 np0005542249 podman[252245]: 2025-12-02 11:09:13.463207659 +0000 UTC m=+0.229161827 container remove 0183e15f1948b813773ddd0a3ffb519ff7f5813423f9f39d513564cd0166621e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:09:13 np0005542249 systemd[1]: libpod-conmon-0183e15f1948b813773ddd0a3ffb519ff7f5813423f9f39d513564cd0166621e.scope: Deactivated successfully.
Dec  2 06:09:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:13 np0005542249 podman[252343]: 2025-12-02 11:09:13.671660755 +0000 UTC m=+0.052743349 container create 98d4e8f12551756cb843eafdbd6829ed79edf1476afa6921b015c7b26c3e3c25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:09:13 np0005542249 systemd[1]: Started libpod-conmon-98d4e8f12551756cb843eafdbd6829ed79edf1476afa6921b015c7b26c3e3c25.scope.
Dec  2 06:09:13 np0005542249 podman[252343]: 2025-12-02 11:09:13.652747033 +0000 UTC m=+0.033829647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:09:13 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:09:13 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f795325f2379149af50c1f91a94d3f924804ed35b53fa2aaf81d74a024cec10f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:09:13 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f795325f2379149af50c1f91a94d3f924804ed35b53fa2aaf81d74a024cec10f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:09:13 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f795325f2379149af50c1f91a94d3f924804ed35b53fa2aaf81d74a024cec10f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:09:13 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f795325f2379149af50c1f91a94d3f924804ed35b53fa2aaf81d74a024cec10f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:09:13 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f795325f2379149af50c1f91a94d3f924804ed35b53fa2aaf81d74a024cec10f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:09:13 np0005542249 podman[252343]: 2025-12-02 11:09:13.775114517 +0000 UTC m=+0.156197141 container init 98d4e8f12551756cb843eafdbd6829ed79edf1476afa6921b015c7b26c3e3c25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pasteur, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:09:13 np0005542249 podman[252343]: 2025-12-02 11:09:13.782832997 +0000 UTC m=+0.163915591 container start 98d4e8f12551756cb843eafdbd6829ed79edf1476afa6921b015c7b26c3e3c25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:09:13 np0005542249 podman[252343]: 2025-12-02 11:09:13.787068112 +0000 UTC m=+0.168150716 container attach 98d4e8f12551756cb843eafdbd6829ed79edf1476afa6921b015c7b26c3e3c25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Dec  2 06:09:14 np0005542249 python3.9[252459]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  2 06:09:14 np0005542249 compassionate_pasteur[252402]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:09:14 np0005542249 compassionate_pasteur[252402]: --> relative data size: 1.0
Dec  2 06:09:14 np0005542249 compassionate_pasteur[252402]: --> All data devices are unavailable
Dec  2 06:09:14 np0005542249 systemd[1]: libpod-98d4e8f12551756cb843eafdbd6829ed79edf1476afa6921b015c7b26c3e3c25.scope: Deactivated successfully.
Dec  2 06:09:14 np0005542249 systemd[1]: libpod-98d4e8f12551756cb843eafdbd6829ed79edf1476afa6921b015c7b26c3e3c25.scope: Consumed 1.041s CPU time.
Dec  2 06:09:14 np0005542249 podman[252343]: 2025-12-02 11:09:14.904860427 +0000 UTC m=+1.285943051 container died 98d4e8f12551756cb843eafdbd6829ed79edf1476afa6921b015c7b26c3e3c25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  2 06:09:14 np0005542249 systemd[1]: var-lib-containers-storage-overlay-f795325f2379149af50c1f91a94d3f924804ed35b53fa2aaf81d74a024cec10f-merged.mount: Deactivated successfully.
Dec  2 06:09:14 np0005542249 podman[252343]: 2025-12-02 11:09:14.998698748 +0000 UTC m=+1.379781352 container remove 98d4e8f12551756cb843eafdbd6829ed79edf1476afa6921b015c7b26c3e3c25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:09:15 np0005542249 systemd[1]: libpod-conmon-98d4e8f12551756cb843eafdbd6829ed79edf1476afa6921b015c7b26c3e3c25.scope: Deactivated successfully.
Dec  2 06:09:15 np0005542249 python3[252628]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec  2 06:09:15 np0005542249 podman[252734]: 2025-12-02 11:09:15.26272584 +0000 UTC m=+0.051855225 container create 5fbc74cebe070d8cc77fb5ba95cab60fe5a6a788996a0004958af316ccf471ad (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:09:15 np0005542249 podman[252734]: 2025-12-02 11:09:15.234360581 +0000 UTC m=+0.023489996 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  2 06:09:15 np0005542249 python3[252628]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Dec  2 06:09:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:15 np0005542249 podman[252928]: 2025-12-02 11:09:15.683057235 +0000 UTC m=+0.042766150 container create 0a80a5e50c6916275d4c326dad0040da8f209e6ca681055dc971959a46859665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_galois, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:09:15 np0005542249 systemd[1]: Started libpod-conmon-0a80a5e50c6916275d4c326dad0040da8f209e6ca681055dc971959a46859665.scope.
Dec  2 06:09:15 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:09:15 np0005542249 podman[252928]: 2025-12-02 11:09:15.664631666 +0000 UTC m=+0.024340611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:09:15 np0005542249 podman[252928]: 2025-12-02 11:09:15.776701741 +0000 UTC m=+0.136410746 container init 0a80a5e50c6916275d4c326dad0040da8f209e6ca681055dc971959a46859665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_galois, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:09:15 np0005542249 podman[252928]: 2025-12-02 11:09:15.785105939 +0000 UTC m=+0.144814854 container start 0a80a5e50c6916275d4c326dad0040da8f209e6ca681055dc971959a46859665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_galois, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  2 06:09:15 np0005542249 podman[252928]: 2025-12-02 11:09:15.788566522 +0000 UTC m=+0.148275537 container attach 0a80a5e50c6916275d4c326dad0040da8f209e6ca681055dc971959a46859665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  2 06:09:15 np0005542249 systemd[1]: libpod-0a80a5e50c6916275d4c326dad0040da8f209e6ca681055dc971959a46859665.scope: Deactivated successfully.
Dec  2 06:09:15 np0005542249 charming_galois[252981]: 167 167
Dec  2 06:09:15 np0005542249 conmon[252981]: conmon 0a80a5e50c6916275d4c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0a80a5e50c6916275d4c326dad0040da8f209e6ca681055dc971959a46859665.scope/container/memory.events
Dec  2 06:09:15 np0005542249 podman[252928]: 2025-12-02 11:09:15.792404666 +0000 UTC m=+0.152113611 container died 0a80a5e50c6916275d4c326dad0040da8f209e6ca681055dc971959a46859665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_galois, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  2 06:09:15 np0005542249 systemd[1]: var-lib-containers-storage-overlay-86603d534a651687fca6bc485eedc4a0a24cfb57b16fbdf4718a07a40ac3abbb-merged.mount: Deactivated successfully.
Dec  2 06:09:15 np0005542249 podman[252928]: 2025-12-02 11:09:15.84017022 +0000 UTC m=+0.199879175 container remove 0a80a5e50c6916275d4c326dad0040da8f209e6ca681055dc971959a46859665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 06:09:15 np0005542249 systemd[1]: libpod-conmon-0a80a5e50c6916275d4c326dad0040da8f209e6ca681055dc971959a46859665.scope: Deactivated successfully.
Dec  2 06:09:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:09:16 np0005542249 podman[253057]: 2025-12-02 11:09:16.022944391 +0000 UTC m=+0.047374594 container create 1a18085506aa89ca1d7dc51be60c538f4d50990ac84b3071312f077e765bf72a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_matsumoto, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  2 06:09:16 np0005542249 systemd[1]: Started libpod-conmon-1a18085506aa89ca1d7dc51be60c538f4d50990ac84b3071312f077e765bf72a.scope.
Dec  2 06:09:16 np0005542249 python3.9[253051]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 06:09:16 np0005542249 podman[253057]: 2025-12-02 11:09:16.000865463 +0000 UTC m=+0.025295706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:09:16 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:09:16 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc95456212abddd34e3ff6e55749c4baad0b9dcdf2bcdcc90760acfa26ed3684/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:09:16 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc95456212abddd34e3ff6e55749c4baad0b9dcdf2bcdcc90760acfa26ed3684/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:09:16 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc95456212abddd34e3ff6e55749c4baad0b9dcdf2bcdcc90760acfa26ed3684/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:09:16 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc95456212abddd34e3ff6e55749c4baad0b9dcdf2bcdcc90760acfa26ed3684/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:09:16 np0005542249 podman[253057]: 2025-12-02 11:09:16.121947512 +0000 UTC m=+0.146377755 container init 1a18085506aa89ca1d7dc51be60c538f4d50990ac84b3071312f077e765bf72a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_matsumoto, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  2 06:09:16 np0005542249 podman[253057]: 2025-12-02 11:09:16.137337239 +0000 UTC m=+0.161767472 container start 1a18085506aa89ca1d7dc51be60c538f4d50990ac84b3071312f077e765bf72a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_matsumoto, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:09:16 np0005542249 podman[253057]: 2025-12-02 11:09:16.14181364 +0000 UTC m=+0.166243893 container attach 1a18085506aa89ca1d7dc51be60c538f4d50990ac84b3071312f077e765bf72a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_matsumoto, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]: {
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:    "0": [
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:        {
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "devices": [
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "/dev/loop3"
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            ],
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "lv_name": "ceph_lv0",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "lv_size": "21470642176",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "name": "ceph_lv0",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "tags": {
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.cluster_name": "ceph",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.crush_device_class": "",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.encrypted": "0",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.osd_id": "0",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.type": "block",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.vdo": "0"
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            },
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "type": "block",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "vg_name": "ceph_vg0"
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:        }
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:    ],
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:    "1": [
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:        {
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "devices": [
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "/dev/loop4"
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            ],
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "lv_name": "ceph_lv1",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "lv_size": "21470642176",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "name": "ceph_lv1",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "tags": {
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.cluster_name": "ceph",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.crush_device_class": "",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.encrypted": "0",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.osd_id": "1",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.type": "block",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.vdo": "0"
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            },
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "type": "block",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "vg_name": "ceph_vg1"
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:        }
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:    ],
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:    "2": [
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:        {
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "devices": [
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "/dev/loop5"
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            ],
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "lv_name": "ceph_lv2",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "lv_size": "21470642176",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "name": "ceph_lv2",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "tags": {
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.cluster_name": "ceph",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.crush_device_class": "",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.encrypted": "0",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.osd_id": "2",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.type": "block",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:                "ceph.vdo": "0"
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            },
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "type": "block",
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:            "vg_name": "ceph_vg2"
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:        }
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]:    ]
Dec  2 06:09:16 np0005542249 festive_matsumoto[253074]: }
Dec  2 06:09:16 np0005542249 python3.9[253232]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:09:16 np0005542249 systemd[1]: libpod-1a18085506aa89ca1d7dc51be60c538f4d50990ac84b3071312f077e765bf72a.scope: Deactivated successfully.
Dec  2 06:09:16 np0005542249 podman[253057]: 2025-12-02 11:09:16.944654825 +0000 UTC m=+0.969085068 container died 1a18085506aa89ca1d7dc51be60c538f4d50990ac84b3071312f077e765bf72a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:09:16 np0005542249 systemd[1]: var-lib-containers-storage-overlay-bc95456212abddd34e3ff6e55749c4baad0b9dcdf2bcdcc90760acfa26ed3684-merged.mount: Deactivated successfully.
Dec  2 06:09:17 np0005542249 podman[253057]: 2025-12-02 11:09:17.030603303 +0000 UTC m=+1.055033516 container remove 1a18085506aa89ca1d7dc51be60c538f4d50990ac84b3071312f077e765bf72a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_matsumoto, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  2 06:09:17 np0005542249 systemd[1]: libpod-conmon-1a18085506aa89ca1d7dc51be60c538f4d50990ac84b3071312f077e765bf72a.scope: Deactivated successfully.
Dec  2 06:09:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:17 np0005542249 python3.9[253499]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764673757.0411758-1489-182577842096270/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 06:09:17 np0005542249 podman[253542]: 2025-12-02 11:09:17.734463488 +0000 UTC m=+0.045275307 container create eaf93b084b23780c3d54dfc7e58fe0a15bff69d4da2ab654737baffa01458bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  2 06:09:17 np0005542249 systemd[1]: Started libpod-conmon-eaf93b084b23780c3d54dfc7e58fe0a15bff69d4da2ab654737baffa01458bc6.scope.
Dec  2 06:09:17 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:09:17 np0005542249 podman[253542]: 2025-12-02 11:09:17.71315031 +0000 UTC m=+0.023962169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:09:17 np0005542249 podman[253542]: 2025-12-02 11:09:17.818486933 +0000 UTC m=+0.129298832 container init eaf93b084b23780c3d54dfc7e58fe0a15bff69d4da2ab654737baffa01458bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:09:17 np0005542249 podman[253542]: 2025-12-02 11:09:17.830956651 +0000 UTC m=+0.141768500 container start eaf93b084b23780c3d54dfc7e58fe0a15bff69d4da2ab654737baffa01458bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:09:17 np0005542249 podman[253542]: 2025-12-02 11:09:17.835445853 +0000 UTC m=+0.146257792 container attach eaf93b084b23780c3d54dfc7e58fe0a15bff69d4da2ab654737baffa01458bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  2 06:09:17 np0005542249 brave_lamport[253582]: 167 167
Dec  2 06:09:17 np0005542249 systemd[1]: libpod-eaf93b084b23780c3d54dfc7e58fe0a15bff69d4da2ab654737baffa01458bc6.scope: Deactivated successfully.
Dec  2 06:09:17 np0005542249 podman[253542]: 2025-12-02 11:09:17.83867144 +0000 UTC m=+0.149483279 container died eaf93b084b23780c3d54dfc7e58fe0a15bff69d4da2ab654737baffa01458bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 06:09:17 np0005542249 systemd[1]: var-lib-containers-storage-overlay-b41880997a5f093ed361ee007778e231e30a50fd6d3fb1fa58382300fd924ee2-merged.mount: Deactivated successfully.
Dec  2 06:09:17 np0005542249 podman[253542]: 2025-12-02 11:09:17.892984611 +0000 UTC m=+0.203796460 container remove eaf93b084b23780c3d54dfc7e58fe0a15bff69d4da2ab654737baffa01458bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lamport, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  2 06:09:17 np0005542249 systemd[1]: libpod-conmon-eaf93b084b23780c3d54dfc7e58fe0a15bff69d4da2ab654737baffa01458bc6.scope: Deactivated successfully.
Dec  2 06:09:18 np0005542249 podman[253659]: 2025-12-02 11:09:18.07643651 +0000 UTC m=+0.043702415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:09:18 np0005542249 python3.9[253653]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 06:09:18 np0005542249 systemd[1]: Reloading.
Dec  2 06:09:18 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:09:18 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:09:18 np0005542249 podman[253659]: 2025-12-02 11:09:18.628316427 +0000 UTC m=+0.595582242 container create 472bdfa7a62e2405241f3a71c4b1f4142b455d7ce6a20ec42a63b48fc1c077c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keldysh, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:09:18 np0005542249 systemd[1]: Started libpod-conmon-472bdfa7a62e2405241f3a71c4b1f4142b455d7ce6a20ec42a63b48fc1c077c7.scope.
Dec  2 06:09:18 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:09:18 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5498babd8e9e0aa5004ce9f7d0013abe5e0512b7d40119ee8a87f6a63035ab7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:09:18 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5498babd8e9e0aa5004ce9f7d0013abe5e0512b7d40119ee8a87f6a63035ab7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:09:18 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5498babd8e9e0aa5004ce9f7d0013abe5e0512b7d40119ee8a87f6a63035ab7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:09:18 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5498babd8e9e0aa5004ce9f7d0013abe5e0512b7d40119ee8a87f6a63035ab7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:09:18 np0005542249 podman[253659]: 2025-12-02 11:09:18.74838642 +0000 UTC m=+0.715652285 container init 472bdfa7a62e2405241f3a71c4b1f4142b455d7ce6a20ec42a63b48fc1c077c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keldysh, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  2 06:09:18 np0005542249 podman[253659]: 2025-12-02 11:09:18.762457311 +0000 UTC m=+0.729723136 container start 472bdfa7a62e2405241f3a71c4b1f4142b455d7ce6a20ec42a63b48fc1c077c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keldysh, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  2 06:09:18 np0005542249 podman[253659]: 2025-12-02 11:09:18.765984827 +0000 UTC m=+0.733250682 container attach 472bdfa7a62e2405241f3a71c4b1f4142b455d7ce6a20ec42a63b48fc1c077c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keldysh, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Dec  2 06:09:19 np0005542249 python3.9[253791]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 06:09:19 np0005542249 systemd[1]: Reloading.
Dec  2 06:09:19 np0005542249 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 06:09:19 np0005542249 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 06:09:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:19 np0005542249 systemd[1]: Starting nova_compute container...
Dec  2 06:09:19 np0005542249 eager_keldysh[253711]: {
Dec  2 06:09:19 np0005542249 eager_keldysh[253711]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:09:19 np0005542249 eager_keldysh[253711]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:09:19 np0005542249 eager_keldysh[253711]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:09:19 np0005542249 eager_keldysh[253711]:        "osd_id": 0,
Dec  2 06:09:19 np0005542249 eager_keldysh[253711]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:09:19 np0005542249 eager_keldysh[253711]:        "type": "bluestore"
Dec  2 06:09:19 np0005542249 eager_keldysh[253711]:    },
Dec  2 06:09:19 np0005542249 eager_keldysh[253711]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:09:19 np0005542249 eager_keldysh[253711]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:09:19 np0005542249 eager_keldysh[253711]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:09:19 np0005542249 eager_keldysh[253711]:        "osd_id": 2,
Dec  2 06:09:19 np0005542249 eager_keldysh[253711]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:09:19 np0005542249 eager_keldysh[253711]:        "type": "bluestore"
Dec  2 06:09:19 np0005542249 eager_keldysh[253711]:    },
Dec  2 06:09:19 np0005542249 eager_keldysh[253711]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:09:19 np0005542249 eager_keldysh[253711]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:09:19 np0005542249 eager_keldysh[253711]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:09:19 np0005542249 eager_keldysh[253711]:        "osd_id": 1,
Dec  2 06:09:19 np0005542249 eager_keldysh[253711]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:09:19 np0005542249 eager_keldysh[253711]:        "type": "bluestore"
Dec  2 06:09:19 np0005542249 eager_keldysh[253711]:    }
Dec  2 06:09:19 np0005542249 eager_keldysh[253711]: }
Dec  2 06:09:19 np0005542249 systemd[1]: libpod-472bdfa7a62e2405241f3a71c4b1f4142b455d7ce6a20ec42a63b48fc1c077c7.scope: Deactivated successfully.
Dec  2 06:09:19 np0005542249 systemd[1]: libpod-472bdfa7a62e2405241f3a71c4b1f4142b455d7ce6a20ec42a63b48fc1c077c7.scope: Consumed 1.008s CPU time.
Dec  2 06:09:19 np0005542249 podman[253659]: 2025-12-02 11:09:19.777555655 +0000 UTC m=+1.744821490 container died 472bdfa7a62e2405241f3a71c4b1f4142b455d7ce6a20ec42a63b48fc1c077c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keldysh, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  2 06:09:19 np0005542249 systemd[1]: var-lib-containers-storage-overlay-5498babd8e9e0aa5004ce9f7d0013abe5e0512b7d40119ee8a87f6a63035ab7c-merged.mount: Deactivated successfully.
Dec  2 06:09:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:09:19.820 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:09:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:09:19.821 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:09:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:09:19.821 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:09:19 np0005542249 podman[253659]: 2025-12-02 11:09:19.858269201 +0000 UTC m=+1.825535026 container remove 472bdfa7a62e2405241f3a71c4b1f4142b455d7ce6a20ec42a63b48fc1c077c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keldysh, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:09:19 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:09:19 np0005542249 systemd[1]: libpod-conmon-472bdfa7a62e2405241f3a71c4b1f4142b455d7ce6a20ec42a63b48fc1c077c7.scope: Deactivated successfully.
Dec  2 06:09:19 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10da5fa7620e3f9c9d84b6427da33e91b9d0c0662d4a06a6e927aec3f6ee065/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  2 06:09:19 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10da5fa7620e3f9c9d84b6427da33e91b9d0c0662d4a06a6e927aec3f6ee065/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  2 06:09:19 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10da5fa7620e3f9c9d84b6427da33e91b9d0c0662d4a06a6e927aec3f6ee065/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  2 06:09:19 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10da5fa7620e3f9c9d84b6427da33e91b9d0c0662d4a06a6e927aec3f6ee065/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  2 06:09:19 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10da5fa7620e3f9c9d84b6427da33e91b9d0c0662d4a06a6e927aec3f6ee065/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  2 06:09:19 np0005542249 podman[253857]: 2025-12-02 11:09:19.888892671 +0000 UTC m=+0.115748776 container init 5fbc74cebe070d8cc77fb5ba95cab60fe5a6a788996a0004958af316ccf471ad (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Dec  2 06:09:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:09:19 np0005542249 podman[253857]: 2025-12-02 11:09:19.900565717 +0000 UTC m=+0.127421832 container start 5fbc74cebe070d8cc77fb5ba95cab60fe5a6a788996a0004958af316ccf471ad (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 06:09:19 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:09:19 np0005542249 podman[253857]: nova_compute
Dec  2 06:09:19 np0005542249 nova_compute[253887]: + sudo -E kolla_set_configs
Dec  2 06:09:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:09:19 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:09:19 np0005542249 systemd[1]: Started nova_compute container.
Dec  2 06:09:19 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev d7471ac0-ece7-4110-bffe-1bc07a9f8ada does not exist
Dec  2 06:09:19 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev fbc9c25c-836d-4eae-aa54-0571b04265c4 does not exist
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Validating config file
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Copying service configuration files
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Deleting /etc/ceph
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Creating directory /etc/ceph
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Setting permission for /etc/ceph
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Writing out command to execute
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  2 06:09:19 np0005542249 nova_compute[253887]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  2 06:09:20 np0005542249 nova_compute[253887]: ++ cat /run_command
Dec  2 06:09:20 np0005542249 nova_compute[253887]: + CMD=nova-compute
Dec  2 06:09:20 np0005542249 nova_compute[253887]: + ARGS=
Dec  2 06:09:20 np0005542249 nova_compute[253887]: + sudo kolla_copy_cacerts
Dec  2 06:09:20 np0005542249 nova_compute[253887]: + [[ ! -n '' ]]
Dec  2 06:09:20 np0005542249 nova_compute[253887]: + . kolla_extend_start
Dec  2 06:09:20 np0005542249 nova_compute[253887]: + echo 'Running command: '\''nova-compute'\'''
Dec  2 06:09:20 np0005542249 nova_compute[253887]: Running command: 'nova-compute'
Dec  2 06:09:20 np0005542249 nova_compute[253887]: + umask 0022
Dec  2 06:09:20 np0005542249 nova_compute[253887]: + exec nova-compute
Dec  2 06:09:20 np0005542249 podman[254072]: 2025-12-02 11:09:20.761176937 +0000 UTC m=+0.093414131 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 06:09:20 np0005542249 python3.9[254112]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 06:09:20 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:09:20 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:09:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:09:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:21 np0005542249 python3.9[254275]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 06:09:22 np0005542249 nova_compute[253887]: 2025-12-02 11:09:22.134 253891 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  2 06:09:22 np0005542249 nova_compute[253887]: 2025-12-02 11:09:22.135 253891 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  2 06:09:22 np0005542249 nova_compute[253887]: 2025-12-02 11:09:22.135 253891 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  2 06:09:22 np0005542249 nova_compute[253887]: 2025-12-02 11:09:22.136 253891 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec  2 06:09:22 np0005542249 nova_compute[253887]: 2025-12-02 11:09:22.278 253891 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:09:22 np0005542249 nova_compute[253887]: 2025-12-02 11:09:22.304 253891 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:09:22 np0005542249 nova_compute[253887]: 2025-12-02 11:09:22.305 253891 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec  2 06:09:22 np0005542249 python3.9[254427]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 06:09:22 np0005542249 nova_compute[253887]: 2025-12-02 11:09:22.953 253891 INFO nova.virt.driver [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.109 253891 INFO nova.compute.provider_config [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.126 253891 DEBUG oslo_concurrency.lockutils [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.127 253891 DEBUG oslo_concurrency.lockutils [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.127 253891 DEBUG oslo_concurrency.lockutils [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.127 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.127 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.128 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.128 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.128 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.128 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.128 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.128 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.128 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.128 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.129 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.129 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.129 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.129 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.129 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.129 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.130 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.130 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.130 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.130 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.130 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.130 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.130 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.131 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.131 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.131 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.131 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.131 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.131 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.132 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.132 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.132 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.132 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.132 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.132 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.132 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.132 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.133 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.133 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.133 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.133 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.133 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.133 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.134 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.134 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.134 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.134 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.134 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.134 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.134 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.135 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.135 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.135 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.135 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.135 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.135 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.135 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.136 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.136 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.136 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.136 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.136 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.136 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.136 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.137 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.137 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.137 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.137 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.137 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.137 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.137 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.138 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.138 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.138 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.138 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.138 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.138 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.139 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.139 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.139 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.139 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.139 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.139 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.139 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.140 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.140 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.140 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.140 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.140 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.140 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.140 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.141 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.141 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.141 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.141 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.141 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.141 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.142 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.142 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.142 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.142 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.142 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.142 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.142 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.143 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.143 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.143 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.143 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.143 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.143 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.143 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.144 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.144 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.144 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.144 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.144 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.144 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.145 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.145 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.145 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.145 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.145 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.146 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.146 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.146 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.146 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.146 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.146 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.147 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.147 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.147 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.147 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.147 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.148 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.148 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.148 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.148 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.148 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.148 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.148 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.149 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.149 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.149 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.149 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.149 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.149 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.150 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.150 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.150 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.150 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.150 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.150 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.151 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.151 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.151 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.151 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.151 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.152 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.152 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.152 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.152 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.152 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.152 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.153 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.153 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.153 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.153 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.153 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.153 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.154 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.154 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.154 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.154 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.154 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.154 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.154 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.155 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.155 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.155 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.155 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.155 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.155 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.156 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.156 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.156 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.156 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.156 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.156 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.157 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.157 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.157 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.157 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.157 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.157 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.157 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.158 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.158 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.158 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.158 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.158 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.158 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.158 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.159 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.159 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.159 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.159 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.159 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.159 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.159 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.160 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.160 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.160 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.160 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.160 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.161 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.161 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.161 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.161 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.161 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.162 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.162 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.162 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.162 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.162 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.162 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.162 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.163 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.163 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.163 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.163 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.163 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.163 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.163 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.164 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.164 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.164 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.164 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.164 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.164 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.164 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.165 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.165 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.165 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.165 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.165 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.165 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.165 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.166 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.166 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.166 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.166 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.166 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.166 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.166 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.167 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.167 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.167 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.167 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.167 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.167 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.167 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.168 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.168 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.168 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.168 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.168 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.168 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.168 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.169 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.169 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.169 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.169 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.169 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.169 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.169 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.170 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.170 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.170 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.170 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.170 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.170 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.170 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.171 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.171 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.171 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.171 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.171 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.171 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.171 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.172 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.172 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.172 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.172 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.172 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.172 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.172 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.173 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.173 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.173 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.173 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.173 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.173 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.173 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.174 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.174 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.174 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.174 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.174 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.174 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.174 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.175 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.175 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.175 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.175 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.175 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.175 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.175 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.176 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.176 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.176 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.176 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.176 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.176 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.176 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.177 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.177 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.177 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.177 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.177 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.177 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.178 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.178 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.178 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.178 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.178 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.178 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.178 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.179 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.179 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.179 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.179 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.179 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.179 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.179 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.180 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.180 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.180 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.180 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.180 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.180 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.181 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.181 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.181 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.181 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.181 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.181 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.182 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.182 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.182 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.182 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.182 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.182 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.182 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.183 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.183 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.183 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.183 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.183 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.183 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.183 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.184 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.184 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.184 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.184 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.184 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.184 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.184 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.185 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.185 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.185 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.185 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.185 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.185 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.186 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.186 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.186 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.186 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.186 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.186 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.186 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.187 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.187 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.187 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.187 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.187 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.187 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.188 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.188 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.188 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.188 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.188 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.188 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.188 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.189 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.189 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.189 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.189 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.189 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.189 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.189 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.190 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.190 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.190 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.190 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.190 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.190 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.190 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.191 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.191 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.191 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.191 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.191 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.191 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.191 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.192 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.192 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.192 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.192 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.192 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.192 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.192 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.192 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.193 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.193 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.193 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.193 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.193 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.193 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.193 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.194 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.194 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.194 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.194 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.194 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.194 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.194 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.195 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.195 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.195 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.195 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.195 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.195 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.195 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.196 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.196 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.196 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.196 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.196 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.196 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.196 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.197 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.197 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.197 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.197 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.197 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.197 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.197 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.198 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.198 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.198 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.198 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.198 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.198 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.198 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.199 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.199 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.199 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.199 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.199 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.199 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.199 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.200 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.200 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.200 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.200 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.200 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.200 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.200 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.201 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.201 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.201 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.201 253891 WARNING oslo_config.cfg [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  2 06:09:23 np0005542249 nova_compute[253887]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  2 06:09:23 np0005542249 nova_compute[253887]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  2 06:09:23 np0005542249 nova_compute[253887]: and ``live_migration_inbound_addr`` respectively.
Dec  2 06:09:23 np0005542249 nova_compute[253887]: ).  Its value may be silently ignored in the future.#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.201 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.201 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.202 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.202 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.202 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.202 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.202 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.202 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.202 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.203 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.203 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.203 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.203 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.203 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.203 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.203 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.204 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.204 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.204 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.rbd_secret_uuid        = 95bc4eaa-1a14-59bf-acf2-4b3da055547d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.204 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.204 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.204 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.204 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.205 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.205 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.205 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.205 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.205 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.205 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.205 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.206 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.206 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.206 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.206 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.206 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.206 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.207 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.207 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.207 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.207 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.207 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.207 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.207 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.208 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.208 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.208 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.208 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.208 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.208 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.208 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.209 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.209 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.209 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.209 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.209 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.209 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.209 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.210 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.210 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.210 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.210 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.210 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.210 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.210 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.211 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.211 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.211 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.211 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.211 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.211 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.211 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.212 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.212 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.212 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.212 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.212 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.212 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.212 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.212 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.213 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.213 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.213 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.213 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.213 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.213 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.214 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.214 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.214 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.214 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.214 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.214 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.214 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.215 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.215 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.215 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.215 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.215 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.215 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.215 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.216 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.216 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.216 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.216 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.216 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.216 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.216 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.217 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.217 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.217 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.217 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.217 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.217 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.217 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.218 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.218 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.218 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.218 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.218 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.218 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.218 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.218 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.219 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.219 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.219 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.219 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.219 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.219 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.219 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.220 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.220 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.220 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.220 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.220 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.220 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.220 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.221 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.221 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.221 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.221 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.221 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.221 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.222 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.222 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.222 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.222 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.222 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.222 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.222 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.223 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.223 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.223 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.223 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.223 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.223 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.223 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.224 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.224 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.224 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.224 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.224 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.224 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.224 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.225 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.225 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.225 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.225 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.225 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.225 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.225 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.226 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.226 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.226 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.226 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.226 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.226 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.226 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.227 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.227 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.227 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.227 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.227 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.227 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.227 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.228 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.228 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.228 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.228 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.228 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.228 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.228 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.229 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.229 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.229 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.229 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.229 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.229 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.229 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.230 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.230 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.230 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.230 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.230 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.230 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.231 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.231 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.231 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.231 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.231 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.231 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.231 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.232 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.232 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.232 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.232 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.232 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.232 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.232 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.232 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.233 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.233 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.233 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.233 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.233 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.233 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.233 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.234 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.234 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.234 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.234 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.234 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.234 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.234 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.235 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.235 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.235 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.235 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.235 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.235 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.235 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.236 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.236 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.236 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.236 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.236 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.236 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.236 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.237 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.237 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.237 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.237 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.237 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.237 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.238 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.238 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.238 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.238 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.238 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.238 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.238 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.238 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.239 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.239 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.239 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.239 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.239 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.239 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.239 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.240 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.240 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.240 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.240 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.240 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.240 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.240 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.241 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.241 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.241 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.241 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.241 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.241 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.241 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.242 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.242 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.242 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.242 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.242 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.242 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.242 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.243 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.243 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.243 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.243 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.243 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.243 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.244 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.244 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.244 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.244 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.244 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.244 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.245 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.245 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.245 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.245 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.245 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.245 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.246 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.246 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.246 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.246 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.247 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.247 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.247 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.247 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.247 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.247 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.247 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.248 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.248 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.248 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.248 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.248 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.248 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.248 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.249 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.249 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.249 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.249 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.249 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.249 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.250 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.250 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.250 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.250 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.250 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.250 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.250 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.251 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.251 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.251 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.251 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.251 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.251 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.251 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.252 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.252 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.252 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.252 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.252 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.252 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.252 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.253 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.253 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.253 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.253 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.253 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.253 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.253 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.253 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.254 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.254 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.254 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.254 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.254 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.254 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.254 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.255 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.255 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.255 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.255 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.255 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.255 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.255 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.256 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.256 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.256 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.256 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.256 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.256 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.256 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.257 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.257 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.257 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.257 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.257 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.257 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.257 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.258 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.258 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.258 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.258 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.258 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.258 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.258 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.259 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.259 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.259 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.259 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.259 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.259 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.259 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.260 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.260 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.260 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.260 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.260 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.260 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.260 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.260 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.261 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.261 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.261 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.261 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.261 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.261 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.261 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.262 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.262 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.262 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.262 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.262 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.262 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.262 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.262 253891 DEBUG oslo_service.service [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.263 253891 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.276 253891 DEBUG nova.virt.libvirt.host [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.276 253891 DEBUG nova.virt.libvirt.host [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.276 253891 DEBUG nova.virt.libvirt.host [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.277 253891 DEBUG nova.virt.libvirt.host [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec  2 06:09:23 np0005542249 systemd[1]: Starting libvirt QEMU daemon...
Dec  2 06:09:23 np0005542249 systemd[1]: Started libvirt QEMU daemon.
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.378 253891 DEBUG nova.virt.libvirt.host [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f822fb063a0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.381 253891 DEBUG nova.virt.libvirt.host [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f822fb063a0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.382 253891 INFO nova.virt.libvirt.driver [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Connection event '1' reason 'None'#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.396 253891 WARNING nova.virt.libvirt.driver [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec  2 06:09:23 np0005542249 nova_compute[253887]: 2025-12-02 11:09:23.397 253891 DEBUG nova.virt.libvirt.volume.mount [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Dec  2 06:09:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:23 np0005542249 python3.9[254603]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec  2 06:09:23 np0005542249 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 2025-12-02 11:09:24.338 253891 INFO nova.virt.libvirt.host [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Libvirt host capabilities <capabilities>
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <host>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <uuid>b5d8029e-bce4-4398-9c24-ad4d219021cb</uuid>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <cpu>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <arch>x86_64</arch>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model>EPYC-Rome-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <vendor>AMD</vendor>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <microcode version='16777317'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <signature family='23' model='49' stepping='0'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature name='x2apic'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature name='tsc-deadline'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature name='osxsave'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature name='hypervisor'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature name='tsc_adjust'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature name='spec-ctrl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature name='stibp'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature name='arch-capabilities'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature name='ssbd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature name='cmp_legacy'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature name='topoext'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature name='virt-ssbd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature name='lbrv'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature name='tsc-scale'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature name='vmcb-clean'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature name='pause-filter'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature name='pfthreshold'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature name='svme-addr-chk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature name='rdctl-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature name='skip-l1dfl-vmentry'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature name='mds-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature name='pschange-mc-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <pages unit='KiB' size='4'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <pages unit='KiB' size='2048'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <pages unit='KiB' size='1048576'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </cpu>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <power_management>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <suspend_mem/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </power_management>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <iommu support='no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <migration_features>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <live/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <uri_transports>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <uri_transport>tcp</uri_transport>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <uri_transport>rdma</uri_transport>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </uri_transports>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </migration_features>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <topology>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <cells num='1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <cell id='0'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:          <memory unit='KiB'>7864320</memory>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:          <pages unit='KiB' size='4'>1966080</pages>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:          <pages unit='KiB' size='2048'>0</pages>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:          <distances>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:            <sibling id='0' value='10'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:          </distances>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:          <cpus num='8'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:          </cpus>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        </cell>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </cells>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </topology>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <cache>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </cache>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <secmodel>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model>selinux</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <doi>0</doi>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </secmodel>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <secmodel>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model>dac</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <doi>0</doi>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </secmodel>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  </host>
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <guest>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <os_type>hvm</os_type>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <arch name='i686'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <wordsize>32</wordsize>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <domain type='qemu'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <domain type='kvm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </arch>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <features>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <pae/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <nonpae/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <acpi default='on' toggle='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <apic default='on' toggle='no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <cpuselection/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <deviceboot/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <disksnapshot default='on' toggle='no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <externalSnapshot/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </features>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  </guest>
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <guest>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <os_type>hvm</os_type>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <arch name='x86_64'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <wordsize>64</wordsize>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <domain type='qemu'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <domain type='kvm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </arch>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <features>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <acpi default='on' toggle='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <apic default='on' toggle='no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <cpuselection/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <deviceboot/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <disksnapshot default='on' toggle='no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <externalSnapshot/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </features>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  </guest>
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 
Dec  2 06:09:24 np0005542249 nova_compute[253887]: </capabilities>
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 2025-12-02 11:09:24.350 253891 DEBUG nova.virt.libvirt.host [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 2025-12-02 11:09:24.385 253891 DEBUG nova.virt.libvirt.host [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  2 06:09:24 np0005542249 nova_compute[253887]: <domainCapabilities>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <path>/usr/libexec/qemu-kvm</path>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <domain>kvm</domain>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <arch>i686</arch>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <vcpu max='240'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <iothreads supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <os supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <enum name='firmware'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <loader supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='type'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>rom</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>pflash</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='readonly'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>yes</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>no</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='secure'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>no</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </loader>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  </os>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <cpu>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <mode name='host-passthrough' supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='hostPassthroughMigratable'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>on</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>off</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </mode>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <mode name='maximum' supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='maximumMigratable'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>on</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>off</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </mode>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <mode name='host-model' supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <vendor>AMD</vendor>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='x2apic'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='tsc-deadline'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='hypervisor'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='tsc_adjust'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='spec-ctrl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='stibp'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='ssbd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='cmp_legacy'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='overflow-recov'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='succor'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='ibrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='amd-ssbd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='virt-ssbd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='lbrv'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='tsc-scale'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='vmcb-clean'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='flushbyasid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='pause-filter'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='pfthreshold'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='svme-addr-chk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='disable' name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </mode>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <mode name='custom' supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-noTSX'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server-v5'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cooperlake'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cooperlake-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cooperlake-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Denverton'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mpx'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Denverton-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mpx'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Denverton-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Denverton-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Dhyana-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Genoa'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amd-psfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='auto-ibrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='no-nested-data-bp'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='null-sel-clr-base'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='stibp-always-on'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Genoa-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amd-psfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='auto-ibrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='no-nested-data-bp'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='null-sel-clr-base'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='stibp-always-on'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Milan'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Milan-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Milan-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amd-psfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='no-nested-data-bp'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='null-sel-clr-base'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='stibp-always-on'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Rome'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Rome-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Rome-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Rome-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='GraniteRapids'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mcdt-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pbrsb-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='prefetchiti'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='GraniteRapids-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mcdt-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pbrsb-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='prefetchiti'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='GraniteRapids-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx10'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx10-128'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx10-256'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx10-512'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mcdt-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pbrsb-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='prefetchiti'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-noTSX'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-noTSX'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v5'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v6'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v7'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='IvyBridge'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='IvyBridge-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='IvyBridge-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='IvyBridge-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='KnightsMill'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-4fmaps'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-4vnniw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512er'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512pf'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='KnightsMill-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-4fmaps'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-4vnniw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512er'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512pf'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Opteron_G4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fma4'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xop'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Opteron_G4-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fma4'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xop'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Opteron_G5'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fma4'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tbm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xop'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Opteron_G5-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fma4'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tbm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xop'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='SapphireRapids'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='SapphireRapids-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='SapphireRapids-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='SapphireRapids-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='SierraForest'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-ne-convert'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cmpccxadd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mcdt-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pbrsb-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='SierraForest-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-ne-convert'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cmpccxadd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mcdt-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pbrsb-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-v5'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Snowridge'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='core-capability'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mpx'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='split-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Snowridge-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='core-capability'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mpx'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='split-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Snowridge-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='core-capability'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='split-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Snowridge-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='core-capability'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='split-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Snowridge-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='athlon'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnow'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnowext'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='athlon-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnow'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnowext'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='core2duo'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='core2duo-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='coreduo'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='coreduo-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='n270'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='n270-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='phenom'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnow'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnowext'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='phenom-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnow'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnowext'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </mode>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  </cpu>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <memoryBacking supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <enum name='sourceType'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <value>file</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <value>anonymous</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <value>memfd</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  </memoryBacking>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <devices>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <disk supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='diskDevice'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>disk</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>cdrom</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>floppy</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>lun</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='bus'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>ide</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>fdc</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>scsi</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>usb</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>sata</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='model'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio-transitional</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio-non-transitional</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </disk>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <graphics supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='type'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>vnc</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>egl-headless</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>dbus</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </graphics>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <video supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='modelType'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>vga</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>cirrus</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>none</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>bochs</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>ramfb</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </video>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <hostdev supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='mode'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>subsystem</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='startupPolicy'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>default</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>mandatory</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>requisite</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>optional</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='subsysType'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>usb</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>pci</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>scsi</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='capsType'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='pciBackend'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </hostdev>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <rng supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='model'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio-transitional</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio-non-transitional</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='backendModel'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>random</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>egd</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>builtin</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </rng>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <filesystem supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='driverType'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>path</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>handle</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtiofs</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </filesystem>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <tpm supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='model'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>tpm-tis</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>tpm-crb</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='backendModel'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>emulator</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>external</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='backendVersion'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>2.0</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </tpm>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <redirdev supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='bus'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>usb</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </redirdev>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <channel supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='type'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>pty</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>unix</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </channel>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <crypto supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='model'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='type'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>qemu</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='backendModel'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>builtin</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </crypto>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <interface supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='backendType'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>default</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>passt</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </interface>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <panic supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='model'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>isa</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>hyperv</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </panic>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <console supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='type'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>null</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>vc</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>pty</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>dev</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>file</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>pipe</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>stdio</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>udp</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>tcp</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>unix</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>qemu-vdagent</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>dbus</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </console>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  </devices>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <features>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <gic supported='no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <vmcoreinfo supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <genid supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <backingStoreInput supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <backup supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <async-teardown supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <ps2 supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <sev supported='no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <sgx supported='no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <hyperv supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='features'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>relaxed</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>vapic</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>spinlocks</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>vpindex</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>runtime</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>synic</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>stimer</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>reset</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>vendor_id</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>frequencies</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>reenlightenment</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>tlbflush</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>ipi</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>avic</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>emsr_bitmap</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>xmm_input</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <defaults>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <spinlocks>4095</spinlocks>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <stimer_direct>on</stimer_direct>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <tlbflush_direct>on</tlbflush_direct>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <tlbflush_extended>on</tlbflush_extended>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </defaults>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </hyperv>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <launchSecurity supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='sectype'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>tdx</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </launchSecurity>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  </features>
Dec  2 06:09:24 np0005542249 nova_compute[253887]: </domainCapabilities>
Dec  2 06:09:24 np0005542249 nova_compute[253887]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 2025-12-02 11:09:24.394 253891 DEBUG nova.virt.libvirt.host [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  2 06:09:24 np0005542249 nova_compute[253887]: <domainCapabilities>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <path>/usr/libexec/qemu-kvm</path>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <domain>kvm</domain>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <arch>i686</arch>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <vcpu max='4096'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <iothreads supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <os supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <enum name='firmware'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <loader supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='type'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>rom</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>pflash</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='readonly'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>yes</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>no</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='secure'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>no</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </loader>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  </os>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <cpu>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <mode name='host-passthrough' supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='hostPassthroughMigratable'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>on</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>off</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </mode>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <mode name='maximum' supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='maximumMigratable'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>on</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>off</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </mode>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <mode name='host-model' supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <vendor>AMD</vendor>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='x2apic'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='tsc-deadline'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='hypervisor'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='tsc_adjust'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='spec-ctrl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='stibp'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='ssbd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='cmp_legacy'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='overflow-recov'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='succor'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='ibrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='amd-ssbd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='virt-ssbd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='lbrv'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='tsc-scale'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='vmcb-clean'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='flushbyasid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='pause-filter'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='pfthreshold'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='svme-addr-chk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='disable' name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </mode>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <mode name='custom' supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-noTSX'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 podman[254817]: 2025-12-02 11:09:24.431515828 +0000 UTC m=+0.081071596 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server-v5'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cooperlake'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cooperlake-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cooperlake-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Denverton'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mpx'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Denverton-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mpx'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Denverton-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Denverton-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Dhyana-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Genoa'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amd-psfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='auto-ibrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='no-nested-data-bp'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='null-sel-clr-base'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='stibp-always-on'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Genoa-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amd-psfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='auto-ibrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='no-nested-data-bp'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='null-sel-clr-base'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='stibp-always-on'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Milan'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Milan-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Milan-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amd-psfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='no-nested-data-bp'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='null-sel-clr-base'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='stibp-always-on'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Rome'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Rome-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Rome-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Rome-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='GraniteRapids'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mcdt-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pbrsb-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='prefetchiti'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='GraniteRapids-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mcdt-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pbrsb-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='prefetchiti'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='GraniteRapids-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx10'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx10-128'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx10-256'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx10-512'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mcdt-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pbrsb-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='prefetchiti'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-noTSX'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-noTSX'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v5'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v6'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v7'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='IvyBridge'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='IvyBridge-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='IvyBridge-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='IvyBridge-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='KnightsMill'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-4fmaps'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-4vnniw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512er'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512pf'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='KnightsMill-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-4fmaps'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-4vnniw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512er'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512pf'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Opteron_G4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fma4'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xop'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Opteron_G4-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fma4'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xop'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Opteron_G5'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fma4'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tbm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xop'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Opteron_G5-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fma4'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tbm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xop'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='SapphireRapids'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='SapphireRapids-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='SapphireRapids-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='SapphireRapids-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='SierraForest'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-ne-convert'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cmpccxadd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mcdt-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pbrsb-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='SierraForest-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-ne-convert'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cmpccxadd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mcdt-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pbrsb-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-v5'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Snowridge'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='core-capability'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mpx'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='split-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Snowridge-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='core-capability'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mpx'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='split-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Snowridge-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='core-capability'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='split-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Snowridge-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='core-capability'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='split-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Snowridge-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='athlon'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnow'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnowext'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='athlon-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnow'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnowext'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='core2duo'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='core2duo-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='coreduo'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='coreduo-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='n270'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='n270-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='phenom'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnow'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnowext'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='phenom-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnow'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnowext'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </mode>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  </cpu>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <memoryBacking supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <enum name='sourceType'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <value>file</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <value>anonymous</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <value>memfd</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  </memoryBacking>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <devices>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <disk supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='diskDevice'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>disk</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>cdrom</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>floppy</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>lun</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='bus'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>fdc</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>scsi</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>usb</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>sata</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='model'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio-transitional</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio-non-transitional</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </disk>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <graphics supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='type'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>vnc</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>egl-headless</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>dbus</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </graphics>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <video supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='modelType'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>vga</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>cirrus</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>none</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>bochs</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>ramfb</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </video>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <hostdev supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='mode'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>subsystem</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='startupPolicy'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>default</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>mandatory</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>requisite</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>optional</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='subsysType'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>usb</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>pci</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>scsi</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='capsType'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='pciBackend'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </hostdev>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <rng supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='model'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio-transitional</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio-non-transitional</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='backendModel'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>random</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>egd</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>builtin</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </rng>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <filesystem supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='driverType'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>path</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>handle</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtiofs</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </filesystem>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <tpm supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='model'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>tpm-tis</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>tpm-crb</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='backendModel'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>emulator</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>external</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='backendVersion'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>2.0</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </tpm>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <redirdev supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='bus'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>usb</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </redirdev>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <channel supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='type'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>pty</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>unix</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </channel>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <crypto supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='model'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='type'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>qemu</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='backendModel'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>builtin</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </crypto>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <interface supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='backendType'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>default</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>passt</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </interface>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <panic supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='model'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>isa</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>hyperv</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </panic>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <console supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='type'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>null</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>vc</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>pty</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>dev</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>file</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>pipe</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>stdio</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>udp</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>tcp</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>unix</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>qemu-vdagent</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>dbus</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </console>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  </devices>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <features>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <gic supported='no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <vmcoreinfo supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <genid supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <backingStoreInput supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <backup supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <async-teardown supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <ps2 supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <sev supported='no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <sgx supported='no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <hyperv supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='features'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>relaxed</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>vapic</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>spinlocks</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>vpindex</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>runtime</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>synic</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>stimer</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>reset</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>vendor_id</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>frequencies</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>reenlightenment</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>tlbflush</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>ipi</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>avic</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>emsr_bitmap</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>xmm_input</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <defaults>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <spinlocks>4095</spinlocks>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <stimer_direct>on</stimer_direct>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <tlbflush_direct>on</tlbflush_direct>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <tlbflush_extended>on</tlbflush_extended>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </defaults>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </hyperv>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <launchSecurity supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='sectype'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>tdx</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </launchSecurity>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  </features>
Dec  2 06:09:24 np0005542249 nova_compute[253887]: </domainCapabilities>
Dec  2 06:09:24 np0005542249 nova_compute[253887]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 2025-12-02 11:09:24.419 253891 DEBUG nova.virt.libvirt.host [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 2025-12-02 11:09:24.427 253891 DEBUG nova.virt.libvirt.host [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec  2 06:09:24 np0005542249 nova_compute[253887]: <domainCapabilities>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <path>/usr/libexec/qemu-kvm</path>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <domain>kvm</domain>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <arch>x86_64</arch>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <vcpu max='240'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <iothreads supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <os supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <enum name='firmware'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <loader supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='type'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>rom</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>pflash</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='readonly'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>yes</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>no</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='secure'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>no</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </loader>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  </os>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <cpu>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <mode name='host-passthrough' supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='hostPassthroughMigratable'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>on</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>off</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </mode>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <mode name='maximum' supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='maximumMigratable'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>on</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>off</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </mode>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <mode name='host-model' supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <vendor>AMD</vendor>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='x2apic'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='tsc-deadline'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='hypervisor'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='tsc_adjust'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='spec-ctrl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='stibp'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='ssbd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='cmp_legacy'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='overflow-recov'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='succor'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='ibrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='amd-ssbd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='virt-ssbd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='lbrv'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='tsc-scale'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='vmcb-clean'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='flushbyasid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='pause-filter'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='pfthreshold'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='svme-addr-chk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='disable' name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </mode>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <mode name='custom' supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-noTSX'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server-v5'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cooperlake'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cooperlake-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cooperlake-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Denverton'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mpx'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Denverton-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mpx'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Denverton-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Denverton-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Dhyana-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Genoa'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amd-psfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='auto-ibrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='no-nested-data-bp'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='null-sel-clr-base'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='stibp-always-on'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Genoa-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amd-psfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='auto-ibrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='no-nested-data-bp'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='null-sel-clr-base'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='stibp-always-on'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Milan'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Milan-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Milan-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amd-psfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='no-nested-data-bp'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='null-sel-clr-base'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='stibp-always-on'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Rome'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Rome-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Rome-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Rome-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='GraniteRapids'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mcdt-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pbrsb-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='prefetchiti'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='GraniteRapids-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mcdt-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pbrsb-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='prefetchiti'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='GraniteRapids-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx10'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx10-128'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx10-256'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx10-512'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mcdt-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pbrsb-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='prefetchiti'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-noTSX'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-noTSX'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v5'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v6'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v7'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='IvyBridge'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='IvyBridge-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='IvyBridge-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='IvyBridge-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='KnightsMill'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-4fmaps'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-4vnniw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512er'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512pf'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='KnightsMill-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-4fmaps'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-4vnniw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512er'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512pf'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Opteron_G4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fma4'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xop'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Opteron_G4-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fma4'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xop'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Opteron_G5'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fma4'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tbm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xop'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Opteron_G5-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fma4'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tbm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xop'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='SapphireRapids'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='SapphireRapids-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='SapphireRapids-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='SapphireRapids-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='SierraForest'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-ne-convert'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cmpccxadd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mcdt-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pbrsb-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='SierraForest-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-ne-convert'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cmpccxadd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mcdt-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pbrsb-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-v5'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Snowridge'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='core-capability'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mpx'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='split-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Snowridge-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='core-capability'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mpx'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='split-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Snowridge-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='core-capability'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='split-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Snowridge-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='core-capability'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='split-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Snowridge-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='athlon'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnow'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnowext'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='athlon-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnow'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnowext'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='core2duo'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='core2duo-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='coreduo'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='coreduo-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='n270'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='n270-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='phenom'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnow'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnowext'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='phenom-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnow'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnowext'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </mode>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  </cpu>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <memoryBacking supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <enum name='sourceType'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <value>file</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <value>anonymous</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <value>memfd</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  </memoryBacking>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <devices>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <disk supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='diskDevice'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>disk</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>cdrom</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>floppy</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>lun</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='bus'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>ide</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>fdc</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>scsi</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>usb</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>sata</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='model'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio-transitional</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio-non-transitional</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </disk>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <graphics supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='type'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>vnc</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>egl-headless</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>dbus</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </graphics>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <video supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='modelType'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>vga</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>cirrus</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>none</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>bochs</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>ramfb</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </video>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <hostdev supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='mode'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>subsystem</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='startupPolicy'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>default</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>mandatory</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>requisite</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>optional</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='subsysType'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>usb</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>pci</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>scsi</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='capsType'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='pciBackend'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </hostdev>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <rng supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='model'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio-transitional</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio-non-transitional</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='backendModel'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>random</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>egd</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>builtin</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </rng>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <filesystem supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='driverType'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>path</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>handle</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtiofs</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </filesystem>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <tpm supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='model'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>tpm-tis</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>tpm-crb</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='backendModel'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>emulator</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>external</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='backendVersion'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>2.0</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </tpm>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <redirdev supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='bus'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>usb</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </redirdev>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <channel supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='type'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>pty</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>unix</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </channel>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <crypto supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='model'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='type'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>qemu</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='backendModel'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>builtin</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </crypto>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <interface supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='backendType'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>default</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>passt</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </interface>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <panic supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='model'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>isa</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>hyperv</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </panic>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <console supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='type'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>null</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>vc</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>pty</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>dev</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>file</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>pipe</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>stdio</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>udp</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>tcp</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>unix</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>qemu-vdagent</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>dbus</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </console>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  </devices>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <features>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <gic supported='no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <vmcoreinfo supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <genid supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <backingStoreInput supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <backup supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <async-teardown supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <ps2 supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <sev supported='no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <sgx supported='no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <hyperv supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='features'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>relaxed</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>vapic</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>spinlocks</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>vpindex</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>runtime</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>synic</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>stimer</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>reset</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>vendor_id</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>frequencies</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>reenlightenment</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>tlbflush</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>ipi</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>avic</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>emsr_bitmap</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>xmm_input</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <defaults>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <spinlocks>4095</spinlocks>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <stimer_direct>on</stimer_direct>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <tlbflush_direct>on</tlbflush_direct>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <tlbflush_extended>on</tlbflush_extended>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </defaults>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </hyperv>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <launchSecurity supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='sectype'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>tdx</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </launchSecurity>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  </features>
Dec  2 06:09:24 np0005542249 nova_compute[253887]: </domainCapabilities>
Dec  2 06:09:24 np0005542249 nova_compute[253887]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 2025-12-02 11:09:24.489 253891 DEBUG nova.virt.libvirt.host [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec  2 06:09:24 np0005542249 nova_compute[253887]: <domainCapabilities>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <path>/usr/libexec/qemu-kvm</path>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <domain>kvm</domain>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <arch>x86_64</arch>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <vcpu max='4096'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <iothreads supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <os supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <enum name='firmware'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <value>efi</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <loader supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='type'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>rom</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>pflash</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='readonly'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>yes</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>no</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='secure'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>yes</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>no</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </loader>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  </os>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <cpu>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <mode name='host-passthrough' supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='hostPassthroughMigratable'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>on</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>off</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </mode>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <mode name='maximum' supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='maximumMigratable'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>on</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>off</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </mode>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <mode name='host-model' supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <vendor>AMD</vendor>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='x2apic'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='tsc-deadline'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='hypervisor'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='tsc_adjust'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='spec-ctrl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='stibp'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='ssbd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='cmp_legacy'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='overflow-recov'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='succor'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='ibrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='amd-ssbd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='virt-ssbd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='lbrv'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='tsc-scale'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='vmcb-clean'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='flushbyasid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='pause-filter'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='pfthreshold'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='svme-addr-chk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <feature policy='disable' name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </mode>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <mode name='custom' supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-noTSX'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Broadwell-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cascadelake-Server-v5'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cooperlake'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cooperlake-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Cooperlake-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Denverton'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mpx'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Denverton-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mpx'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Denverton-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Denverton-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Dhyana-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Genoa'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amd-psfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='auto-ibrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='no-nested-data-bp'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='null-sel-clr-base'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='stibp-always-on'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Genoa-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amd-psfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='auto-ibrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='no-nested-data-bp'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='null-sel-clr-base'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='stibp-always-on'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Milan'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Milan-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Milan-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amd-psfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='no-nested-data-bp'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='null-sel-clr-base'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='stibp-always-on'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Rome'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Rome-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Rome-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-Rome-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='EPYC-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='GraniteRapids'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mcdt-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pbrsb-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='prefetchiti'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='GraniteRapids-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mcdt-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pbrsb-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='prefetchiti'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='GraniteRapids-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx10'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx10-128'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx10-256'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx10-512'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mcdt-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pbrsb-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='prefetchiti'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-noTSX'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Haswell-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-noTSX'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v5'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v6'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Icelake-Server-v7'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='IvyBridge'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='IvyBridge-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='IvyBridge-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='IvyBridge-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='KnightsMill'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-4fmaps'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-4vnniw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512er'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512pf'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='KnightsMill-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-4fmaps'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-4vnniw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512er'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512pf'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Opteron_G4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fma4'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xop'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Opteron_G4-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fma4'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xop'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Opteron_G5'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fma4'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tbm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xop'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Opteron_G5-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fma4'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tbm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xop'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='SapphireRapids'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='SapphireRapids-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='SapphireRapids-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='SapphireRapids-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='amx-tile'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-bf16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-fp16'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bitalg'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrc'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fzrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='la57'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='taa-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xfd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='SierraForest'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-ne-convert'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cmpccxadd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mcdt-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pbrsb-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='SierraForest-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-ifma'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-ne-convert'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx-vnni-int8'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cmpccxadd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fbsdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='fsrs'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ibrs-all'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mcdt-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pbrsb-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='psdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='serialize'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vaes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Client-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='hle'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='rtm'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Skylake-Server-v5'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512bw'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512cd'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512dq'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512f'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='avx512vl'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='invpcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pcid'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='pku'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Snowridge'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='core-capability'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mpx'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='split-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Snowridge-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='core-capability'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='mpx'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='split-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Snowridge-v2'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='core-capability'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='split-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Snowridge-v3'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='core-capability'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='split-lock-detect'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='Snowridge-v4'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='cldemote'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='erms'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='gfni'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdir64b'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='movdiri'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='xsaves'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='athlon'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnow'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnowext'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='athlon-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnow'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnowext'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='core2duo'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='core2duo-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='coreduo'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='coreduo-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='n270'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='n270-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='ss'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='phenom'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnow'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnowext'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <blockers model='phenom-v1'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnow'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <feature name='3dnowext'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </blockers>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </mode>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  </cpu>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <memoryBacking supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <enum name='sourceType'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <value>file</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <value>anonymous</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <value>memfd</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  </memoryBacking>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <devices>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <disk supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='diskDevice'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>disk</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>cdrom</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>floppy</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>lun</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='bus'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>fdc</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>scsi</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>usb</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>sata</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='model'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio-transitional</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio-non-transitional</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </disk>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <graphics supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='type'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>vnc</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>egl-headless</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>dbus</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </graphics>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <video supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='modelType'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>vga</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>cirrus</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>none</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>bochs</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>ramfb</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </video>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <hostdev supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='mode'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>subsystem</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='startupPolicy'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>default</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>mandatory</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>requisite</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>optional</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='subsysType'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>usb</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>pci</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>scsi</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='capsType'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='pciBackend'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </hostdev>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <rng supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='model'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio-transitional</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtio-non-transitional</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='backendModel'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>random</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>egd</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>builtin</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </rng>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <filesystem supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='driverType'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>path</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>handle</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>virtiofs</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </filesystem>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <tpm supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='model'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>tpm-tis</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>tpm-crb</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='backendModel'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>emulator</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>external</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='backendVersion'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>2.0</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </tpm>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <redirdev supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='bus'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>usb</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </redirdev>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <channel supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='type'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>pty</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>unix</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </channel>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <crypto supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='model'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='type'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>qemu</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='backendModel'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>builtin</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </crypto>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <interface supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='backendType'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>default</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>passt</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </interface>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <panic supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='model'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>isa</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>hyperv</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </panic>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <console supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='type'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>null</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>vc</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>pty</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>dev</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>file</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>pipe</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>stdio</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>udp</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>tcp</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>unix</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>qemu-vdagent</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>dbus</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </console>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  </devices>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  <features>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <gic supported='no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <vmcoreinfo supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <genid supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <backingStoreInput supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <backup supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <async-teardown supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <ps2 supported='yes'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <sev supported='no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <sgx supported='no'/>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <hyperv supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='features'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>relaxed</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>vapic</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>spinlocks</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>vpindex</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>runtime</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>synic</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>stimer</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>reset</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>vendor_id</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>frequencies</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>reenlightenment</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>tlbflush</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>ipi</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>avic</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>emsr_bitmap</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>xmm_input</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <defaults>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <spinlocks>4095</spinlocks>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <stimer_direct>on</stimer_direct>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <tlbflush_direct>on</tlbflush_direct>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <tlbflush_extended>on</tlbflush_extended>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </defaults>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </hyperv>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    <launchSecurity supported='yes'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      <enum name='sectype'>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:        <value>tdx</value>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:      </enum>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:    </launchSecurity>
Dec  2 06:09:24 np0005542249 nova_compute[253887]:  </features>
Dec  2 06:09:24 np0005542249 nova_compute[253887]: </domainCapabilities>
Dec  2 06:09:24 np0005542249 nova_compute[253887]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 2025-12-02 11:09:24.544 253891 DEBUG nova.virt.libvirt.host [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 2025-12-02 11:09:24.545 253891 DEBUG nova.virt.libvirt.host [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 2025-12-02 11:09:24.545 253891 DEBUG nova.virt.libvirt.host [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 2025-12-02 11:09:24.545 253891 INFO nova.virt.libvirt.host [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Secure Boot support detected#033[00m
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 2025-12-02 11:09:24.547 253891 INFO nova.virt.libvirt.driver [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 2025-12-02 11:09:24.547 253891 INFO nova.virt.libvirt.driver [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 2025-12-02 11:09:24.558 253891 DEBUG nova.virt.libvirt.driver [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 2025-12-02 11:09:24.594 253891 INFO nova.virt.node [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Determined node identity 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 from /var/lib/nova/compute_id#033[00m
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 2025-12-02 11:09:24.614 253891 WARNING nova.compute.manager [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Compute nodes ['02b9b0a3-ac9d-4426-baf4-5ebd782a4062'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 2025-12-02 11:09:24.667 253891 INFO nova.compute.manager [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Dec  2 06:09:24 np0005542249 python3.9[254819]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 2025-12-02 11:09:24.708 253891 WARNING nova.compute.manager [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 2025-12-02 11:09:24.708 253891 DEBUG oslo_concurrency.lockutils [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 2025-12-02 11:09:24.708 253891 DEBUG oslo_concurrency.lockutils [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 2025-12-02 11:09:24.708 253891 DEBUG oslo_concurrency.lockutils [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 2025-12-02 11:09:24.709 253891 DEBUG nova.compute.resource_tracker [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 2025-12-02 11:09:24.709 253891 DEBUG oslo_concurrency.processutils [None req-e942a977-37f7-4cd3-bad0-2571d0ca1f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
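The resource-tracker audit above shells out to `ceph df --format=json` to read pool capacity for the RBD image backend. A sketch of parsing that output, assuming the `stats` field names of modern `ceph df` JSON; the byte counts below are made-up placeholders chosen to resemble the 60 GiB cluster in this log, not real output from this host:

```python
import json

# Illustrative payload in the shape of `ceph df --format=json` (numbers invented).
sample = json.dumps({
    "stats": {
        "total_bytes": 64424509440,         # 60 GiB raw
        "total_avail_bytes": 64269516800,
        "total_used_raw_bytes": 155189248,  # ~148 MiB used
    }
})

def parse_ceph_capacity(raw):
    """Return (total_gib, free_gib) from `ceph df --format=json` output."""
    stats = json.loads(raw)["stats"]
    gib = 1024 ** 3
    return stats["total_bytes"] / gib, stats["total_avail_bytes"] / gib

total_gib, free_gib = parse_ceph_capacity(sample)
```

Nova feeds these figures into the resource tracker so the scheduler sees shared Ceph capacity rather than local disk.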
Dec  2 06:09:24 np0005542249 systemd[1]: Stopping nova_compute container...
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 2025-12-02 11:09:24.788 253891 DEBUG oslo_concurrency.lockutils [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 2025-12-02 11:09:24.788 253891 DEBUG oslo_concurrency.lockutils [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:09:24 np0005542249 nova_compute[253887]: 2025-12-02 11:09:24.788 253891 DEBUG oslo_concurrency.lockutils [None req-e7ce24fe-5962-4f0c-a63f-d4e5a16b68c6 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:09:25 np0005542249 virtqemud[254597]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec  2 06:09:25 np0005542249 virtqemud[254597]: hostname: compute-0
Dec  2 06:09:25 np0005542249 virtqemud[254597]: End of file while reading data: Input/output error
Dec  2 06:09:25 np0005542249 systemd[1]: libpod-5fbc74cebe070d8cc77fb5ba95cab60fe5a6a788996a0004958af316ccf471ad.scope: Deactivated successfully.
Dec  2 06:09:25 np0005542249 systemd[1]: libpod-5fbc74cebe070d8cc77fb5ba95cab60fe5a6a788996a0004958af316ccf471ad.scope: Consumed 3.334s CPU time.
Dec  2 06:09:25 np0005542249 podman[254845]: 2025-12-02 11:09:25.239131363 +0000 UTC m=+0.491924715 container died 5fbc74cebe070d8cc77fb5ba95cab60fe5a6a788996a0004958af316ccf471ad (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2)
Dec  2 06:09:25 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5fbc74cebe070d8cc77fb5ba95cab60fe5a6a788996a0004958af316ccf471ad-userdata-shm.mount: Deactivated successfully.
Dec  2 06:09:25 np0005542249 systemd[1]: var-lib-containers-storage-overlay-f10da5fa7620e3f9c9d84b6427da33e91b9d0c0662d4a06a6e927aec3f6ee065-merged.mount: Deactivated successfully.
Dec  2 06:09:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:25 np0005542249 podman[254845]: 2025-12-02 11:09:25.779694944 +0000 UTC m=+1.032488286 container cleanup 5fbc74cebe070d8cc77fb5ba95cab60fe5a6a788996a0004958af316ccf471ad (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  2 06:09:25 np0005542249 podman[254845]: nova_compute
Dec  2 06:09:25 np0005542249 podman[254872]: nova_compute
Dec  2 06:09:25 np0005542249 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec  2 06:09:25 np0005542249 systemd[1]: Stopped nova_compute container.
Dec  2 06:09:25 np0005542249 systemd[1]: Starting nova_compute container...
Dec  2 06:09:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:09:25 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:09:25 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10da5fa7620e3f9c9d84b6427da33e91b9d0c0662d4a06a6e927aec3f6ee065/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  2 06:09:25 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10da5fa7620e3f9c9d84b6427da33e91b9d0c0662d4a06a6e927aec3f6ee065/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  2 06:09:25 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10da5fa7620e3f9c9d84b6427da33e91b9d0c0662d4a06a6e927aec3f6ee065/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  2 06:09:25 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10da5fa7620e3f9c9d84b6427da33e91b9d0c0662d4a06a6e927aec3f6ee065/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  2 06:09:25 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10da5fa7620e3f9c9d84b6427da33e91b9d0c0662d4a06a6e927aec3f6ee065/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  2 06:09:26 np0005542249 podman[254885]: 2025-12-02 11:09:26.010720942 +0000 UTC m=+0.128807780 container init 5fbc74cebe070d8cc77fb5ba95cab60fe5a6a788996a0004958af316ccf471ad (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 06:09:26 np0005542249 podman[254885]: 2025-12-02 11:09:26.025144162 +0000 UTC m=+0.143230950 container start 5fbc74cebe070d8cc77fb5ba95cab60fe5a6a788996a0004958af316ccf471ad (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  2 06:09:26 np0005542249 podman[254885]: nova_compute
Dec  2 06:09:26 np0005542249 nova_compute[254900]: + sudo -E kolla_set_configs
Dec  2 06:09:26 np0005542249 systemd[1]: Started nova_compute container.
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Validating config file
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Copying service configuration files
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Deleting /etc/ceph
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Creating directory /etc/ceph
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Setting permission for /etc/ceph
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Writing out command to execute
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  2 06:09:26 np0005542249 nova_compute[254900]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  2 06:09:26 np0005542249 nova_compute[254900]: ++ cat /run_command
Dec  2 06:09:26 np0005542249 nova_compute[254900]: + CMD=nova-compute
Dec  2 06:09:26 np0005542249 nova_compute[254900]: + ARGS=
Dec  2 06:09:26 np0005542249 nova_compute[254900]: + sudo kolla_copy_cacerts
Dec  2 06:09:26 np0005542249 nova_compute[254900]: + [[ ! -n '' ]]
Dec  2 06:09:26 np0005542249 nova_compute[254900]: + . kolla_extend_start
Dec  2 06:09:26 np0005542249 nova_compute[254900]: Running command: 'nova-compute'
Dec  2 06:09:26 np0005542249 nova_compute[254900]: + echo 'Running command: '\''nova-compute'\'''
Dec  2 06:09:26 np0005542249 nova_compute[254900]: + umask 0022
Dec  2 06:09:26 np0005542249 nova_compute[254900]: + exec nova-compute
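The `+ ...` trace lines above are kolla's container entrypoint running with `set -x`: copy configs, copy CA certs, read the command from `/run_command`, then `exec` it. A reconstruction of that flow, with the kolla helpers stubbed as no-op shell functions so the sketch runs anywhere (the real `kolla_set_configs` and `kolla_copy_cacerts` do the file copying logged above):

```shell
#!/bin/bash
# Sketch of the kolla_start flow traced in the log. The two helpers are
# stubs standing in for the real kolla tools, which this sketch cannot run.
set -o errexit

kolla_set_configs() { echo "INFO:__main__:Copying service configuration files"; }
kolla_copy_cacerts() { :; }

kolla_set_configs                 # real entrypoint: sudo -E kolla_set_configs
CMD=$(echo "nova-compute")        # real entrypoint: CMD=$(cat /run_command)
ARGS=""
kolla_copy_cacerts
echo "Running command: '${CMD}'"
umask 0022
# The real script ends with: exec ${CMD} ${ARGS}
# so nova-compute replaces the shell as the container's main process.
```

Because of the final `exec`, systemd and podman track nova-compute itself as the container payload, which is why stopping `edpm_nova_compute.service` stops the nova-compute process directly.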
Dec  2 06:09:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:09:26
Dec  2 06:09:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:09:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:09:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'backups', '.mgr', 'vms', 'images', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'volumes']
Dec  2 06:09:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:09:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:09:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:09:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:09:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:09:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:09:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:09:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:09:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:09:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:09:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:09:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:09:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:09:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:09:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:09:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:09:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:09:26 np0005542249 python3.9[255063]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None

Dec  2 06:09:27 np0005542249 systemd[1]: Started libpod-conmon-473189829e7117a1d1eab4a762a38a4795dfa3e1dd1d4d965c36c0097a74ceba.scope.
Dec  2 06:09:27 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:09:27 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60b230dd81e3ed83abbc815ef4ec48328683a4b0615bf8221d0082ca428e252/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec  2 06:09:27 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60b230dd81e3ed83abbc815ef4ec48328683a4b0615bf8221d0082ca428e252/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  2 06:09:27 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60b230dd81e3ed83abbc815ef4ec48328683a4b0615bf8221d0082ca428e252/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec  2 06:09:27 np0005542249 podman[255088]: 2025-12-02 11:09:27.062274563 +0000 UTC m=+0.115161840 container init 473189829e7117a1d1eab4a762a38a4795dfa3e1dd1d4d965c36c0097a74ceba (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:09:27 np0005542249 podman[255088]: 2025-12-02 11:09:27.071792921 +0000 UTC m=+0.124680178 container start 473189829e7117a1d1eab4a762a38a4795dfa3e1dd1d4d965c36c0097a74ceba (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  2 06:09:27 np0005542249 python3.9[255063]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec  2 06:09:27 np0005542249 nova_compute_init[255110]: INFO:nova_statedir:Applying nova statedir ownership
Dec  2 06:09:27 np0005542249 nova_compute_init[255110]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec  2 06:09:27 np0005542249 nova_compute_init[255110]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec  2 06:09:27 np0005542249 nova_compute_init[255110]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec  2 06:09:27 np0005542249 nova_compute_init[255110]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec  2 06:09:27 np0005542249 nova_compute_init[255110]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec  2 06:09:27 np0005542249 nova_compute_init[255110]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec  2 06:09:27 np0005542249 nova_compute_init[255110]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec  2 06:09:27 np0005542249 nova_compute_init[255110]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec  2 06:09:27 np0005542249 nova_compute_init[255110]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec  2 06:09:27 np0005542249 nova_compute_init[255110]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec  2 06:09:27 np0005542249 nova_compute_init[255110]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec  2 06:09:27 np0005542249 nova_compute_init[255110]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec  2 06:09:27 np0005542249 nova_compute_init[255110]: INFO:nova_statedir:Nova statedir ownership complete
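The nova_compute_init pass above walks /var/lib/nova, checks each path's uid/gid, and only chowns paths not already owned by the nova user (42436:42436 here), honoring the `NOVA_STATEDIR_OWNERSHIP_SKIP` entry. A pure-function sketch of that per-path decision (a hypothetical simplification of nova_statedir_ownership.py; it returns the message the pass would log and changes nothing on disk):

```python
TARGET_UID, TARGET_GID = 42436, 42436   # nova uid/gid seen in the log

def ownership_action(path, uid, gid,
                     skip_paths=("/var/lib/nova/compute_id",)):
    """Decide what the statedir ownership pass does for one path.

    Mirrors the three outcomes logged above: skip, already-owned, or chown.
    The real script then applies os.chown and resets the SELinux context.
    """
    if path in skip_paths:
        return f"Skipping {path}"
    if (uid, gid) == (TARGET_UID, TARGET_GID):
        return f"Ownership of {path} already {TARGET_UID}:{TARGET_GID}"
    return (f"Changing ownership of {path} from {uid}:{gid} "
            f"to {TARGET_UID}:{TARGET_GID}")
```

With this logic, repeated container restarts are idempotent: after the first pass converts the 1000:1000 paths, later runs log only "already 42436:42436" lines.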
Dec  2 06:09:27 np0005542249 systemd[1]: libpod-473189829e7117a1d1eab4a762a38a4795dfa3e1dd1d4d965c36c0097a74ceba.scope: Deactivated successfully.
Dec  2 06:09:27 np0005542249 podman[255124]: 2025-12-02 11:09:27.18730727 +0000 UTC m=+0.028501613 container died 473189829e7117a1d1eab4a762a38a4795dfa3e1dd1d4d965c36c0097a74ceba (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.schema-version=1.0)
Dec  2 06:09:27 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-473189829e7117a1d1eab4a762a38a4795dfa3e1dd1d4d965c36c0097a74ceba-userdata-shm.mount: Deactivated successfully.
Dec  2 06:09:27 np0005542249 systemd[1]: var-lib-containers-storage-overlay-f60b230dd81e3ed83abbc815ef4ec48328683a4b0615bf8221d0082ca428e252-merged.mount: Deactivated successfully.
Dec  2 06:09:27 np0005542249 podman[255124]: 2025-12-02 11:09:27.236601775 +0000 UTC m=+0.077796098 container cleanup 473189829e7117a1d1eab4a762a38a4795dfa3e1dd1d4d965c36c0097a74ceba (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute_init, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  2 06:09:27 np0005542249 systemd[1]: libpod-conmon-473189829e7117a1d1eab4a762a38a4795dfa3e1dd1d4d965c36c0097a74ceba.scope: Deactivated successfully.
Dec  2 06:09:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:27 np0005542249 systemd[1]: session-50.scope: Deactivated successfully.
Dec  2 06:09:27 np0005542249 systemd[1]: session-50.scope: Consumed 2min 39.022s CPU time.
Dec  2 06:09:27 np0005542249 systemd-logind[787]: Session 50 logged out. Waiting for processes to exit.
Dec  2 06:09:27 np0005542249 systemd-logind[787]: Removed session 50.
Dec  2 06:09:28 np0005542249 nova_compute[254900]: 2025-12-02 11:09:28.215 254904 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  2 06:09:28 np0005542249 nova_compute[254900]: 2025-12-02 11:09:28.215 254904 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  2 06:09:28 np0005542249 nova_compute[254900]: 2025-12-02 11:09:28.216 254904 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  2 06:09:28 np0005542249 nova_compute[254900]: 2025-12-02 11:09:28.216 254904 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec  2 06:09:28 np0005542249 nova_compute[254900]: 2025-12-02 11:09:28.345 254904 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:09:28 np0005542249 nova_compute[254900]: 2025-12-02 11:09:28.372 254904 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:09:28 np0005542249 nova_compute[254900]: 2025-12-02 11:09:28.372 254904 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.122 254904 INFO nova.virt.driver [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.247 254904 INFO nova.compute.provider_config [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.270 254904 DEBUG oslo_concurrency.lockutils [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.270 254904 DEBUG oslo_concurrency.lockutils [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.270 254904 DEBUG oslo_concurrency.lockutils [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.271 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.271 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.271 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.271 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.271 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.271 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.272 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.272 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.272 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.272 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.272 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.273 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.273 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.273 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.273 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.273 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.274 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.274 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.274 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.274 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.274 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.274 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.275 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.275 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.275 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.275 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.275 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.275 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.276 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.276 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.276 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.276 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.276 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.276 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.276 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.277 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.277 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.277 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.277 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.277 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.278 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.278 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.278 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.278 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.278 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.278 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.279 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.279 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.279 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.279 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.279 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.279 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.279 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.280 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.280 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.280 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.280 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.280 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.280 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.281 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.281 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.281 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.281 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.281 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.281 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.281 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.282 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.282 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.282 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.282 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.282 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.282 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.282 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.283 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.283 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.283 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.283 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.283 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.283 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.284 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.284 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.284 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.284 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.284 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.284 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.284 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.285 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.285 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.285 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.285 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.285 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.285 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.285 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.286 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.286 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.286 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.286 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.286 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.287 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.287 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.287 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.287 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.287 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.287 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.287 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.287 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.288 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.288 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.288 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.288 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.288 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.288 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.288 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.289 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.289 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.289 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.289 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.289 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.289 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.289 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.290 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.290 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.290 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.290 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.290 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.290 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.290 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.291 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.291 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.291 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.291 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.291 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.291 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.292 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.292 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.292 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.292 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.292 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.292 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.293 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.293 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.293 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.293 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.293 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.293 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.293 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.294 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.294 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.294 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.294 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.294 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.294 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.295 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.295 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.295 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.295 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.295 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.295 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.295 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.296 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.296 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.296 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.296 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.296 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.296 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.296 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.297 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.297 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.297 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.297 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.297 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.297 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.298 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.298 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.298 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.298 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.298 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.298 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.298 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.299 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.299 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.299 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.299 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.299 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.299 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.300 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.300 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.300 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.300 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.300 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.300 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.300 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.300 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.301 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.301 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.301 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.301 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.301 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.301 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.302 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.302 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.302 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.302 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.302 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.302 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.303 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.303 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.303 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.303 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.303 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.303 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.303 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.304 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.304 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.304 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.304 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.304 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.304 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.305 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.305 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.305 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.305 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.305 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.305 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.306 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.306 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.306 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.306 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.306 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.306 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.306 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.307 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.307 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.307 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.307 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.307 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.308 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.308 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.308 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.308 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.308 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.308 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.308 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.309 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.309 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.309 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.309 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.309 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.309 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.310 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.310 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.310 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.310 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.310 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.310 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.310 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.311 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.311 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.311 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.311 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.311 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.311 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.312 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.312 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.312 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.312 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.312 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.312 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.313 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.313 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.313 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.313 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.313 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.313 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.314 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.314 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.314 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.314 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.314 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.314 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.314 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.315 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.315 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.315 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.315 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.315 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.315 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.315 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.316 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.316 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.316 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.316 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.316 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.316 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.316 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.317 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.317 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.317 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.317 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.317 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.317 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.317 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.318 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.318 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.318 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.318 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.318 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.318 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.318 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.319 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.319 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.319 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.319 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.319 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.319 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.320 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.320 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.320 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.320 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.320 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.320 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.321 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.321 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.321 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.321 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.321 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.321 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.322 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.322 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.322 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.322 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.322 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.323 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.323 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.323 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.323 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.323 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.323 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.324 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.324 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.324 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.324 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.324 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.324 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.324 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.325 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.325 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.325 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.325 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.325 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.325 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.325 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.326 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.326 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.326 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.326 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.326 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.327 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.327 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.327 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.327 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.327 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.327 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.328 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.328 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.328 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.328 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.328 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.328 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.329 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.329 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.329 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.329 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.329 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.329 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.330 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.330 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.330 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.330 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.330 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.330 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.331 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.331 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.331 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.331 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.331 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.331 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.331 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.332 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.332 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.332 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.332 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.332 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.333 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.333 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.333 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.333 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.333 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.334 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.334 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.334 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.334 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.334 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.334 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.334 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.335 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.335 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.335 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.335 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.336 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.336 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.336 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.336 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.336 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.336 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.337 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.337 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.337 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.337 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.337 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.338 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.338 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.338 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.338 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.338 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.338 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.338 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.339 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.339 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.339 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.339 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.339 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.339 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.339 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.340 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.340 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.340 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.340 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.340 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.340 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.340 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.341 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.341 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.341 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.341 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.341 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.341 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.341 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.342 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.342 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.342 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.342 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.342 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.342 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.343 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.343 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.343 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.343 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.343 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.343 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.343 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.344 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.344 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.344 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.344 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.344 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.344 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.345 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.345 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.345 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.345 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.345 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.345 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.345 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.346 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.346 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.346 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.346 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.346 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.346 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.346 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.347 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.347 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.347 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.347 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.347 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.347 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.348 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.348 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.348 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.348 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.348 254904 WARNING oslo_config.cfg [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  2 06:09:29 np0005542249 nova_compute[254900]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  2 06:09:29 np0005542249 nova_compute[254900]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  2 06:09:29 np0005542249 nova_compute[254900]: and ``live_migration_inbound_addr`` respectively.
Dec  2 06:09:29 np0005542249 nova_compute[254900]: ).  Its value may be silently ignored in the future.#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.348 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.349 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.349 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.349 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.349 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.349 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.349 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.350 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.350 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.350 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.350 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.350 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.350 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.350 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.351 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.351 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.351 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.351 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.351 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.rbd_secret_uuid        = 95bc4eaa-1a14-59bf-acf2-4b3da055547d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.351 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.351 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.352 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.352 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.352 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.352 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.352 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.352 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.352 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.353 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.353 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.353 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.353 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.353 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.354 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.354 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.354 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.354 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.354 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.354 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.354 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.355 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.355 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.355 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.355 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.355 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.355 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.356 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.356 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.356 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.356 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.356 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.356 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.356 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.357 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.357 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.357 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.357 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.357 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.357 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.357 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.358 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.358 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.358 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.358 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.358 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.358 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.358 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.359 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.359 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.359 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.359 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.359 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.359 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.359 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.360 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.360 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.360 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.360 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.360 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.360 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.360 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.361 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.361 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.361 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.361 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.361 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.361 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.362 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.362 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.362 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.362 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.362 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.362 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.362 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.363 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.363 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.363 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.363 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.363 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.363 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.364 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.364 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.364 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.364 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.364 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.364 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.365 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.365 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.365 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.365 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.365 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.365 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.366 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.366 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.366 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.366 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.366 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.367 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.367 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.367 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.367 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.367 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.367 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.367 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.368 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.368 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.368 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.368 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.368 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.368 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.368 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.369 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.369 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.369 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.369 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.369 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.369 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.370 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.370 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.370 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.370 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.370 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.371 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.371 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.371 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.371 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.372 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.372 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.372 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.372 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.372 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.373 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.373 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.373 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.373 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.373 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.373 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.374 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.374 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.374 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.374 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.374 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.375 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.375 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.375 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.375 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.375 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.375 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.376 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.376 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.376 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.376 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.376 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.377 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.377 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.377 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.377 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.377 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.377 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.378 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.378 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.378 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.378 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.378 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.379 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.379 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.379 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.379 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.379 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.380 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.380 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.380 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.380 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.380 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.381 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.381 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.381 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.381 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.381 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.382 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.382 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.382 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.382 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.382 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.382 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.383 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.383 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.383 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.383 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.383 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.384 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.384 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.384 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.384 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.384 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.384 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.385 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.385 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.385 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.385 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.385 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.386 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.386 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.386 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.386 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.386 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.386 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.387 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.387 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.387 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.387 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.387 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.388 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.388 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.388 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.388 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.388 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.388 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.389 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.389 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.389 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.389 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.389 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.390 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.390 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.390 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.390 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.390 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.391 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.391 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.391 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.391 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.392 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.392 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.392 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.392 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.392 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.393 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.393 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.393 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.393 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.393 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.393 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.393 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.394 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.394 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.394 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.394 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.394 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.394 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.394 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.395 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.395 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.395 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.395 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.395 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.395 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.396 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.396 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.396 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.396 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.396 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.396 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.397 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.397 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.397 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.397 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.397 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.397 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.398 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.398 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.398 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.398 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.398 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.398 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.399 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.399 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.399 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.399 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.399 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.399 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.399 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.400 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.400 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.400 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.400 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.400 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.400 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.400 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.401 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.401 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.401 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.401 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.401 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.401 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.401 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.402 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.402 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.402 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.402 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.402 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.402 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.403 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.403 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.403 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.403 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.403 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.403 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.404 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.404 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.404 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.404 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.404 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.404 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.405 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.405 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.405 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.405 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.405 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.406 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.406 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.406 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.406 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.406 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.407 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.407 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.407 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.407 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.407 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.408 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.408 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.408 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.408 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.408 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.408 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.409 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.409 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.409 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.409 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.409 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.409 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.410 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.410 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.410 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.410 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.410 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.410 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.410 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.411 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.411 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.411 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.411 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.411 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.411 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.412 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.412 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.412 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.412 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.412 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.412 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.413 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.413 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.413 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.413 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.413 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.413 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.413 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.413 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.414 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.414 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.414 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.414 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.414 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.414 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.414 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.415 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.415 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.415 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.415 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.415 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.415 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.416 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.416 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.416 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.416 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.416 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.416 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.416 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.417 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.417 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.417 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.417 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.417 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.417 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.418 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.418 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.418 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.418 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.418 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.418 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.418 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.419 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.419 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.419 254904 DEBUG oslo_service.service [None req-a5aee48f-56d8-44b2-ae99-2761cd9299cb - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.420 254904 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.439 254904 INFO nova.virt.node [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Determined node identity 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 from /var/lib/nova/compute_id#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.440 254904 DEBUG nova.virt.libvirt.host [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.440 254904 DEBUG nova.virt.libvirt.host [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.441 254904 DEBUG nova.virt.libvirt.host [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.441 254904 DEBUG nova.virt.libvirt.host [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.453 254904 DEBUG nova.virt.libvirt.host [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fea3d990ee0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.456 254904 DEBUG nova.virt.libvirt.host [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fea3d990ee0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.457 254904 INFO nova.virt.libvirt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Connection event '1' reason 'None'#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.465 254904 INFO nova.virt.libvirt.host [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Libvirt host capabilities <capabilities>
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <host>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <uuid>b5d8029e-bce4-4398-9c24-ad4d219021cb</uuid>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <cpu>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <arch>x86_64</arch>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model>EPYC-Rome-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <vendor>AMD</vendor>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <microcode version='16777317'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <signature family='23' model='49' stepping='0'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature name='x2apic'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature name='tsc-deadline'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature name='osxsave'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature name='hypervisor'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature name='tsc_adjust'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature name='spec-ctrl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature name='stibp'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature name='arch-capabilities'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature name='ssbd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature name='cmp_legacy'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature name='topoext'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature name='virt-ssbd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature name='lbrv'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature name='tsc-scale'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature name='vmcb-clean'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature name='pause-filter'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature name='pfthreshold'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature name='svme-addr-chk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature name='rdctl-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature name='skip-l1dfl-vmentry'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature name='mds-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature name='pschange-mc-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <pages unit='KiB' size='4'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <pages unit='KiB' size='2048'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <pages unit='KiB' size='1048576'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </cpu>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <power_management>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <suspend_mem/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </power_management>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <iommu support='no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <migration_features>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <live/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <uri_transports>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <uri_transport>tcp</uri_transport>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <uri_transport>rdma</uri_transport>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </uri_transports>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </migration_features>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <topology>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <cells num='1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <cell id='0'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:          <memory unit='KiB'>7864320</memory>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:          <pages unit='KiB' size='4'>1966080</pages>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:          <pages unit='KiB' size='2048'>0</pages>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:          <distances>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:            <sibling id='0' value='10'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:          </distances>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:          <cpus num='8'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:          </cpus>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        </cell>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </cells>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </topology>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <cache>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </cache>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <secmodel>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model>selinux</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <doi>0</doi>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </secmodel>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <secmodel>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model>dac</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <doi>0</doi>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </secmodel>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  </host>
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <guest>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <os_type>hvm</os_type>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <arch name='i686'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <wordsize>32</wordsize>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <domain type='qemu'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <domain type='kvm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </arch>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <features>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <pae/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <nonpae/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <acpi default='on' toggle='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <apic default='on' toggle='no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <cpuselection/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <deviceboot/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <disksnapshot default='on' toggle='no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <externalSnapshot/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </features>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  </guest>
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <guest>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <os_type>hvm</os_type>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <arch name='x86_64'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <wordsize>64</wordsize>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <domain type='qemu'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <domain type='kvm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </arch>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <features>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <acpi default='on' toggle='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <apic default='on' toggle='no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <cpuselection/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <deviceboot/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <disksnapshot default='on' toggle='no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <externalSnapshot/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </features>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  </guest>
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 
Dec  2 06:09:29 np0005542249 nova_compute[254900]: </capabilities>
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.469 254904 DEBUG nova.virt.libvirt.volume.mount [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.471 254904 DEBUG nova.virt.libvirt.host [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.475 254904 DEBUG nova.virt.libvirt.host [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  2 06:09:29 np0005542249 nova_compute[254900]: <domainCapabilities>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <path>/usr/libexec/qemu-kvm</path>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <domain>kvm</domain>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <arch>i686</arch>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <vcpu max='4096'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <iothreads supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <os supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <enum name='firmware'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <loader supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='type'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>rom</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>pflash</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='readonly'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>yes</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>no</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='secure'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>no</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </loader>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <cpu>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <mode name='host-passthrough' supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='hostPassthroughMigratable'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>on</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>off</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </mode>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <mode name='maximum' supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='maximumMigratable'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>on</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>off</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </mode>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <mode name='host-model' supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <vendor>AMD</vendor>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='x2apic'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='tsc-deadline'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='hypervisor'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='tsc_adjust'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='spec-ctrl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='stibp'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='ssbd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='cmp_legacy'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='overflow-recov'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='succor'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='ibrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='amd-ssbd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='virt-ssbd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='lbrv'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='tsc-scale'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='vmcb-clean'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='flushbyasid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='pause-filter'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='pfthreshold'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='svme-addr-chk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='disable' name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </mode>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <mode name='custom' supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-noTSX'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server-v5'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cooperlake'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cooperlake-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cooperlake-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Denverton'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mpx'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Denverton-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mpx'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Denverton-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Denverton-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Dhyana-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Genoa'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amd-psfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='auto-ibrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='no-nested-data-bp'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='null-sel-clr-base'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='stibp-always-on'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Genoa-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amd-psfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='auto-ibrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='no-nested-data-bp'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='null-sel-clr-base'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='stibp-always-on'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Milan'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Milan-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Milan-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amd-psfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='no-nested-data-bp'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='null-sel-clr-base'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='stibp-always-on'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Rome'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Rome-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Rome-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Rome-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='GraniteRapids'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mcdt-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pbrsb-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='prefetchiti'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='GraniteRapids-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mcdt-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pbrsb-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='prefetchiti'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='GraniteRapids-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx10'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx10-128'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx10-256'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx10-512'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mcdt-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pbrsb-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='prefetchiti'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-noTSX'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-noTSX'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v5'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v6'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v7'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='IvyBridge'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='IvyBridge-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='IvyBridge-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='IvyBridge-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='KnightsMill'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-4fmaps'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-4vnniw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512er'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512pf'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='KnightsMill-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-4fmaps'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-4vnniw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512er'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512pf'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Opteron_G4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fma4'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xop'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Opteron_G4-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fma4'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xop'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Opteron_G5'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fma4'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tbm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xop'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Opteron_G5-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fma4'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tbm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xop'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='SapphireRapids'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='SapphireRapids-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='SapphireRapids-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='SapphireRapids-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='SierraForest'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-ne-convert'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cmpccxadd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mcdt-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pbrsb-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='SierraForest-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-ne-convert'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cmpccxadd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mcdt-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pbrsb-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-v5'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Snowridge'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='core-capability'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mpx'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='split-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Snowridge-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='core-capability'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mpx'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='split-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Snowridge-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='core-capability'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='split-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Snowridge-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='core-capability'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='split-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Snowridge-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='athlon'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnow'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnowext'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='athlon-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnow'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnowext'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='core2duo'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='core2duo-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='coreduo'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='coreduo-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='n270'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='n270-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='phenom'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnow'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnowext'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='phenom-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnow'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnowext'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </mode>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <memoryBacking supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <enum name='sourceType'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <value>file</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <value>anonymous</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <value>memfd</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  </memoryBacking>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <disk supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='diskDevice'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>disk</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>cdrom</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>floppy</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>lun</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='bus'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>fdc</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>scsi</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>usb</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>sata</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='model'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio-transitional</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio-non-transitional</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <graphics supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='type'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>vnc</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>egl-headless</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>dbus</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </graphics>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <video supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='modelType'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>vga</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>cirrus</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>none</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>bochs</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>ramfb</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <hostdev supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='mode'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>subsystem</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='startupPolicy'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>default</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>mandatory</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>requisite</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>optional</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='subsysType'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>usb</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>pci</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>scsi</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='capsType'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='pciBackend'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </hostdev>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <rng supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='model'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio-transitional</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio-non-transitional</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='backendModel'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>random</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>egd</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>builtin</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <filesystem supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='driverType'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>path</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>handle</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtiofs</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </filesystem>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <tpm supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='model'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>tpm-tis</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>tpm-crb</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='backendModel'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>emulator</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>external</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='backendVersion'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>2.0</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </tpm>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <redirdev supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='bus'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>usb</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </redirdev>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <channel supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='type'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>pty</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>unix</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </channel>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <crypto supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='model'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='type'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>qemu</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='backendModel'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>builtin</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </crypto>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <interface supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='backendType'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>default</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>passt</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <panic supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='model'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>isa</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>hyperv</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </panic>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <console supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='type'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>null</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>vc</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>pty</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>dev</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>file</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>pipe</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>stdio</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>udp</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>tcp</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>unix</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>qemu-vdagent</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>dbus</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </console>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <gic supported='no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <vmcoreinfo supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <genid supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <backingStoreInput supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <backup supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <async-teardown supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <ps2 supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <sev supported='no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <sgx supported='no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <hyperv supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='features'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>relaxed</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>vapic</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>spinlocks</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>vpindex</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>runtime</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>synic</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>stimer</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>reset</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>vendor_id</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>frequencies</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>reenlightenment</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>tlbflush</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>ipi</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>avic</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>emsr_bitmap</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>xmm_input</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <defaults>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <spinlocks>4095</spinlocks>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <stimer_direct>on</stimer_direct>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <tlbflush_direct>on</tlbflush_direct>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <tlbflush_extended>on</tlbflush_extended>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </defaults>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </hyperv>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <launchSecurity supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='sectype'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>tdx</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </launchSecurity>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:09:29 np0005542249 nova_compute[254900]: </domainCapabilities>
Dec  2 06:09:29 np0005542249 nova_compute[254900]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.480 254904 DEBUG nova.virt.libvirt.host [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  2 06:09:29 np0005542249 nova_compute[254900]: <domainCapabilities>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <path>/usr/libexec/qemu-kvm</path>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <domain>kvm</domain>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <arch>i686</arch>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <vcpu max='240'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <iothreads supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <os supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <enum name='firmware'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <loader supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='type'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>rom</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>pflash</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='readonly'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>yes</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>no</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='secure'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>no</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </loader>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <cpu>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <mode name='host-passthrough' supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='hostPassthroughMigratable'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>on</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>off</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </mode>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <mode name='maximum' supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='maximumMigratable'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>on</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>off</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </mode>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <mode name='host-model' supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <vendor>AMD</vendor>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='x2apic'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='tsc-deadline'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='hypervisor'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='tsc_adjust'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='spec-ctrl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='stibp'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='ssbd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='cmp_legacy'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='overflow-recov'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='succor'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='ibrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='amd-ssbd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='virt-ssbd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='lbrv'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='tsc-scale'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='vmcb-clean'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='flushbyasid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='pause-filter'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='pfthreshold'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='svme-addr-chk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='disable' name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </mode>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <mode name='custom' supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-noTSX'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server-v5'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cooperlake'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cooperlake-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cooperlake-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Denverton'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mpx'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Denverton-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mpx'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Denverton-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Denverton-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Dhyana-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Genoa'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amd-psfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='auto-ibrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='no-nested-data-bp'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='null-sel-clr-base'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='stibp-always-on'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Genoa-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amd-psfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='auto-ibrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='no-nested-data-bp'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='null-sel-clr-base'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='stibp-always-on'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Milan'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Milan-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Milan-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amd-psfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='no-nested-data-bp'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='null-sel-clr-base'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='stibp-always-on'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Rome'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Rome-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Rome-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Rome-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='GraniteRapids'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mcdt-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pbrsb-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='prefetchiti'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='GraniteRapids-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mcdt-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pbrsb-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='prefetchiti'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='GraniteRapids-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx10'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx10-128'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx10-256'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx10-512'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mcdt-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pbrsb-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='prefetchiti'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-noTSX'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-noTSX'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v5'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v6'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v7'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='IvyBridge'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='IvyBridge-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='IvyBridge-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='IvyBridge-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='KnightsMill'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-4fmaps'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-4vnniw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512er'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512pf'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='KnightsMill-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-4fmaps'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-4vnniw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512er'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512pf'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Opteron_G4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fma4'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xop'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Opteron_G4-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fma4'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xop'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Opteron_G5'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fma4'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tbm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xop'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Opteron_G5-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fma4'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tbm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xop'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='SapphireRapids'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='SapphireRapids-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='SapphireRapids-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='SapphireRapids-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='SierraForest'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-ne-convert'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cmpccxadd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mcdt-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pbrsb-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='SierraForest-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-ne-convert'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cmpccxadd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mcdt-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pbrsb-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-v5'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Snowridge'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='core-capability'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mpx'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='split-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Snowridge-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='core-capability'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mpx'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='split-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Snowridge-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='core-capability'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='split-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Snowridge-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='core-capability'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='split-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Snowridge-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='athlon'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnow'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnowext'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='athlon-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnow'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnowext'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='core2duo'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='core2duo-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='coreduo'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='coreduo-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='n270'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='n270-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='phenom'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnow'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnowext'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='phenom-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnow'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnowext'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </mode>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <memoryBacking supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <enum name='sourceType'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <value>file</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <value>anonymous</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <value>memfd</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  </memoryBacking>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <disk supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='diskDevice'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>disk</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>cdrom</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>floppy</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>lun</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='bus'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>ide</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>fdc</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>scsi</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>usb</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>sata</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='model'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio-transitional</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio-non-transitional</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <graphics supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='type'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>vnc</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>egl-headless</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>dbus</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </graphics>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <video supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='modelType'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>vga</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>cirrus</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>none</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>bochs</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>ramfb</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <hostdev supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='mode'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>subsystem</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='startupPolicy'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>default</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>mandatory</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>requisite</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>optional</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='subsysType'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>usb</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>pci</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>scsi</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='capsType'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='pciBackend'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </hostdev>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <rng supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='model'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio-transitional</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio-non-transitional</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='backendModel'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>random</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>egd</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>builtin</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <filesystem supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='driverType'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>path</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>handle</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtiofs</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </filesystem>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <tpm supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='model'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>tpm-tis</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>tpm-crb</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='backendModel'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>emulator</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>external</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='backendVersion'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>2.0</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </tpm>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <redirdev supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='bus'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>usb</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </redirdev>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <channel supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='type'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>pty</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>unix</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </channel>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <crypto supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='model'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='type'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>qemu</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='backendModel'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>builtin</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </crypto>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <interface supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='backendType'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>default</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>passt</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <panic supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='model'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>isa</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>hyperv</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </panic>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <console supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='type'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>null</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>vc</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>pty</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>dev</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>file</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>pipe</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>stdio</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>udp</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>tcp</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>unix</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>qemu-vdagent</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>dbus</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </console>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <gic supported='no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <vmcoreinfo supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <genid supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <backingStoreInput supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <backup supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <async-teardown supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <ps2 supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <sev supported='no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <sgx supported='no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <hyperv supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='features'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>relaxed</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>vapic</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>spinlocks</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>vpindex</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>runtime</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>synic</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>stimer</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>reset</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>vendor_id</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>frequencies</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>reenlightenment</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>tlbflush</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>ipi</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>avic</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>emsr_bitmap</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>xmm_input</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <defaults>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <spinlocks>4095</spinlocks>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <stimer_direct>on</stimer_direct>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <tlbflush_direct>on</tlbflush_direct>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <tlbflush_extended>on</tlbflush_extended>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </defaults>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </hyperv>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <launchSecurity supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='sectype'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>tdx</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </launchSecurity>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:09:29 np0005542249 nova_compute[254900]: </domainCapabilities>
Dec  2 06:09:29 np0005542249 nova_compute[254900]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.516 254904 DEBUG nova.virt.libvirt.host [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.520 254904 DEBUG nova.virt.libvirt.host [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec  2 06:09:29 np0005542249 nova_compute[254900]: <domainCapabilities>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <path>/usr/libexec/qemu-kvm</path>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <domain>kvm</domain>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <arch>x86_64</arch>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <vcpu max='4096'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <iothreads supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <os supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <enum name='firmware'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <value>efi</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <loader supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='type'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>rom</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>pflash</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='readonly'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>yes</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>no</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='secure'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>yes</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>no</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </loader>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <cpu>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <mode name='host-passthrough' supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='hostPassthroughMigratable'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>on</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>off</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </mode>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <mode name='maximum' supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='maximumMigratable'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>on</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>off</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </mode>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <mode name='host-model' supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <vendor>AMD</vendor>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='x2apic'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='tsc-deadline'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='hypervisor'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='tsc_adjust'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='spec-ctrl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='stibp'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='ssbd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='cmp_legacy'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='overflow-recov'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='succor'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='ibrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='amd-ssbd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='virt-ssbd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='lbrv'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='tsc-scale'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='vmcb-clean'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='flushbyasid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='pause-filter'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='pfthreshold'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='svme-addr-chk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='disable' name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </mode>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <mode name='custom' supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-noTSX'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server-v5'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cooperlake'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cooperlake-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cooperlake-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Denverton'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mpx'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Denverton-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mpx'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Denverton-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Denverton-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Dhyana-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Genoa'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amd-psfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='auto-ibrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='no-nested-data-bp'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='null-sel-clr-base'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='stibp-always-on'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Genoa-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amd-psfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='auto-ibrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='no-nested-data-bp'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='null-sel-clr-base'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='stibp-always-on'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Milan'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Milan-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Milan-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amd-psfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='no-nested-data-bp'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='null-sel-clr-base'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='stibp-always-on'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Rome'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Rome-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Rome-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Rome-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='GraniteRapids'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mcdt-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pbrsb-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='prefetchiti'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='GraniteRapids-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mcdt-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pbrsb-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='prefetchiti'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='GraniteRapids-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx10'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx10-128'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx10-256'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx10-512'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mcdt-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pbrsb-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='prefetchiti'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-noTSX'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-noTSX'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v5'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v6'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v7'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='IvyBridge'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='IvyBridge-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='IvyBridge-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='IvyBridge-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='KnightsMill'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-4fmaps'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-4vnniw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512er'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512pf'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='KnightsMill-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-4fmaps'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-4vnniw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512er'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512pf'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Opteron_G4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fma4'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xop'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Opteron_G4-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fma4'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xop'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Opteron_G5'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fma4'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tbm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xop'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Opteron_G5-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fma4'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tbm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xop'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='SapphireRapids'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='SapphireRapids-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='SapphireRapids-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='SapphireRapids-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='SierraForest'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-ne-convert'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cmpccxadd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mcdt-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pbrsb-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='SierraForest-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-ne-convert'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cmpccxadd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mcdt-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pbrsb-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-v5'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Snowridge'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='core-capability'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mpx'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='split-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Snowridge-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='core-capability'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mpx'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='split-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Snowridge-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='core-capability'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='split-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Snowridge-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='core-capability'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='split-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Snowridge-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='athlon'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnow'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnowext'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='athlon-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnow'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnowext'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='core2duo'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='core2duo-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='coreduo'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='coreduo-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='n270'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='n270-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='phenom'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnow'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnowext'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='phenom-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnow'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnowext'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </mode>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <memoryBacking supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <enum name='sourceType'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <value>file</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <value>anonymous</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <value>memfd</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  </memoryBacking>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <disk supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='diskDevice'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>disk</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>cdrom</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>floppy</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>lun</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='bus'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>fdc</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>scsi</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>usb</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>sata</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='model'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio-transitional</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio-non-transitional</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <graphics supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='type'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>vnc</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>egl-headless</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>dbus</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </graphics>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <video supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='modelType'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>vga</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>cirrus</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>none</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>bochs</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>ramfb</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <hostdev supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='mode'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>subsystem</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='startupPolicy'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>default</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>mandatory</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>requisite</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>optional</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='subsysType'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>usb</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>pci</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>scsi</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='capsType'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='pciBackend'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </hostdev>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <rng supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='model'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio-transitional</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio-non-transitional</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='backendModel'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>random</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>egd</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>builtin</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <filesystem supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='driverType'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>path</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>handle</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtiofs</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </filesystem>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <tpm supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='model'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>tpm-tis</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>tpm-crb</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='backendModel'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>emulator</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>external</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='backendVersion'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>2.0</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </tpm>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <redirdev supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='bus'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>usb</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </redirdev>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <channel supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='type'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>pty</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>unix</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </channel>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <crypto supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='model'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='type'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>qemu</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='backendModel'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>builtin</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </crypto>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <interface supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='backendType'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>default</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>passt</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <panic supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='model'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>isa</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>hyperv</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </panic>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <console supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='type'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>null</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>vc</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>pty</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>dev</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>file</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>pipe</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>stdio</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>udp</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>tcp</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>unix</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>qemu-vdagent</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>dbus</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </console>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <gic supported='no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <vmcoreinfo supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <genid supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <backingStoreInput supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <backup supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <async-teardown supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <ps2 supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <sev supported='no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <sgx supported='no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <hyperv supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='features'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>relaxed</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>vapic</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>spinlocks</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>vpindex</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>runtime</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>synic</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>stimer</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>reset</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>vendor_id</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>frequencies</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>reenlightenment</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>tlbflush</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>ipi</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>avic</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>emsr_bitmap</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>xmm_input</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <defaults>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <spinlocks>4095</spinlocks>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <stimer_direct>on</stimer_direct>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <tlbflush_direct>on</tlbflush_direct>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <tlbflush_extended>on</tlbflush_extended>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </defaults>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </hyperv>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <launchSecurity supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='sectype'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>tdx</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </launchSecurity>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:09:29 np0005542249 nova_compute[254900]: </domainCapabilities>
Dec  2 06:09:29 np0005542249 nova_compute[254900]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.607 254904 DEBUG nova.virt.libvirt.host [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec  2 06:09:29 np0005542249 nova_compute[254900]: <domainCapabilities>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <path>/usr/libexec/qemu-kvm</path>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <domain>kvm</domain>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <arch>x86_64</arch>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <vcpu max='240'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <iothreads supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <os supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <enum name='firmware'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <loader supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='type'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>rom</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>pflash</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='readonly'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>yes</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>no</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='secure'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>no</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </loader>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <cpu>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <mode name='host-passthrough' supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='hostPassthroughMigratable'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>on</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>off</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </mode>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <mode name='maximum' supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='maximumMigratable'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>on</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>off</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </mode>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <mode name='host-model' supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <vendor>AMD</vendor>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='x2apic'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='tsc-deadline'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='hypervisor'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='tsc_adjust'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='spec-ctrl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='stibp'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='ssbd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='cmp_legacy'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='overflow-recov'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='succor'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='ibrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='amd-ssbd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='virt-ssbd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='lbrv'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='tsc-scale'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='vmcb-clean'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='flushbyasid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='pause-filter'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='pfthreshold'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='svme-addr-chk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <feature policy='disable' name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </mode>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <mode name='custom' supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-noTSX'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Broadwell-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cascadelake-Server-v5'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cooperlake'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cooperlake-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Cooperlake-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Denverton'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mpx'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Denverton-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mpx'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Denverton-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Denverton-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Dhyana-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Genoa'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amd-psfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='auto-ibrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='no-nested-data-bp'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='null-sel-clr-base'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='stibp-always-on'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Genoa-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amd-psfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='auto-ibrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='no-nested-data-bp'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='null-sel-clr-base'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='stibp-always-on'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Milan'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Milan-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Milan-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amd-psfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='no-nested-data-bp'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='null-sel-clr-base'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='stibp-always-on'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Rome'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Rome-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Rome-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-Rome-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='EPYC-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='GraniteRapids'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mcdt-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pbrsb-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='prefetchiti'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='GraniteRapids-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mcdt-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pbrsb-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='prefetchiti'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='GraniteRapids-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx10'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx10-128'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx10-256'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx10-512'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mcdt-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pbrsb-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='prefetchiti'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-noTSX'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Haswell-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-noTSX'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v5'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v6'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Icelake-Server-v7'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='IvyBridge'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='IvyBridge-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='IvyBridge-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='IvyBridge-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='KnightsMill'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-4fmaps'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-4vnniw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512er'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512pf'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='KnightsMill-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-4fmaps'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-4vnniw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512er'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512pf'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Opteron_G4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fma4'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xop'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Opteron_G4-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fma4'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xop'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Opteron_G5'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fma4'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tbm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xop'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Opteron_G5-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fma4'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tbm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xop'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='SapphireRapids'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='SapphireRapids-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='SapphireRapids-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='SapphireRapids-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='amx-tile'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-bf16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-fp16'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512-vpopcntdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bitalg'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vbmi2'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrc'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fzrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='la57'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='taa-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='tsx-ldtrk'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xfd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='SierraForest'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-ne-convert'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cmpccxadd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mcdt-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pbrsb-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='SierraForest-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-ifma'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-ne-convert'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx-vnni-int8'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='bus-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cmpccxadd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fbsdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='fsrs'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ibrs-all'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mcdt-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pbrsb-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='psdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='sbdr-ssdp-no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='serialize'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vaes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='vpclmulqdq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Client-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='hle'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='rtm'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Skylake-Server-v5'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512bw'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512cd'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512dq'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512f'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='avx512vl'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='invpcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pcid'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='pku'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Snowridge'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='core-capability'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mpx'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='split-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Snowridge-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='core-capability'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='mpx'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='split-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Snowridge-v2'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='core-capability'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='split-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Snowridge-v3'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='core-capability'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='split-lock-detect'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='Snowridge-v4'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='cldemote'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='erms'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='gfni'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdir64b'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='movdiri'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='xsaves'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='athlon'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnow'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnowext'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='athlon-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnow'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnowext'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='core2duo'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='core2duo-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='coreduo'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='coreduo-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='n270'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='n270-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='ss'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='phenom'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnow'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnowext'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <blockers model='phenom-v1'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnow'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <feature name='3dnowext'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </blockers>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </mode>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <memoryBacking supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <enum name='sourceType'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <value>file</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <value>anonymous</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <value>memfd</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  </memoryBacking>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <disk supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='diskDevice'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>disk</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>cdrom</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>floppy</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>lun</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='bus'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>ide</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>fdc</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>scsi</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>usb</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>sata</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='model'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio-transitional</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio-non-transitional</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <graphics supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='type'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>vnc</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>egl-headless</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>dbus</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </graphics>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <video supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='modelType'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>vga</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>cirrus</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>none</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>bochs</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>ramfb</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <hostdev supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='mode'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>subsystem</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='startupPolicy'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>default</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>mandatory</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>requisite</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>optional</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='subsysType'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>usb</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>pci</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>scsi</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='capsType'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='pciBackend'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </hostdev>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <rng supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='model'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio-transitional</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtio-non-transitional</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='backendModel'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>random</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>egd</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>builtin</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <filesystem supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='driverType'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>path</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>handle</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>virtiofs</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </filesystem>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <tpm supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='model'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>tpm-tis</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>tpm-crb</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='backendModel'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>emulator</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>external</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='backendVersion'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>2.0</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </tpm>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <redirdev supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='bus'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>usb</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </redirdev>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <channel supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='type'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>pty</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>unix</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </channel>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <crypto supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='model'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='type'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>qemu</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='backendModel'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>builtin</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </crypto>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <interface supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='backendType'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>default</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>passt</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <panic supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='model'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>isa</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>hyperv</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </panic>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <console supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='type'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>null</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>vc</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>pty</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>dev</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>file</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>pipe</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>stdio</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>udp</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>tcp</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>unix</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>qemu-vdagent</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>dbus</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </console>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <gic supported='no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <vmcoreinfo supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <genid supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <backingStoreInput supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <backup supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <async-teardown supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <ps2 supported='yes'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <sev supported='no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <sgx supported='no'/>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <hyperv supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='features'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>relaxed</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>vapic</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>spinlocks</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>vpindex</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>runtime</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>synic</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>stimer</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>reset</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>vendor_id</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>frequencies</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>reenlightenment</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>tlbflush</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>ipi</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>avic</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>emsr_bitmap</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>xmm_input</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <defaults>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <spinlocks>4095</spinlocks>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <stimer_direct>on</stimer_direct>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <tlbflush_direct>on</tlbflush_direct>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <tlbflush_extended>on</tlbflush_extended>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </defaults>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </hyperv>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    <launchSecurity supported='yes'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      <enum name='sectype'>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:        <value>tdx</value>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:      </enum>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:    </launchSecurity>
Dec  2 06:09:29 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:09:29 np0005542249 nova_compute[254900]: </domainCapabilities>
Dec  2 06:09:29 np0005542249 nova_compute[254900]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.677 254904 DEBUG nova.virt.libvirt.host [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.678 254904 INFO nova.virt.libvirt.host [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Secure Boot support detected#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.680 254904 INFO nova.virt.libvirt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.688 254904 DEBUG nova.virt.libvirt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.713 254904 INFO nova.virt.node [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Determined node identity 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 from /var/lib/nova/compute_id#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.732 254904 WARNING nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Compute nodes ['02b9b0a3-ac9d-4426-baf4-5ebd782a4062'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.765 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.782 254904 WARNING nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.782 254904 DEBUG oslo_concurrency.lockutils [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.782 254904 DEBUG oslo_concurrency.lockutils [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.783 254904 DEBUG oslo_concurrency.lockutils [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.783 254904 DEBUG nova.compute.resource_tracker [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:09:29 np0005542249 nova_compute[254900]: 2025-12-02 11:09:29.783 254904 DEBUG oslo_concurrency.processutils [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:09:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:09:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1967079838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:09:30 np0005542249 nova_compute[254900]: 2025-12-02 11:09:30.255 254904 DEBUG oslo_concurrency.processutils [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:09:30 np0005542249 systemd[1]: Starting libvirt nodedev daemon...
Dec  2 06:09:30 np0005542249 systemd[1]: Started libvirt nodedev daemon.
Dec  2 06:09:30 np0005542249 nova_compute[254900]: 2025-12-02 11:09:30.545 254904 WARNING nova.virt.libvirt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:09:30 np0005542249 nova_compute[254900]: 2025-12-02 11:09:30.546 254904 DEBUG nova.compute.resource_tracker [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5182MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:09:30 np0005542249 nova_compute[254900]: 2025-12-02 11:09:30.547 254904 DEBUG oslo_concurrency.lockutils [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:09:30 np0005542249 nova_compute[254900]: 2025-12-02 11:09:30.547 254904 DEBUG oslo_concurrency.lockutils [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:09:30 np0005542249 nova_compute[254900]: 2025-12-02 11:09:30.560 254904 WARNING nova.compute.resource_tracker [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] No compute node record for compute-0.ctlplane.example.com:02b9b0a3-ac9d-4426-baf4-5ebd782a4062: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 could not be found.#033[00m
Dec  2 06:09:30 np0005542249 nova_compute[254900]: 2025-12-02 11:09:30.578 254904 INFO nova.compute.resource_tracker [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062#033[00m
Dec  2 06:09:30 np0005542249 nova_compute[254900]: 2025-12-02 11:09:30.656 254904 DEBUG nova.compute.resource_tracker [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:09:30 np0005542249 nova_compute[254900]: 2025-12-02 11:09:30.657 254904 DEBUG nova.compute.resource_tracker [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:09:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:09:31 np0005542249 nova_compute[254900]: 2025-12-02 11:09:31.565 254904 INFO nova.scheduler.client.report [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [req-5aa6b19f-2efa-4d5a-b4c2-1c45cfadf102] Created resource provider record via placement API for resource provider with UUID 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 and name compute-0.ctlplane.example.com.#033[00m
Dec  2 06:09:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:31 np0005542249 nova_compute[254900]: 2025-12-02 11:09:31.968 254904 DEBUG oslo_concurrency.processutils [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:09:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:09:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1598547179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:09:32 np0005542249 nova_compute[254900]: 2025-12-02 11:09:32.433 254904 DEBUG oslo_concurrency.processutils [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:09:32 np0005542249 nova_compute[254900]: 2025-12-02 11:09:32.441 254904 DEBUG nova.virt.libvirt.host [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec  2 06:09:32 np0005542249 nova_compute[254900]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Dec  2 06:09:32 np0005542249 nova_compute[254900]: 2025-12-02 11:09:32.442 254904 INFO nova.virt.libvirt.host [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] kernel doesn't support AMD SEV#033[00m
Dec  2 06:09:32 np0005542249 nova_compute[254900]: 2025-12-02 11:09:32.443 254904 DEBUG nova.compute.provider_tree [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Updating inventory in ProviderTree for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  2 06:09:32 np0005542249 nova_compute[254900]: 2025-12-02 11:09:32.444 254904 DEBUG nova.virt.libvirt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:09:32 np0005542249 nova_compute[254900]: 2025-12-02 11:09:32.497 254904 DEBUG nova.scheduler.client.report [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Updated inventory for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Dec  2 06:09:32 np0005542249 nova_compute[254900]: 2025-12-02 11:09:32.497 254904 DEBUG nova.compute.provider_tree [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Updating resource provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  2 06:09:32 np0005542249 nova_compute[254900]: 2025-12-02 11:09:32.498 254904 DEBUG nova.compute.provider_tree [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Updating inventory in ProviderTree for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  2 06:09:32 np0005542249 nova_compute[254900]: 2025-12-02 11:09:32.594 254904 DEBUG nova.compute.provider_tree [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Updating resource provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  2 06:09:32 np0005542249 nova_compute[254900]: 2025-12-02 11:09:32.629 254904 DEBUG nova.compute.resource_tracker [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 06:09:32 np0005542249 nova_compute[254900]: 2025-12-02 11:09:32.629 254904 DEBUG oslo_concurrency.lockutils [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:09:32 np0005542249 nova_compute[254900]: 2025-12-02 11:09:32.630 254904 DEBUG nova.service [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Dec  2 06:09:32 np0005542249 nova_compute[254900]: 2025-12-02 11:09:32.721 254904 DEBUG nova.service [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Dec  2 06:09:32 np0005542249 nova_compute[254900]: 2025-12-02 11:09:32.722 254904 DEBUG nova.servicegroup.drivers.db [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Dec  2 06:09:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:09:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:09:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:09:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:09:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:09:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:09:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:09:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:09:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:09:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:09:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:09:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:09:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:09:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:09:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:09:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:09:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:09:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:09:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:09:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:09:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:09:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:09:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:09:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:09:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:09:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:42 np0005542249 podman[255267]: 2025-12-02 11:09:42.039433616 +0000 UTC m=+0.103256338 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  2 06:09:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:09:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:09:51 np0005542249 podman[255289]: 2025-12-02 11:09:51.005331799 +0000 UTC m=+0.082228727 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  2 06:09:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:09:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/585746809' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:09:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:09:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/585746809' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:09:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:09:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/616322222' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:09:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:09:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/616322222' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:09:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:09:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/986988339' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:09:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:09:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/986988339' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:09:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:54 np0005542249 podman[255315]: 2025-12-02 11:09:54.991282831 +0000 UTC m=+0.067371035 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:09:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:09:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:09:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:09:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:09:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:09:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:09:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:09:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:09:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:10:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:10:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:10 np0005542249 nova_compute[254900]: 2025-12-02 11:10:10.724 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:10:10 np0005542249 nova_compute[254900]: 2025-12-02 11:10:10.746 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:10:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:10:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:13 np0005542249 podman[255335]: 2025-12-02 11:10:13.036556942 +0000 UTC m=+0.106335841 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Dec  2 06:10:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:10:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:10:19.821 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:10:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:10:19.821 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:10:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:10:19.822 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:10:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:10:20 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:10:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:10:20 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:10:20 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:10:20 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:10:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:10:21 np0005542249 podman[255575]: 2025-12-02 11:10:21.168642839 +0000 UTC m=+0.118194971 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 06:10:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:10:21 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:10:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:10:21 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:10:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:10:21 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:10:21 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 9044d79e-ab8a-496f-b4a7-231ceb8e0e61 does not exist
Dec  2 06:10:21 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev f3499db5-1751-4a74-9178-51869355383d does not exist
Dec  2 06:10:21 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 5207217b-82cb-434f-ad2a-d5a7a2635daa does not exist
Dec  2 06:10:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:10:21 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:10:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:10:21 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:10:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:10:21 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:10:21 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:10:21 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:10:21 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:10:22 np0005542249 podman[255775]: 2025-12-02 11:10:22.468567679 +0000 UTC m=+0.074726156 container create dab9a82c77b4908216481fef96d7ca94e35229e87e9349d06c88960da642f064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:10:22 np0005542249 systemd[1]: Started libpod-conmon-dab9a82c77b4908216481fef96d7ca94e35229e87e9349d06c88960da642f064.scope.
Dec  2 06:10:22 np0005542249 podman[255775]: 2025-12-02 11:10:22.436761717 +0000 UTC m=+0.042920284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:10:22 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:10:22 np0005542249 podman[255775]: 2025-12-02 11:10:22.574560279 +0000 UTC m=+0.180718766 container init dab9a82c77b4908216481fef96d7ca94e35229e87e9349d06c88960da642f064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_noyce, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:10:22 np0005542249 podman[255775]: 2025-12-02 11:10:22.58382524 +0000 UTC m=+0.189983737 container start dab9a82c77b4908216481fef96d7ca94e35229e87e9349d06c88960da642f064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:10:22 np0005542249 podman[255775]: 2025-12-02 11:10:22.588507877 +0000 UTC m=+0.194666384 container attach dab9a82c77b4908216481fef96d7ca94e35229e87e9349d06c88960da642f064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  2 06:10:22 np0005542249 heuristic_noyce[255792]: 167 167
Dec  2 06:10:22 np0005542249 podman[255775]: 2025-12-02 11:10:22.590559873 +0000 UTC m=+0.196718360 container died dab9a82c77b4908216481fef96d7ca94e35229e87e9349d06c88960da642f064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_noyce, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:10:22 np0005542249 systemd[1]: libpod-dab9a82c77b4908216481fef96d7ca94e35229e87e9349d06c88960da642f064.scope: Deactivated successfully.
Dec  2 06:10:22 np0005542249 systemd[1]: var-lib-containers-storage-overlay-2ce8338cbbefe585f7ef3fa9a280a3e0acade4671b97407da3f375211555be00-merged.mount: Deactivated successfully.
Dec  2 06:10:22 np0005542249 podman[255775]: 2025-12-02 11:10:22.634206335 +0000 UTC m=+0.240364812 container remove dab9a82c77b4908216481fef96d7ca94e35229e87e9349d06c88960da642f064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_noyce, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  2 06:10:22 np0005542249 systemd[1]: libpod-conmon-dab9a82c77b4908216481fef96d7ca94e35229e87e9349d06c88960da642f064.scope: Deactivated successfully.
Dec  2 06:10:22 np0005542249 podman[255814]: 2025-12-02 11:10:22.818422965 +0000 UTC m=+0.054001104 container create 1124bf2e2c0b9080bdb376fe5fe1c2e7c84b7f635ab60ebfd695d9a530439f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_albattani, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec  2 06:10:22 np0005542249 systemd[1]: Started libpod-conmon-1124bf2e2c0b9080bdb376fe5fe1c2e7c84b7f635ab60ebfd695d9a530439f54.scope.
Dec  2 06:10:22 np0005542249 podman[255814]: 2025-12-02 11:10:22.790143508 +0000 UTC m=+0.025721647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:10:22 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:10:22 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39cfb85645b2869c74c066a32be91fc6a8a8929338f5f386a7e2af837c125adf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:10:22 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39cfb85645b2869c74c066a32be91fc6a8a8929338f5f386a7e2af837c125adf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:10:22 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39cfb85645b2869c74c066a32be91fc6a8a8929338f5f386a7e2af837c125adf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:10:22 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39cfb85645b2869c74c066a32be91fc6a8a8929338f5f386a7e2af837c125adf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:10:22 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39cfb85645b2869c74c066a32be91fc6a8a8929338f5f386a7e2af837c125adf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:10:22 np0005542249 podman[255814]: 2025-12-02 11:10:22.940677246 +0000 UTC m=+0.176255435 container init 1124bf2e2c0b9080bdb376fe5fe1c2e7c84b7f635ab60ebfd695d9a530439f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_albattani, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:10:22 np0005542249 podman[255814]: 2025-12-02 11:10:22.958705815 +0000 UTC m=+0.194283954 container start 1124bf2e2c0b9080bdb376fe5fe1c2e7c84b7f635ab60ebfd695d9a530439f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:10:22 np0005542249 podman[255814]: 2025-12-02 11:10:22.963639648 +0000 UTC m=+0.199217787 container attach 1124bf2e2c0b9080bdb376fe5fe1c2e7c84b7f635ab60ebfd695d9a530439f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:10:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:24 np0005542249 angry_albattani[255830]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:10:24 np0005542249 angry_albattani[255830]: --> relative data size: 1.0
Dec  2 06:10:24 np0005542249 angry_albattani[255830]: --> All data devices are unavailable
Dec  2 06:10:24 np0005542249 systemd[1]: libpod-1124bf2e2c0b9080bdb376fe5fe1c2e7c84b7f635ab60ebfd695d9a530439f54.scope: Deactivated successfully.
Dec  2 06:10:24 np0005542249 podman[255814]: 2025-12-02 11:10:24.171996576 +0000 UTC m=+1.407574745 container died 1124bf2e2c0b9080bdb376fe5fe1c2e7c84b7f635ab60ebfd695d9a530439f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:10:24 np0005542249 systemd[1]: libpod-1124bf2e2c0b9080bdb376fe5fe1c2e7c84b7f635ab60ebfd695d9a530439f54.scope: Consumed 1.178s CPU time.
Dec  2 06:10:24 np0005542249 systemd[1]: var-lib-containers-storage-overlay-39cfb85645b2869c74c066a32be91fc6a8a8929338f5f386a7e2af837c125adf-merged.mount: Deactivated successfully.
Dec  2 06:10:24 np0005542249 podman[255814]: 2025-12-02 11:10:24.252152358 +0000 UTC m=+1.487730457 container remove 1124bf2e2c0b9080bdb376fe5fe1c2e7c84b7f635ab60ebfd695d9a530439f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_albattani, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  2 06:10:24 np0005542249 systemd[1]: libpod-conmon-1124bf2e2c0b9080bdb376fe5fe1c2e7c84b7f635ab60ebfd695d9a530439f54.scope: Deactivated successfully.
Dec  2 06:10:24 np0005542249 ceph-osd[88961]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  2 06:10:24 np0005542249 ceph-osd[88961]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 5623 writes, 23K keys, 5623 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5623 writes, 874 syncs, 6.43 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s#012Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55628344d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55628344d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowd
Dec  2 06:10:25 np0005542249 podman[256012]: 2025-12-02 11:10:25.12136831 +0000 UTC m=+0.057262521 container create fc73cb519c0661347adc13f0a44161d5f73aff9151a1c4f33230e9067317edcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_perlman, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:10:25 np0005542249 systemd[1]: Started libpod-conmon-fc73cb519c0661347adc13f0a44161d5f73aff9151a1c4f33230e9067317edcd.scope.
Dec  2 06:10:25 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:10:25 np0005542249 podman[256012]: 2025-12-02 11:10:25.100720302 +0000 UTC m=+0.036614553 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:10:25 np0005542249 podman[256012]: 2025-12-02 11:10:25.21625025 +0000 UTC m=+0.152144491 container init fc73cb519c0661347adc13f0a44161d5f73aff9151a1c4f33230e9067317edcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  2 06:10:25 np0005542249 podman[256012]: 2025-12-02 11:10:25.229792197 +0000 UTC m=+0.165686408 container start fc73cb519c0661347adc13f0a44161d5f73aff9151a1c4f33230e9067317edcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_perlman, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:10:25 np0005542249 podman[256012]: 2025-12-02 11:10:25.233283111 +0000 UTC m=+0.169177322 container attach fc73cb519c0661347adc13f0a44161d5f73aff9151a1c4f33230e9067317edcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  2 06:10:25 np0005542249 bold_perlman[256029]: 167 167
Dec  2 06:10:25 np0005542249 systemd[1]: libpod-fc73cb519c0661347adc13f0a44161d5f73aff9151a1c4f33230e9067317edcd.scope: Deactivated successfully.
Dec  2 06:10:25 np0005542249 podman[256012]: 2025-12-02 11:10:25.239773828 +0000 UTC m=+0.175668059 container died fc73cb519c0661347adc13f0a44161d5f73aff9151a1c4f33230e9067317edcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_perlman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:10:25 np0005542249 systemd[1]: var-lib-containers-storage-overlay-c3a915dac4c94dc440024905b8049fe0fa0137b5893816baf5bf4df628174250-merged.mount: Deactivated successfully.
Dec  2 06:10:25 np0005542249 podman[256012]: 2025-12-02 11:10:25.288678822 +0000 UTC m=+0.224573033 container remove fc73cb519c0661347adc13f0a44161d5f73aff9151a1c4f33230e9067317edcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_perlman, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  2 06:10:25 np0005542249 systemd[1]: libpod-conmon-fc73cb519c0661347adc13f0a44161d5f73aff9151a1c4f33230e9067317edcd.scope: Deactivated successfully.
Dec  2 06:10:25 np0005542249 podman[256026]: 2025-12-02 11:10:25.29968151 +0000 UTC m=+0.124796321 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  2 06:10:25 np0005542249 podman[256068]: 2025-12-02 11:10:25.509644997 +0000 UTC m=+0.067204361 container create 84349bea86a72be0a798c5b9ad2dba8e6c0af8681c943a56118ce4ac4087ade5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  2 06:10:25 np0005542249 systemd[1]: Started libpod-conmon-84349bea86a72be0a798c5b9ad2dba8e6c0af8681c943a56118ce4ac4087ade5.scope.
Dec  2 06:10:25 np0005542249 podman[256068]: 2025-12-02 11:10:25.489015438 +0000 UTC m=+0.046574822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:10:25 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:10:25 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a8c743208981fc3fb35c7a408059126f3d82db2481f7621b29ee5634aea32e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:10:25 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a8c743208981fc3fb35c7a408059126f3d82db2481f7621b29ee5634aea32e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:10:25 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a8c743208981fc3fb35c7a408059126f3d82db2481f7621b29ee5634aea32e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:10:25 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a8c743208981fc3fb35c7a408059126f3d82db2481f7621b29ee5634aea32e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:10:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:25 np0005542249 podman[256068]: 2025-12-02 11:10:25.623286005 +0000 UTC m=+0.180845479 container init 84349bea86a72be0a798c5b9ad2dba8e6c0af8681c943a56118ce4ac4087ade5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:10:25 np0005542249 podman[256068]: 2025-12-02 11:10:25.636866513 +0000 UTC m=+0.194425917 container start 84349bea86a72be0a798c5b9ad2dba8e6c0af8681c943a56118ce4ac4087ade5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:10:25 np0005542249 podman[256068]: 2025-12-02 11:10:25.641396386 +0000 UTC m=+0.198955790 container attach 84349bea86a72be0a798c5b9ad2dba8e6c0af8681c943a56118ce4ac4087ade5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatterjee, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  2 06:10:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:10:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:10:26
Dec  2 06:10:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:10:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:10:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'vms', '.rgw.root', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta']
Dec  2 06:10:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:10:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:10:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:10:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:10:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:10:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:10:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]: {
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:    "0": [
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:        {
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "devices": [
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "/dev/loop3"
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            ],
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "lv_name": "ceph_lv0",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "lv_size": "21470642176",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "name": "ceph_lv0",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "tags": {
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.cluster_name": "ceph",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.crush_device_class": "",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.encrypted": "0",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.osd_id": "0",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.type": "block",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.vdo": "0"
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            },
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "type": "block",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "vg_name": "ceph_vg0"
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:        }
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:    ],
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:    "1": [
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:        {
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "devices": [
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "/dev/loop4"
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            ],
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "lv_name": "ceph_lv1",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "lv_size": "21470642176",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "name": "ceph_lv1",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "tags": {
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.cluster_name": "ceph",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.crush_device_class": "",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.encrypted": "0",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.osd_id": "1",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.type": "block",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.vdo": "0"
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            },
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "type": "block",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "vg_name": "ceph_vg1"
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:        }
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:    ],
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:    "2": [
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:        {
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "devices": [
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "/dev/loop5"
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            ],
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "lv_name": "ceph_lv2",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "lv_size": "21470642176",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "name": "ceph_lv2",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "tags": {
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.cluster_name": "ceph",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.crush_device_class": "",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.encrypted": "0",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.osd_id": "2",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.type": "block",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:                "ceph.vdo": "0"
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            },
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "type": "block",
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:            "vg_name": "ceph_vg2"
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:        }
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]:    ]
Dec  2 06:10:26 np0005542249 crazy_chatterjee[256084]: }
Dec  2 06:10:26 np0005542249 systemd[1]: libpod-84349bea86a72be0a798c5b9ad2dba8e6c0af8681c943a56118ce4ac4087ade5.scope: Deactivated successfully.
Dec  2 06:10:26 np0005542249 podman[256068]: 2025-12-02 11:10:26.472717053 +0000 UTC m=+1.030276457 container died 84349bea86a72be0a798c5b9ad2dba8e6c0af8681c943a56118ce4ac4087ade5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:10:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:10:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:10:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:10:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:10:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:10:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:10:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:10:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:10:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:10:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:10:26 np0005542249 systemd[1]: var-lib-containers-storage-overlay-c5a8c743208981fc3fb35c7a408059126f3d82db2481f7621b29ee5634aea32e-merged.mount: Deactivated successfully.
Dec  2 06:10:26 np0005542249 podman[256068]: 2025-12-02 11:10:26.540523949 +0000 UTC m=+1.098083323 container remove 84349bea86a72be0a798c5b9ad2dba8e6c0af8681c943a56118ce4ac4087ade5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatterjee, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:10:26 np0005542249 systemd[1]: libpod-conmon-84349bea86a72be0a798c5b9ad2dba8e6c0af8681c943a56118ce4ac4087ade5.scope: Deactivated successfully.
Dec  2 06:10:27 np0005542249 podman[256244]: 2025-12-02 11:10:27.34246222 +0000 UTC m=+0.072713831 container create 967d60542fae5ae9030e122243e35b274ca6159d9588642fd6dd6ccbee2a254f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:10:27 np0005542249 systemd[1]: Started libpod-conmon-967d60542fae5ae9030e122243e35b274ca6159d9588642fd6dd6ccbee2a254f.scope.
Dec  2 06:10:27 np0005542249 podman[256244]: 2025-12-02 11:10:27.311593524 +0000 UTC m=+0.041845225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:10:27 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:10:27 np0005542249 podman[256244]: 2025-12-02 11:10:27.43441718 +0000 UTC m=+0.164668831 container init 967d60542fae5ae9030e122243e35b274ca6159d9588642fd6dd6ccbee2a254f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kepler, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 06:10:27 np0005542249 podman[256244]: 2025-12-02 11:10:27.443435004 +0000 UTC m=+0.173686605 container start 967d60542fae5ae9030e122243e35b274ca6159d9588642fd6dd6ccbee2a254f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:10:27 np0005542249 podman[256244]: 2025-12-02 11:10:27.44699766 +0000 UTC m=+0.177249281 container attach 967d60542fae5ae9030e122243e35b274ca6159d9588642fd6dd6ccbee2a254f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  2 06:10:27 np0005542249 hungry_kepler[256260]: 167 167
Dec  2 06:10:27 np0005542249 systemd[1]: libpod-967d60542fae5ae9030e122243e35b274ca6159d9588642fd6dd6ccbee2a254f.scope: Deactivated successfully.
Dec  2 06:10:27 np0005542249 podman[256244]: 2025-12-02 11:10:27.452319395 +0000 UTC m=+0.182571046 container died 967d60542fae5ae9030e122243e35b274ca6159d9588642fd6dd6ccbee2a254f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kepler, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  2 06:10:27 np0005542249 systemd[1]: var-lib-containers-storage-overlay-44899256b615f1b427c72060f4f5bbcc685eb94bec629d44f6f4cbf184086cbd-merged.mount: Deactivated successfully.
Dec  2 06:10:27 np0005542249 podman[256244]: 2025-12-02 11:10:27.490596852 +0000 UTC m=+0.220848453 container remove 967d60542fae5ae9030e122243e35b274ca6159d9588642fd6dd6ccbee2a254f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:10:27 np0005542249 systemd[1]: libpod-conmon-967d60542fae5ae9030e122243e35b274ca6159d9588642fd6dd6ccbee2a254f.scope: Deactivated successfully.
Dec  2 06:10:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:27 np0005542249 podman[256284]: 2025-12-02 11:10:27.717889038 +0000 UTC m=+0.052283787 container create dc440cd62f209e1ba7a207046d15e8474e7558705d6c58ae5e688cec79962197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_matsumoto, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  2 06:10:27 np0005542249 systemd[1]: Started libpod-conmon-dc440cd62f209e1ba7a207046d15e8474e7558705d6c58ae5e688cec79962197.scope.
Dec  2 06:10:27 np0005542249 podman[256284]: 2025-12-02 11:10:27.698856152 +0000 UTC m=+0.033250911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:10:27 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:10:27 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b837abf3b5a089fc0ef51da105b66ed6384924e2ee61383fca94cfbaf572b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:10:27 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b837abf3b5a089fc0ef51da105b66ed6384924e2ee61383fca94cfbaf572b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:10:27 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b837abf3b5a089fc0ef51da105b66ed6384924e2ee61383fca94cfbaf572b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:10:27 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b837abf3b5a089fc0ef51da105b66ed6384924e2ee61383fca94cfbaf572b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:10:27 np0005542249 podman[256284]: 2025-12-02 11:10:27.815665627 +0000 UTC m=+0.150060446 container init dc440cd62f209e1ba7a207046d15e8474e7558705d6c58ae5e688cec79962197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  2 06:10:27 np0005542249 podman[256284]: 2025-12-02 11:10:27.822637575 +0000 UTC m=+0.157032354 container start dc440cd62f209e1ba7a207046d15e8474e7558705d6c58ae5e688cec79962197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_matsumoto, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:10:27 np0005542249 podman[256284]: 2025-12-02 11:10:27.826394976 +0000 UTC m=+0.160789765 container attach dc440cd62f209e1ba7a207046d15e8474e7558705d6c58ae5e688cec79962197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_matsumoto, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:10:28 np0005542249 nova_compute[254900]: 2025-12-02 11:10:28.384 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:10:28 np0005542249 nova_compute[254900]: 2025-12-02 11:10:28.386 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:10:28 np0005542249 nova_compute[254900]: 2025-12-02 11:10:28.387 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 06:10:28 np0005542249 nova_compute[254900]: 2025-12-02 11:10:28.387 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 06:10:28 np0005542249 nova_compute[254900]: 2025-12-02 11:10:28.409 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 06:10:28 np0005542249 nova_compute[254900]: 2025-12-02 11:10:28.410 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:10:28 np0005542249 nova_compute[254900]: 2025-12-02 11:10:28.411 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:10:28 np0005542249 nova_compute[254900]: 2025-12-02 11:10:28.411 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:10:28 np0005542249 nova_compute[254900]: 2025-12-02 11:10:28.411 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:10:28 np0005542249 nova_compute[254900]: 2025-12-02 11:10:28.412 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:10:28 np0005542249 nova_compute[254900]: 2025-12-02 11:10:28.412 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:10:28 np0005542249 nova_compute[254900]: 2025-12-02 11:10:28.412 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 06:10:28 np0005542249 nova_compute[254900]: 2025-12-02 11:10:28.413 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:10:28 np0005542249 nova_compute[254900]: 2025-12-02 11:10:28.439 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:10:28 np0005542249 nova_compute[254900]: 2025-12-02 11:10:28.440 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:10:28 np0005542249 nova_compute[254900]: 2025-12-02 11:10:28.440 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:10:28 np0005542249 nova_compute[254900]: 2025-12-02 11:10:28.441 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:10:28 np0005542249 nova_compute[254900]: 2025-12-02 11:10:28.442 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:10:28 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:10:28 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1076753384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:10:28 np0005542249 fervent_matsumoto[256301]: {
Dec  2 06:10:28 np0005542249 fervent_matsumoto[256301]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:10:28 np0005542249 fervent_matsumoto[256301]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:10:28 np0005542249 fervent_matsumoto[256301]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:10:28 np0005542249 fervent_matsumoto[256301]:        "osd_id": 0,
Dec  2 06:10:28 np0005542249 fervent_matsumoto[256301]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:10:28 np0005542249 fervent_matsumoto[256301]:        "type": "bluestore"
Dec  2 06:10:28 np0005542249 fervent_matsumoto[256301]:    },
Dec  2 06:10:28 np0005542249 fervent_matsumoto[256301]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:10:28 np0005542249 fervent_matsumoto[256301]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:10:28 np0005542249 fervent_matsumoto[256301]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:10:28 np0005542249 fervent_matsumoto[256301]:        "osd_id": 2,
Dec  2 06:10:28 np0005542249 fervent_matsumoto[256301]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:10:28 np0005542249 fervent_matsumoto[256301]:        "type": "bluestore"
Dec  2 06:10:28 np0005542249 fervent_matsumoto[256301]:    },
Dec  2 06:10:28 np0005542249 fervent_matsumoto[256301]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:10:28 np0005542249 fervent_matsumoto[256301]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:10:28 np0005542249 fervent_matsumoto[256301]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:10:28 np0005542249 fervent_matsumoto[256301]:        "osd_id": 1,
Dec  2 06:10:28 np0005542249 fervent_matsumoto[256301]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:10:28 np0005542249 fervent_matsumoto[256301]:        "type": "bluestore"
Dec  2 06:10:28 np0005542249 fervent_matsumoto[256301]:    }
Dec  2 06:10:28 np0005542249 fervent_matsumoto[256301]: }
Dec  2 06:10:28 np0005542249 nova_compute[254900]: 2025-12-02 11:10:28.879 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:10:28 np0005542249 systemd[1]: libpod-dc440cd62f209e1ba7a207046d15e8474e7558705d6c58ae5e688cec79962197.scope: Deactivated successfully.
Dec  2 06:10:28 np0005542249 podman[256284]: 2025-12-02 11:10:28.907307213 +0000 UTC m=+1.241701992 container died dc440cd62f209e1ba7a207046d15e8474e7558705d6c58ae5e688cec79962197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_matsumoto, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:10:28 np0005542249 systemd[1]: libpod-dc440cd62f209e1ba7a207046d15e8474e7558705d6c58ae5e688cec79962197.scope: Consumed 1.090s CPU time.
Dec  2 06:10:28 np0005542249 systemd[1]: var-lib-containers-storage-overlay-b5b837abf3b5a089fc0ef51da105b66ed6384924e2ee61383fca94cfbaf572b2-merged.mount: Deactivated successfully.
Dec  2 06:10:28 np0005542249 podman[256284]: 2025-12-02 11:10:28.968114542 +0000 UTC m=+1.302509281 container remove dc440cd62f209e1ba7a207046d15e8474e7558705d6c58ae5e688cec79962197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  2 06:10:28 np0005542249 systemd[1]: libpod-conmon-dc440cd62f209e1ba7a207046d15e8474e7558705d6c58ae5e688cec79962197.scope: Deactivated successfully.
Dec  2 06:10:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:10:29 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:10:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:10:29 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:10:29 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 71458efb-fabc-489b-9030-edeab6840cad does not exist
Dec  2 06:10:29 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 18714ec7-3417-40b0-a1ad-40eb98021b83 does not exist
Dec  2 06:10:29 np0005542249 nova_compute[254900]: 2025-12-02 11:10:29.099 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:10:29 np0005542249 nova_compute[254900]: 2025-12-02 11:10:29.101 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5145MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:10:29 np0005542249 nova_compute[254900]: 2025-12-02 11:10:29.102 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:10:29 np0005542249 nova_compute[254900]: 2025-12-02 11:10:29.102 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:10:29 np0005542249 nova_compute[254900]: 2025-12-02 11:10:29.183 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:10:29 np0005542249 nova_compute[254900]: 2025-12-02 11:10:29.184 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:10:29 np0005542249 nova_compute[254900]: 2025-12-02 11:10:29.205 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:10:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:10:29 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1905353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:10:29 np0005542249 nova_compute[254900]: 2025-12-02 11:10:29.665 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:10:29 np0005542249 nova_compute[254900]: 2025-12-02 11:10:29.674 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:10:29 np0005542249 nova_compute[254900]: 2025-12-02 11:10:29.916 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:10:29 np0005542249 nova_compute[254900]: 2025-12-02 11:10:29.919 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 06:10:29 np0005542249 nova_compute[254900]: 2025-12-02 11:10:29.919 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:10:30 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:10:30 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:10:30 np0005542249 ceph-osd[89966]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  2 06:10:30 np0005542249 ceph-osd[89966]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 6787 writes, 27K keys, 6787 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 6787 writes, 1246 syncs, 5.45 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556dc92b11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556dc92b11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec  2 06:10:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:10:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Dec  2 06:10:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/829467965' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec  2 06:10:33 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec  2 06:10:33 np0005542249 ceph-mgr[75372]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  2 06:10:33 np0005542249 ceph-mgr[75372]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  2 06:10:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:10:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:10:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:10:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:10:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:10:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:10:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:10:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:10:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:10:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:10:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:10:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:10:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:10:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:10:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:10:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:10:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:10:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:10:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:10:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:10:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:10:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:10:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:10:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:10:36 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  2 06:10:36 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5684 writes, 23K keys, 5684 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5684 writes, 881 syncs, 6.45 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c083b291f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c083b291f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdo
Dec  2 06:10:37 np0005542249 ceph-mgr[75372]: [devicehealth INFO root] Check health
Dec  2 06:10:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:10:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:44 np0005542249 podman[256442]: 2025-12-02 11:10:44.010968114 +0000 UTC m=+0.083839523 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Dec  2 06:10:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:10:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Dec  2 06:10:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3711026725' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec  2 06:10:48 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.14353 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec  2 06:10:48 np0005542249 ceph-mgr[75372]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  2 06:10:48 np0005542249 ceph-mgr[75372]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  2 06:10:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:10:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:52 np0005542249 podman[256462]: 2025-12-02 11:10:52.084312401 +0000 UTC m=+0.156632576 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller)
Dec  2 06:10:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:10:56 np0005542249 podman[256488]: 2025-12-02 11:10:56.009707172 +0000 UTC m=+0.083401331 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  2 06:10:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:10:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:10:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:10:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:10:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:10:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:10:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:10:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:11:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:11:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:11:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:11:13.708168) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673873708197, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1386, "num_deletes": 251, "total_data_size": 2179820, "memory_usage": 2214216, "flush_reason": "Manual Compaction"}
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673873719663, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2137864, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15012, "largest_seqno": 16397, "table_properties": {"data_size": 2131382, "index_size": 3681, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13459, "raw_average_key_size": 19, "raw_value_size": 2118344, "raw_average_value_size": 3096, "num_data_blocks": 169, "num_entries": 684, "num_filter_entries": 684, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764673730, "oldest_key_time": 1764673730, "file_creation_time": 1764673873, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 11552 microseconds, and 4774 cpu microseconds.
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:11:13.719711) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2137864 bytes OK
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:11:13.719742) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:11:13.721482) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:11:13.721523) EVENT_LOG_v1 {"time_micros": 1764673873721513, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:11:13.721551) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2173659, prev total WAL file size 2173659, number of live WAL files 2.
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:11:13.722753) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2087KB)], [35(7295KB)]
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673873722791, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9608423, "oldest_snapshot_seqno": -1}
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4034 keys, 7835080 bytes, temperature: kUnknown
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673873767088, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7835080, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7805660, "index_size": 18238, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 98649, "raw_average_key_size": 24, "raw_value_size": 7730165, "raw_average_value_size": 1916, "num_data_blocks": 772, "num_entries": 4034, "num_filter_entries": 4034, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672515, "oldest_key_time": 0, "file_creation_time": 1764673873, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:11:13.767405) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7835080 bytes
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:11:13.768985) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 216.4 rd, 176.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 7.1 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(8.2) write-amplify(3.7) OK, records in: 4548, records dropped: 514 output_compression: NoCompression
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:11:13.769040) EVENT_LOG_v1 {"time_micros": 1764673873769025, "job": 16, "event": "compaction_finished", "compaction_time_micros": 44392, "compaction_time_cpu_micros": 18817, "output_level": 6, "num_output_files": 1, "total_output_size": 7835080, "num_input_records": 4548, "num_output_records": 4034, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673873769852, "job": 16, "event": "table_file_deletion", "file_number": 37}
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764673873772454, "job": 16, "event": "table_file_deletion", "file_number": 35}
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:11:13.722683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:11:13.772610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:11:13.772619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:11:13.772622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:11:13.772624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:11:13 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:11:13.772626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:11:15 np0005542249 podman[256506]: 2025-12-02 11:11:15.015557356 +0000 UTC m=+0.086731400 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  2 06:11:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:11:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:11:19.822 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:11:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:11:19.823 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:11:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:11:19.823 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:11:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:11:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:23 np0005542249 podman[256528]: 2025-12-02 11:11:23.035170557 +0000 UTC m=+0.104360119 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  2 06:11:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:11:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:11:26
Dec  2 06:11:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:11:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:11:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['default.rgw.log', 'images', 'backups', 'vms', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data']
Dec  2 06:11:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:11:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:11:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:11:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:11:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:11:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:11:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:11:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:11:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:11:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:11:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:11:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:11:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:11:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:11:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:11:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:11:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:11:26 np0005542249 podman[256554]: 2025-12-02 11:11:26.994733305 +0000 UTC m=+0.070870850 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  2 06:11:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:29 np0005542249 nova_compute[254900]: 2025-12-02 11:11:29.910 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:11:29 np0005542249 nova_compute[254900]: 2025-12-02 11:11:29.911 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:11:29 np0005542249 nova_compute[254900]: 2025-12-02 11:11:29.930 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:11:29 np0005542249 nova_compute[254900]: 2025-12-02 11:11:29.931 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 06:11:29 np0005542249 nova_compute[254900]: 2025-12-02 11:11:29.931 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 06:11:29 np0005542249 nova_compute[254900]: 2025-12-02 11:11:29.944 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 06:11:29 np0005542249 nova_compute[254900]: 2025-12-02 11:11:29.944 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:11:29 np0005542249 nova_compute[254900]: 2025-12-02 11:11:29.947 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:11:29 np0005542249 nova_compute[254900]: 2025-12-02 11:11:29.947 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:11:29 np0005542249 nova_compute[254900]: 2025-12-02 11:11:29.948 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:11:29 np0005542249 nova_compute[254900]: 2025-12-02 11:11:29.948 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:11:29 np0005542249 nova_compute[254900]: 2025-12-02 11:11:29.949 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:11:29 np0005542249 nova_compute[254900]: 2025-12-02 11:11:29.949 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 06:11:29 np0005542249 nova_compute[254900]: 2025-12-02 11:11:29.950 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:11:29 np0005542249 nova_compute[254900]: 2025-12-02 11:11:29.970 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:11:29 np0005542249 nova_compute[254900]: 2025-12-02 11:11:29.971 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:11:29 np0005542249 nova_compute[254900]: 2025-12-02 11:11:29.972 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:11:29 np0005542249 nova_compute[254900]: 2025-12-02 11:11:29.972 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:11:29 np0005542249 nova_compute[254900]: 2025-12-02 11:11:29.973 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:11:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec  2 06:11:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  2 06:11:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:11:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:11:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:11:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:11:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:11:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:11:30 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev d691f514-3d03-4b99-acbe-b40d833ed698 does not exist
Dec  2 06:11:30 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 621e4e9e-5293-45a2-941f-6636f9b5358a does not exist
Dec  2 06:11:30 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 34318f53-0494-4dc0-9a18-eb29e71230ad does not exist
Dec  2 06:11:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:11:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:11:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:11:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:11:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:11:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:11:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:11:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/550897185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:11:30 np0005542249 nova_compute[254900]: 2025-12-02 11:11:30.427 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:11:30 np0005542249 nova_compute[254900]: 2025-12-02 11:11:30.578 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:11:30 np0005542249 nova_compute[254900]: 2025-12-02 11:11:30.580 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5176MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:11:30 np0005542249 nova_compute[254900]: 2025-12-02 11:11:30.580 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:11:30 np0005542249 nova_compute[254900]: 2025-12-02 11:11:30.580 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:11:30 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  2 06:11:30 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:11:30 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:11:30 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:11:30 np0005542249 nova_compute[254900]: 2025-12-02 11:11:30.803 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:11:30 np0005542249 nova_compute[254900]: 2025-12-02 11:11:30.804 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:11:30 np0005542249 podman[256865]: 2025-12-02 11:11:30.806314802 +0000 UTC m=+0.061524598 container create f5b17d188deaa49b9c4c866ec5db514f840ab02f41f82cb71d3631ec77e0879c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hopper, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  2 06:11:30 np0005542249 nova_compute[254900]: 2025-12-02 11:11:30.819 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:11:30 np0005542249 systemd[1]: Started libpod-conmon-f5b17d188deaa49b9c4c866ec5db514f840ab02f41f82cb71d3631ec77e0879c.scope.
Dec  2 06:11:30 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:11:30 np0005542249 podman[256865]: 2025-12-02 11:11:30.78741488 +0000 UTC m=+0.042624656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:11:30 np0005542249 podman[256865]: 2025-12-02 11:11:30.901158122 +0000 UTC m=+0.156367918 container init f5b17d188deaa49b9c4c866ec5db514f840ab02f41f82cb71d3631ec77e0879c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hopper, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  2 06:11:30 np0005542249 podman[256865]: 2025-12-02 11:11:30.910486125 +0000 UTC m=+0.165695891 container start f5b17d188deaa49b9c4c866ec5db514f840ab02f41f82cb71d3631ec77e0879c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hopper, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:11:30 np0005542249 podman[256865]: 2025-12-02 11:11:30.914377711 +0000 UTC m=+0.169587487 container attach f5b17d188deaa49b9c4c866ec5db514f840ab02f41f82cb71d3631ec77e0879c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  2 06:11:30 np0005542249 laughing_hopper[256882]: 167 167
Dec  2 06:11:30 np0005542249 systemd[1]: libpod-f5b17d188deaa49b9c4c866ec5db514f840ab02f41f82cb71d3631ec77e0879c.scope: Deactivated successfully.
Dec  2 06:11:30 np0005542249 conmon[256882]: conmon f5b17d188deaa49b9c4c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f5b17d188deaa49b9c4c866ec5db514f840ab02f41f82cb71d3631ec77e0879c.scope/container/memory.events
Dec  2 06:11:30 np0005542249 podman[256865]: 2025-12-02 11:11:30.923129788 +0000 UTC m=+0.178339644 container died f5b17d188deaa49b9c4c866ec5db514f840ab02f41f82cb71d3631ec77e0879c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:11:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:11:30 np0005542249 systemd[1]: var-lib-containers-storage-overlay-6b49ff3f53c47290aa76a742c2d25a7357ab4c4d391a1034d11acfabaa40490d-merged.mount: Deactivated successfully.
Dec  2 06:11:30 np0005542249 podman[256865]: 2025-12-02 11:11:30.983173605 +0000 UTC m=+0.238383381 container remove f5b17d188deaa49b9c4c866ec5db514f840ab02f41f82cb71d3631ec77e0879c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  2 06:11:30 np0005542249 systemd[1]: libpod-conmon-f5b17d188deaa49b9c4c866ec5db514f840ab02f41f82cb71d3631ec77e0879c.scope: Deactivated successfully.
Dec  2 06:11:31 np0005542249 podman[256924]: 2025-12-02 11:11:31.156677896 +0000 UTC m=+0.040581980 container create b7b3abae54ecc86a36ea0f35f708a62376aa7ba082806646069f841dd9c9b69a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:11:31 np0005542249 systemd[1]: Started libpod-conmon-b7b3abae54ecc86a36ea0f35f708a62376aa7ba082806646069f841dd9c9b69a.scope.
Dec  2 06:11:31 np0005542249 podman[256924]: 2025-12-02 11:11:31.139326626 +0000 UTC m=+0.023230730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:11:31 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:11:31 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b10b0b962f6632dd840c0214cb588b7ec617079d753733e86d32c35f6a41901a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:11:31 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b10b0b962f6632dd840c0214cb588b7ec617079d753733e86d32c35f6a41901a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:11:31 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b10b0b962f6632dd840c0214cb588b7ec617079d753733e86d32c35f6a41901a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:11:31 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b10b0b962f6632dd840c0214cb588b7ec617079d753733e86d32c35f6a41901a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:11:31 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b10b0b962f6632dd840c0214cb588b7ec617079d753733e86d32c35f6a41901a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:11:31 np0005542249 podman[256924]: 2025-12-02 11:11:31.25942699 +0000 UTC m=+0.143331094 container init b7b3abae54ecc86a36ea0f35f708a62376aa7ba082806646069f841dd9c9b69a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bardeen, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:11:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:11:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1270866197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:11:31 np0005542249 podman[256924]: 2025-12-02 11:11:31.267311314 +0000 UTC m=+0.151215398 container start b7b3abae54ecc86a36ea0f35f708a62376aa7ba082806646069f841dd9c9b69a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bardeen, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  2 06:11:31 np0005542249 podman[256924]: 2025-12-02 11:11:31.271222389 +0000 UTC m=+0.155126473 container attach b7b3abae54ecc86a36ea0f35f708a62376aa7ba082806646069f841dd9c9b69a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bardeen, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  2 06:11:31 np0005542249 nova_compute[254900]: 2025-12-02 11:11:31.285 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:11:31 np0005542249 nova_compute[254900]: 2025-12-02 11:11:31.292 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:11:31 np0005542249 nova_compute[254900]: 2025-12-02 11:11:31.394 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:11:31 np0005542249 nova_compute[254900]: 2025-12-02 11:11:31.395 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 06:11:31 np0005542249 nova_compute[254900]: 2025-12-02 11:11:31.396 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:11:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:32 np0005542249 amazing_bardeen[256941]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:11:32 np0005542249 amazing_bardeen[256941]: --> relative data size: 1.0
Dec  2 06:11:32 np0005542249 amazing_bardeen[256941]: --> All data devices are unavailable
Dec  2 06:11:32 np0005542249 systemd[1]: libpod-b7b3abae54ecc86a36ea0f35f708a62376aa7ba082806646069f841dd9c9b69a.scope: Deactivated successfully.
Dec  2 06:11:32 np0005542249 systemd[1]: libpod-b7b3abae54ecc86a36ea0f35f708a62376aa7ba082806646069f841dd9c9b69a.scope: Consumed 1.077s CPU time.
Dec  2 06:11:32 np0005542249 podman[256972]: 2025-12-02 11:11:32.43895063 +0000 UTC m=+0.032586854 container died b7b3abae54ecc86a36ea0f35f708a62376aa7ba082806646069f841dd9c9b69a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  2 06:11:32 np0005542249 systemd[1]: var-lib-containers-storage-overlay-b10b0b962f6632dd840c0214cb588b7ec617079d753733e86d32c35f6a41901a-merged.mount: Deactivated successfully.
Dec  2 06:11:32 np0005542249 podman[256972]: 2025-12-02 11:11:32.492144191 +0000 UTC m=+0.085780415 container remove b7b3abae54ecc86a36ea0f35f708a62376aa7ba082806646069f841dd9c9b69a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:11:32 np0005542249 systemd[1]: libpod-conmon-b7b3abae54ecc86a36ea0f35f708a62376aa7ba082806646069f841dd9c9b69a.scope: Deactivated successfully.
Dec  2 06:11:33 np0005542249 podman[257128]: 2025-12-02 11:11:33.182429766 +0000 UTC m=+0.037773685 container create f65c0524639ec28babc34ee2f961e772e873781ba67ddf375e65cc031e70f3ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_haslett, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:11:33 np0005542249 systemd[1]: Started libpod-conmon-f65c0524639ec28babc34ee2f961e772e873781ba67ddf375e65cc031e70f3ae.scope.
Dec  2 06:11:33 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:11:33 np0005542249 podman[257128]: 2025-12-02 11:11:33.242568375 +0000 UTC m=+0.097912304 container init f65c0524639ec28babc34ee2f961e772e873781ba67ddf375e65cc031e70f3ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_haslett, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  2 06:11:33 np0005542249 podman[257128]: 2025-12-02 11:11:33.250645874 +0000 UTC m=+0.105989783 container start f65c0524639ec28babc34ee2f961e772e873781ba67ddf375e65cc031e70f3ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_haslett, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:11:33 np0005542249 amazing_haslett[257144]: 167 167
Dec  2 06:11:33 np0005542249 systemd[1]: libpod-f65c0524639ec28babc34ee2f961e772e873781ba67ddf375e65cc031e70f3ae.scope: Deactivated successfully.
Dec  2 06:11:33 np0005542249 podman[257128]: 2025-12-02 11:11:33.254837048 +0000 UTC m=+0.110181017 container attach f65c0524639ec28babc34ee2f961e772e873781ba67ddf375e65cc031e70f3ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:11:33 np0005542249 podman[257128]: 2025-12-02 11:11:33.256068181 +0000 UTC m=+0.111412110 container died f65c0524639ec28babc34ee2f961e772e873781ba67ddf375e65cc031e70f3ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec  2 06:11:33 np0005542249 podman[257128]: 2025-12-02 11:11:33.165426415 +0000 UTC m=+0.020770364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:11:33 np0005542249 systemd[1]: var-lib-containers-storage-overlay-de4e904f140409b471560a4e31f792f27188d31b2b8cc85a7d92ac7cb477b235-merged.mount: Deactivated successfully.
Dec  2 06:11:33 np0005542249 podman[257128]: 2025-12-02 11:11:33.292693003 +0000 UTC m=+0.148036922 container remove f65c0524639ec28babc34ee2f961e772e873781ba67ddf375e65cc031e70f3ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  2 06:11:33 np0005542249 systemd[1]: libpod-conmon-f65c0524639ec28babc34ee2f961e772e873781ba67ddf375e65cc031e70f3ae.scope: Deactivated successfully.
Dec  2 06:11:33 np0005542249 podman[257169]: 2025-12-02 11:11:33.463926594 +0000 UTC m=+0.044365574 container create 5460c292897df9b902772e5238472a995c82f2013b3e23f4232e4a198d865376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_archimedes, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:11:33 np0005542249 systemd[1]: Started libpod-conmon-5460c292897df9b902772e5238472a995c82f2013b3e23f4232e4a198d865376.scope.
Dec  2 06:11:33 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:11:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/522fc1db66fcca63ee36b50de82641b1b667b7b83bea8e165fd5752bf51256f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:11:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/522fc1db66fcca63ee36b50de82641b1b667b7b83bea8e165fd5752bf51256f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:11:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/522fc1db66fcca63ee36b50de82641b1b667b7b83bea8e165fd5752bf51256f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:11:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/522fc1db66fcca63ee36b50de82641b1b667b7b83bea8e165fd5752bf51256f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:11:33 np0005542249 podman[257169]: 2025-12-02 11:11:33.537563599 +0000 UTC m=+0.118002599 container init 5460c292897df9b902772e5238472a995c82f2013b3e23f4232e4a198d865376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_archimedes, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:11:33 np0005542249 podman[257169]: 2025-12-02 11:11:33.444947139 +0000 UTC m=+0.025386149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:11:33 np0005542249 podman[257169]: 2025-12-02 11:11:33.544027224 +0000 UTC m=+0.124466204 container start 5460c292897df9b902772e5238472a995c82f2013b3e23f4232e4a198d865376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:11:33 np0005542249 podman[257169]: 2025-12-02 11:11:33.547380725 +0000 UTC m=+0.127819705 container attach 5460c292897df9b902772e5238472a995c82f2013b3e23f4232e4a198d865376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_archimedes, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:11:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]: {
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:    "0": [
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:        {
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "devices": [
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "/dev/loop3"
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            ],
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "lv_name": "ceph_lv0",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "lv_size": "21470642176",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "name": "ceph_lv0",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "tags": {
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.cluster_name": "ceph",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.crush_device_class": "",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.encrypted": "0",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.osd_id": "0",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.type": "block",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.vdo": "0"
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            },
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "type": "block",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "vg_name": "ceph_vg0"
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:        }
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:    ],
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:    "1": [
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:        {
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "devices": [
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "/dev/loop4"
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            ],
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "lv_name": "ceph_lv1",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "lv_size": "21470642176",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "name": "ceph_lv1",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "tags": {
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.cluster_name": "ceph",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.crush_device_class": "",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.encrypted": "0",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.osd_id": "1",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.type": "block",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.vdo": "0"
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            },
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "type": "block",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "vg_name": "ceph_vg1"
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:        }
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:    ],
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:    "2": [
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:        {
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "devices": [
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "/dev/loop5"
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            ],
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "lv_name": "ceph_lv2",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "lv_size": "21470642176",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "name": "ceph_lv2",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "tags": {
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.cluster_name": "ceph",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.crush_device_class": "",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.encrypted": "0",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.osd_id": "2",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.type": "block",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:                "ceph.vdo": "0"
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            },
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "type": "block",
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:            "vg_name": "ceph_vg2"
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:        }
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]:    ]
Dec  2 06:11:34 np0005542249 condescending_archimedes[257185]: }
Dec  2 06:11:34 np0005542249 systemd[1]: libpod-5460c292897df9b902772e5238472a995c82f2013b3e23f4232e4a198d865376.scope: Deactivated successfully.
Dec  2 06:11:34 np0005542249 podman[257169]: 2025-12-02 11:11:34.36281038 +0000 UTC m=+0.943249360 container died 5460c292897df9b902772e5238472a995c82f2013b3e23f4232e4a198d865376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_archimedes, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  2 06:11:34 np0005542249 systemd[1]: var-lib-containers-storage-overlay-522fc1db66fcca63ee36b50de82641b1b667b7b83bea8e165fd5752bf51256f7-merged.mount: Deactivated successfully.
Dec  2 06:11:34 np0005542249 podman[257169]: 2025-12-02 11:11:34.424333766 +0000 UTC m=+1.004772756 container remove 5460c292897df9b902772e5238472a995c82f2013b3e23f4232e4a198d865376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:11:34 np0005542249 systemd[1]: libpod-conmon-5460c292897df9b902772e5238472a995c82f2013b3e23f4232e4a198d865376.scope: Deactivated successfully.
Dec  2 06:11:35 np0005542249 podman[257347]: 2025-12-02 11:11:35.240264185 +0000 UTC m=+0.045397591 container create 88ac36f9c300a97bdcf952436ec11e09e36e6a3f8a743ae82a0f4628ded2f4c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mcnulty, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  2 06:11:35 np0005542249 systemd[1]: Started libpod-conmon-88ac36f9c300a97bdcf952436ec11e09e36e6a3f8a743ae82a0f4628ded2f4c8.scope.
Dec  2 06:11:35 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:11:35 np0005542249 podman[257347]: 2025-12-02 11:11:35.222232627 +0000 UTC m=+0.027366053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:11:35 np0005542249 podman[257347]: 2025-12-02 11:11:35.327691705 +0000 UTC m=+0.132825131 container init 88ac36f9c300a97bdcf952436ec11e09e36e6a3f8a743ae82a0f4628ded2f4c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:11:35 np0005542249 podman[257347]: 2025-12-02 11:11:35.336335379 +0000 UTC m=+0.141468805 container start 88ac36f9c300a97bdcf952436ec11e09e36e6a3f8a743ae82a0f4628ded2f4c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mcnulty, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  2 06:11:35 np0005542249 podman[257347]: 2025-12-02 11:11:35.340498592 +0000 UTC m=+0.145632018 container attach 88ac36f9c300a97bdcf952436ec11e09e36e6a3f8a743ae82a0f4628ded2f4c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mcnulty, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:11:35 np0005542249 laughing_mcnulty[257364]: 167 167
Dec  2 06:11:35 np0005542249 systemd[1]: libpod-88ac36f9c300a97bdcf952436ec11e09e36e6a3f8a743ae82a0f4628ded2f4c8.scope: Deactivated successfully.
Dec  2 06:11:35 np0005542249 podman[257347]: 2025-12-02 11:11:35.343954035 +0000 UTC m=+0.149087441 container died 88ac36f9c300a97bdcf952436ec11e09e36e6a3f8a743ae82a0f4628ded2f4c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:11:35 np0005542249 systemd[1]: var-lib-containers-storage-overlay-4b3bd732366d63be47827c91d3d8139857e735dfb886ec0bbe71204805e28d46-merged.mount: Deactivated successfully.
Dec  2 06:11:35 np0005542249 podman[257347]: 2025-12-02 11:11:35.37953804 +0000 UTC m=+0.184671436 container remove 88ac36f9c300a97bdcf952436ec11e09e36e6a3f8a743ae82a0f4628ded2f4c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  2 06:11:35 np0005542249 systemd[1]: libpod-conmon-88ac36f9c300a97bdcf952436ec11e09e36e6a3f8a743ae82a0f4628ded2f4c8.scope: Deactivated successfully.
Dec  2 06:11:35 np0005542249 podman[257388]: 2025-12-02 11:11:35.575470159 +0000 UTC m=+0.049090991 container create ed3b0fd9f074e7fa973b3d6ac38b1f0f6130d856bce0a4f2152626a2f2425bf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:11:35 np0005542249 systemd[1]: Started libpod-conmon-ed3b0fd9f074e7fa973b3d6ac38b1f0f6130d856bce0a4f2152626a2f2425bf2.scope.
Dec  2 06:11:35 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:11:35 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d047be6560688e8041712ddcd1414e114c4b0d0ce435de4cd826f1c94bb20047/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:11:35 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d047be6560688e8041712ddcd1414e114c4b0d0ce435de4cd826f1c94bb20047/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:11:35 np0005542249 podman[257388]: 2025-12-02 11:11:35.556929346 +0000 UTC m=+0.030550208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:11:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:35 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d047be6560688e8041712ddcd1414e114c4b0d0ce435de4cd826f1c94bb20047/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:11:35 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d047be6560688e8041712ddcd1414e114c4b0d0ce435de4cd826f1c94bb20047/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:11:35 np0005542249 podman[257388]: 2025-12-02 11:11:35.664782778 +0000 UTC m=+0.138403630 container init ed3b0fd9f074e7fa973b3d6ac38b1f0f6130d856bce0a4f2152626a2f2425bf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldberg, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  2 06:11:35 np0005542249 podman[257388]: 2025-12-02 11:11:35.673252447 +0000 UTC m=+0.146873279 container start ed3b0fd9f074e7fa973b3d6ac38b1f0f6130d856bce0a4f2152626a2f2425bf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldberg, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:11:35 np0005542249 podman[257388]: 2025-12-02 11:11:35.676355772 +0000 UTC m=+0.149976624 container attach ed3b0fd9f074e7fa973b3d6ac38b1f0f6130d856bce0a4f2152626a2f2425bf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  2 06:11:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:11:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:11:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:11:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:11:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:11:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:11:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:11:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:11:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:11:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:11:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:11:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:11:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:11:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:11:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:11:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:11:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:11:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:11:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:11:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:11:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:11:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:11:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:11:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:11:36 np0005542249 flamboyant_goldberg[257405]: {
Dec  2 06:11:36 np0005542249 flamboyant_goldberg[257405]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:11:36 np0005542249 flamboyant_goldberg[257405]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:11:36 np0005542249 flamboyant_goldberg[257405]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:11:36 np0005542249 flamboyant_goldberg[257405]:        "osd_id": 0,
Dec  2 06:11:36 np0005542249 flamboyant_goldberg[257405]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:11:36 np0005542249 flamboyant_goldberg[257405]:        "type": "bluestore"
Dec  2 06:11:36 np0005542249 flamboyant_goldberg[257405]:    },
Dec  2 06:11:36 np0005542249 flamboyant_goldberg[257405]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:11:36 np0005542249 flamboyant_goldberg[257405]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:11:36 np0005542249 flamboyant_goldberg[257405]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:11:36 np0005542249 flamboyant_goldberg[257405]:        "osd_id": 2,
Dec  2 06:11:36 np0005542249 flamboyant_goldberg[257405]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:11:36 np0005542249 flamboyant_goldberg[257405]:        "type": "bluestore"
Dec  2 06:11:36 np0005542249 flamboyant_goldberg[257405]:    },
Dec  2 06:11:36 np0005542249 flamboyant_goldberg[257405]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:11:36 np0005542249 flamboyant_goldberg[257405]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:11:36 np0005542249 flamboyant_goldberg[257405]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:11:36 np0005542249 flamboyant_goldberg[257405]:        "osd_id": 1,
Dec  2 06:11:36 np0005542249 flamboyant_goldberg[257405]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:11:36 np0005542249 flamboyant_goldberg[257405]:        "type": "bluestore"
Dec  2 06:11:36 np0005542249 flamboyant_goldberg[257405]:    }
Dec  2 06:11:36 np0005542249 flamboyant_goldberg[257405]: }
Dec  2 06:11:36 np0005542249 systemd[1]: libpod-ed3b0fd9f074e7fa973b3d6ac38b1f0f6130d856bce0a4f2152626a2f2425bf2.scope: Deactivated successfully.
Dec  2 06:11:36 np0005542249 systemd[1]: libpod-ed3b0fd9f074e7fa973b3d6ac38b1f0f6130d856bce0a4f2152626a2f2425bf2.scope: Consumed 1.079s CPU time.
Dec  2 06:11:36 np0005542249 podman[257388]: 2025-12-02 11:11:36.746586231 +0000 UTC m=+1.220207103 container died ed3b0fd9f074e7fa973b3d6ac38b1f0f6130d856bce0a4f2152626a2f2425bf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldberg, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:11:36 np0005542249 systemd[1]: var-lib-containers-storage-overlay-d047be6560688e8041712ddcd1414e114c4b0d0ce435de4cd826f1c94bb20047-merged.mount: Deactivated successfully.
Dec  2 06:11:36 np0005542249 podman[257388]: 2025-12-02 11:11:36.812736473 +0000 UTC m=+1.286357305 container remove ed3b0fd9f074e7fa973b3d6ac38b1f0f6130d856bce0a4f2152626a2f2425bf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  2 06:11:36 np0005542249 systemd[1]: libpod-conmon-ed3b0fd9f074e7fa973b3d6ac38b1f0f6130d856bce0a4f2152626a2f2425bf2.scope: Deactivated successfully.
Dec  2 06:11:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:11:36 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:11:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:11:36 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:11:36 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 40c7cdb6-0ff5-4bfb-a0f5-4cc4211b5e30 does not exist
Dec  2 06:11:36 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 0e9244cb-50a2-4725-9bef-9dada1bc4f00 does not exist
Dec  2 06:11:36 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:11:36 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:11:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:11:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:43 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:11:43.694 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:23:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:25:d0:a1:75:ed'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:11:43 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:11:43.697 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 06:11:43 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:11:43.699 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:11:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:11:46 np0005542249 podman[257502]: 2025-12-02 11:11:46.04235955 +0000 UTC m=+0.103369712 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec  2 06:11:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:11:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2744320311' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:11:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:11:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2744320311' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:11:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:11:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:54 np0005542249 podman[257523]: 2025-12-02 11:11:54.037510106 +0000 UTC m=+0.107812342 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:11:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:11:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:11:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:11:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:11:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:11:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:11:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:11:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:11:58 np0005542249 podman[257550]: 2025-12-02 11:11:58.013465008 +0000 UTC m=+0.081170590 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 06:11:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:12:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:12:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:12:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Dec  2 06:12:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  2 06:12:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:12:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  2 06:12:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  2 06:12:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:12:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  2 06:12:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  2 06:12:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Dec  2 06:12:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:12:16 np0005542249 podman[257571]: 2025-12-02 11:12:16.990183856 +0000 UTC m=+0.064707444 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 06:12:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:12:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:12:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:12:19.823 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:12:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:12:19.823 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:12:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:12:19.823 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:12:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:12:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:12:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:12:25 np0005542249 podman[257592]: 2025-12-02 11:12:25.078108377 +0000 UTC m=+0.149761659 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  2 06:12:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:12:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:12:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:12:26
Dec  2 06:12:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:12:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:12:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'default.rgw.log', 'volumes', '.rgw.root', 'vms', '.mgr', 'cephfs.cephfs.data', 'backups', 'default.rgw.control']
Dec  2 06:12:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:12:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:12:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:12:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:12:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:12:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:12:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:12:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:12:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:12:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:12:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:12:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:12:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:12:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:12:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:12:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:12:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:12:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:12:29 np0005542249 podman[257618]: 2025-12-02 11:12:29.010870169 +0000 UTC m=+0.085356054 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec  2 06:12:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:12:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:12:31 np0005542249 nova_compute[254900]: 2025-12-02 11:12:31.397 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:12:31 np0005542249 nova_compute[254900]: 2025-12-02 11:12:31.398 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:12:31 np0005542249 nova_compute[254900]: 2025-12-02 11:12:31.398 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 06:12:31 np0005542249 nova_compute[254900]: 2025-12-02 11:12:31.398 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 06:12:31 np0005542249 nova_compute[254900]: 2025-12-02 11:12:31.428 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 06:12:31 np0005542249 nova_compute[254900]: 2025-12-02 11:12:31.428 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:12:31 np0005542249 nova_compute[254900]: 2025-12-02 11:12:31.429 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:12:31 np0005542249 nova_compute[254900]: 2025-12-02 11:12:31.429 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:12:31 np0005542249 nova_compute[254900]: 2025-12-02 11:12:31.429 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:12:31 np0005542249 nova_compute[254900]: 2025-12-02 11:12:31.429 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:12:31 np0005542249 nova_compute[254900]: 2025-12-02 11:12:31.429 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:12:31 np0005542249 nova_compute[254900]: 2025-12-02 11:12:31.430 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 06:12:31 np0005542249 nova_compute[254900]: 2025-12-02 11:12:31.430 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:12:31 np0005542249 nova_compute[254900]: 2025-12-02 11:12:31.461 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:12:31 np0005542249 nova_compute[254900]: 2025-12-02 11:12:31.461 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:12:31 np0005542249 nova_compute[254900]: 2025-12-02 11:12:31.461 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:12:31 np0005542249 nova_compute[254900]: 2025-12-02 11:12:31.462 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:12:31 np0005542249 nova_compute[254900]: 2025-12-02 11:12:31.462 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:12:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:12:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:12:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/207031500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:12:31 np0005542249 nova_compute[254900]: 2025-12-02 11:12:31.945 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:12:32 np0005542249 nova_compute[254900]: 2025-12-02 11:12:32.132 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:12:32 np0005542249 nova_compute[254900]: 2025-12-02 11:12:32.133 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5190MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:12:32 np0005542249 nova_compute[254900]: 2025-12-02 11:12:32.134 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:12:32 np0005542249 nova_compute[254900]: 2025-12-02 11:12:32.134 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:12:32 np0005542249 nova_compute[254900]: 2025-12-02 11:12:32.212 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:12:32 np0005542249 nova_compute[254900]: 2025-12-02 11:12:32.213 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:12:32 np0005542249 nova_compute[254900]: 2025-12-02 11:12:32.230 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:12:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:12:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2927776162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:12:32 np0005542249 nova_compute[254900]: 2025-12-02 11:12:32.692 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:12:32 np0005542249 nova_compute[254900]: 2025-12-02 11:12:32.700 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  2 06:12:32 np0005542249 nova_compute[254900]: 2025-12-02 11:12:32.733 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  2 06:12:32 np0005542249 nova_compute[254900]: 2025-12-02 11:12:32.736 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  2 06:12:32 np0005542249 nova_compute[254900]: 2025-12-02 11:12:32.736 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:12:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:12:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:12:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:12:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:12:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:12:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:12:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:12:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:12:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:12:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:12:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:12:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:12:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:12:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:12:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:12:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:12:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:12:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:12:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:12:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:12:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:12:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:12:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:12:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:12:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:12:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:12:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:12:37 np0005542249 podman[257852]: 2025-12-02 11:12:37.951440664 +0000 UTC m=+0.091490319 container exec cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:12:38 np0005542249 podman[257852]: 2025-12-02 11:12:38.074538348 +0000 UTC m=+0.214588013 container exec_died cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  2 06:12:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:12:38 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:12:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:12:38 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:12:39 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:12:39 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:12:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:12:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:12:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:12:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:12:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:12:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:12:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:12:40 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 9d1680aa-4ce9-46c1-928c-157a781a6739 does not exist
Dec  2 06:12:40 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 106c1d95-195f-4201-b800-49a5271ed121 does not exist
Dec  2 06:12:40 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev cdb80258-f63d-4821-b7e9-94a8b815eca6 does not exist
Dec  2 06:12:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:12:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:12:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:12:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:12:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:12:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:12:40 np0005542249 podman[258288]: 2025-12-02 11:12:40.872947135 +0000 UTC m=+0.070531042 container create d919ff0c1b09972d9f22ce21629adcf3a8e49920e71f96a490f8fa04602e3586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_nightingale, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  2 06:12:40 np0005542249 systemd[1]: Started libpod-conmon-d919ff0c1b09972d9f22ce21629adcf3a8e49920e71f96a490f8fa04602e3586.scope.
Dec  2 06:12:40 np0005542249 podman[258288]: 2025-12-02 11:12:40.844166735 +0000 UTC m=+0.041750702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:12:40 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:12:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:12:40 np0005542249 podman[258288]: 2025-12-02 11:12:40.970475186 +0000 UTC m=+0.168059143 container init d919ff0c1b09972d9f22ce21629adcf3a8e49920e71f96a490f8fa04602e3586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_nightingale, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  2 06:12:40 np0005542249 podman[258288]: 2025-12-02 11:12:40.984243609 +0000 UTC m=+0.181827486 container start d919ff0c1b09972d9f22ce21629adcf3a8e49920e71f96a490f8fa04602e3586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_nightingale, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:12:40 np0005542249 podman[258288]: 2025-12-02 11:12:40.988074493 +0000 UTC m=+0.185658410 container attach d919ff0c1b09972d9f22ce21629adcf3a8e49920e71f96a490f8fa04602e3586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Dec  2 06:12:40 np0005542249 hopeful_nightingale[258304]: 167 167
Dec  2 06:12:40 np0005542249 systemd[1]: libpod-d919ff0c1b09972d9f22ce21629adcf3a8e49920e71f96a490f8fa04602e3586.scope: Deactivated successfully.
Dec  2 06:12:40 np0005542249 podman[258288]: 2025-12-02 11:12:40.992846832 +0000 UTC m=+0.190430729 container died d919ff0c1b09972d9f22ce21629adcf3a8e49920e71f96a490f8fa04602e3586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:12:41 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:12:41 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:12:41 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:12:41 np0005542249 systemd[1]: var-lib-containers-storage-overlay-08fa4ea534b8d320fd57bf277332c49559a04849e85d9212fa7868fdbae52747-merged.mount: Deactivated successfully.
Dec  2 06:12:41 np0005542249 podman[258288]: 2025-12-02 11:12:41.051585663 +0000 UTC m=+0.249169550 container remove d919ff0c1b09972d9f22ce21629adcf3a8e49920e71f96a490f8fa04602e3586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_nightingale, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:12:41 np0005542249 systemd[1]: libpod-conmon-d919ff0c1b09972d9f22ce21629adcf3a8e49920e71f96a490f8fa04602e3586.scope: Deactivated successfully.
Dec  2 06:12:41 np0005542249 podman[258328]: 2025-12-02 11:12:41.215867593 +0000 UTC m=+0.044178788 container create 1e3b7f01a301cd97ad6d4b697e8bd8d502ad1fb8b4838b14b93ef631e9b0917a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_napier, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:12:41 np0005542249 systemd[1]: Started libpod-conmon-1e3b7f01a301cd97ad6d4b697e8bd8d502ad1fb8b4838b14b93ef631e9b0917a.scope.
Dec  2 06:12:41 np0005542249 podman[258328]: 2025-12-02 11:12:41.196614122 +0000 UTC m=+0.024925357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:12:41 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:12:41 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ba0426ea6048dad0025ece585f4befd233ece6d3d4ab51c1c7113bbaf8eb3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:12:41 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ba0426ea6048dad0025ece585f4befd233ece6d3d4ab51c1c7113bbaf8eb3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:12:41 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ba0426ea6048dad0025ece585f4befd233ece6d3d4ab51c1c7113bbaf8eb3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:12:41 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ba0426ea6048dad0025ece585f4befd233ece6d3d4ab51c1c7113bbaf8eb3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:12:41 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ba0426ea6048dad0025ece585f4befd233ece6d3d4ab51c1c7113bbaf8eb3c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:12:41 np0005542249 podman[258328]: 2025-12-02 11:12:41.328930196 +0000 UTC m=+0.157241461 container init 1e3b7f01a301cd97ad6d4b697e8bd8d502ad1fb8b4838b14b93ef631e9b0917a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_napier, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  2 06:12:41 np0005542249 podman[258328]: 2025-12-02 11:12:41.339590204 +0000 UTC m=+0.167901409 container start 1e3b7f01a301cd97ad6d4b697e8bd8d502ad1fb8b4838b14b93ef631e9b0917a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_napier, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:12:41 np0005542249 podman[258328]: 2025-12-02 11:12:41.343279804 +0000 UTC m=+0.171591049 container attach 1e3b7f01a301cd97ad6d4b697e8bd8d502ad1fb8b4838b14b93ef631e9b0917a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  2 06:12:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:12:42 np0005542249 jovial_napier[258345]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:12:42 np0005542249 jovial_napier[258345]: --> relative data size: 1.0
Dec  2 06:12:42 np0005542249 jovial_napier[258345]: --> All data devices are unavailable
Dec  2 06:12:42 np0005542249 systemd[1]: libpod-1e3b7f01a301cd97ad6d4b697e8bd8d502ad1fb8b4838b14b93ef631e9b0917a.scope: Deactivated successfully.
Dec  2 06:12:42 np0005542249 podman[258328]: 2025-12-02 11:12:42.473308532 +0000 UTC m=+1.301619727 container died 1e3b7f01a301cd97ad6d4b697e8bd8d502ad1fb8b4838b14b93ef631e9b0917a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_napier, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:12:42 np0005542249 systemd[1]: libpod-1e3b7f01a301cd97ad6d4b697e8bd8d502ad1fb8b4838b14b93ef631e9b0917a.scope: Consumed 1.076s CPU time.
Dec  2 06:12:42 np0005542249 systemd[1]: var-lib-containers-storage-overlay-a8ba0426ea6048dad0025ece585f4befd233ece6d3d4ab51c1c7113bbaf8eb3c-merged.mount: Deactivated successfully.
Dec  2 06:12:42 np0005542249 podman[258328]: 2025-12-02 11:12:42.543761589 +0000 UTC m=+1.372072784 container remove 1e3b7f01a301cd97ad6d4b697e8bd8d502ad1fb8b4838b14b93ef631e9b0917a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_napier, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:12:42 np0005542249 systemd[1]: libpod-conmon-1e3b7f01a301cd97ad6d4b697e8bd8d502ad1fb8b4838b14b93ef631e9b0917a.scope: Deactivated successfully.
Dec  2 06:12:43 np0005542249 podman[258526]: 2025-12-02 11:12:43.294350849 +0000 UTC m=+0.055654248 container create de28916ceb9c91a56bcd3931ecbcd3d3dd198380da88ca357554e083d013818c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ishizaka, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Dec  2 06:12:43 np0005542249 systemd[1]: Started libpod-conmon-de28916ceb9c91a56bcd3931ecbcd3d3dd198380da88ca357554e083d013818c.scope.
Dec  2 06:12:43 np0005542249 podman[258526]: 2025-12-02 11:12:43.266056643 +0000 UTC m=+0.027360052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:12:43 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:12:43 np0005542249 podman[258526]: 2025-12-02 11:12:43.401829731 +0000 UTC m=+0.163133130 container init de28916ceb9c91a56bcd3931ecbcd3d3dd198380da88ca357554e083d013818c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:12:43 np0005542249 podman[258526]: 2025-12-02 11:12:43.409438526 +0000 UTC m=+0.170741945 container start de28916ceb9c91a56bcd3931ecbcd3d3dd198380da88ca357554e083d013818c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ishizaka, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  2 06:12:43 np0005542249 podman[258526]: 2025-12-02 11:12:43.413400764 +0000 UTC m=+0.174704143 container attach de28916ceb9c91a56bcd3931ecbcd3d3dd198380da88ca357554e083d013818c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  2 06:12:43 np0005542249 youthful_ishizaka[258543]: 167 167
Dec  2 06:12:43 np0005542249 systemd[1]: libpod-de28916ceb9c91a56bcd3931ecbcd3d3dd198380da88ca357554e083d013818c.scope: Deactivated successfully.
Dec  2 06:12:43 np0005542249 podman[258526]: 2025-12-02 11:12:43.415909152 +0000 UTC m=+0.177212551 container died de28916ceb9c91a56bcd3931ecbcd3d3dd198380da88ca357554e083d013818c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:12:43 np0005542249 systemd[1]: var-lib-containers-storage-overlay-76e622b58b200c2336de58a10d46d16c43a41b596bfb6579106b15e3ba3cff95-merged.mount: Deactivated successfully.
Dec  2 06:12:43 np0005542249 podman[258526]: 2025-12-02 11:12:43.465836605 +0000 UTC m=+0.227139984 container remove de28916ceb9c91a56bcd3931ecbcd3d3dd198380da88ca357554e083d013818c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ishizaka, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:12:43 np0005542249 systemd[1]: libpod-conmon-de28916ceb9c91a56bcd3931ecbcd3d3dd198380da88ca357554e083d013818c.scope: Deactivated successfully.
Dec  2 06:12:43 np0005542249 podman[258569]: 2025-12-02 11:12:43.665081281 +0000 UTC m=+0.039544222 container create 2e0b894de4eb266e8270367bdfeb0a912e0c30ee97deb56ed5b43372258630b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_carson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:12:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:12:43 np0005542249 systemd[1]: Started libpod-conmon-2e0b894de4eb266e8270367bdfeb0a912e0c30ee97deb56ed5b43372258630b3.scope.
Dec  2 06:12:43 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:12:43 np0005542249 podman[258569]: 2025-12-02 11:12:43.648862791 +0000 UTC m=+0.023325762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:12:43 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c619f697e06a0becb2d80a71b157a0d6b145621d5c9d799ad36f6365b8bb0e1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:12:43 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c619f697e06a0becb2d80a71b157a0d6b145621d5c9d799ad36f6365b8bb0e1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:12:43 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c619f697e06a0becb2d80a71b157a0d6b145621d5c9d799ad36f6365b8bb0e1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:12:43 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c619f697e06a0becb2d80a71b157a0d6b145621d5c9d799ad36f6365b8bb0e1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:12:43 np0005542249 podman[258569]: 2025-12-02 11:12:43.765215864 +0000 UTC m=+0.139678855 container init 2e0b894de4eb266e8270367bdfeb0a912e0c30ee97deb56ed5b43372258630b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_carson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:12:43 np0005542249 podman[258569]: 2025-12-02 11:12:43.774504275 +0000 UTC m=+0.148967226 container start 2e0b894de4eb266e8270367bdfeb0a912e0c30ee97deb56ed5b43372258630b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  2 06:12:43 np0005542249 podman[258569]: 2025-12-02 11:12:43.778236846 +0000 UTC m=+0.152699817 container attach 2e0b894de4eb266e8270367bdfeb0a912e0c30ee97deb56ed5b43372258630b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_carson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]: {
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:    "0": [
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:        {
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "devices": [
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "/dev/loop3"
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            ],
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "lv_name": "ceph_lv0",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "lv_size": "21470642176",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "name": "ceph_lv0",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "tags": {
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.cluster_name": "ceph",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.crush_device_class": "",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.encrypted": "0",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.osd_id": "0",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.type": "block",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.vdo": "0"
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            },
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "type": "block",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "vg_name": "ceph_vg0"
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:        }
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:    ],
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:    "1": [
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:        {
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "devices": [
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "/dev/loop4"
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            ],
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "lv_name": "ceph_lv1",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "lv_size": "21470642176",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "name": "ceph_lv1",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "tags": {
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.cluster_name": "ceph",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.crush_device_class": "",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.encrypted": "0",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.osd_id": "1",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.type": "block",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.vdo": "0"
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            },
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "type": "block",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "vg_name": "ceph_vg1"
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:        }
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:    ],
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:    "2": [
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:        {
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "devices": [
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "/dev/loop5"
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            ],
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "lv_name": "ceph_lv2",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "lv_size": "21470642176",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "name": "ceph_lv2",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "tags": {
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.cluster_name": "ceph",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.crush_device_class": "",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.encrypted": "0",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.osd_id": "2",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.type": "block",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:                "ceph.vdo": "0"
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            },
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "type": "block",
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:            "vg_name": "ceph_vg2"
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:        }
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]:    ]
Dec  2 06:12:44 np0005542249 wonderful_carson[258586]: }
Dec  2 06:12:44 np0005542249 systemd[1]: libpod-2e0b894de4eb266e8270367bdfeb0a912e0c30ee97deb56ed5b43372258630b3.scope: Deactivated successfully.
Dec  2 06:12:44 np0005542249 podman[258569]: 2025-12-02 11:12:44.579980511 +0000 UTC m=+0.954443472 container died 2e0b894de4eb266e8270367bdfeb0a912e0c30ee97deb56ed5b43372258630b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_carson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:12:44 np0005542249 systemd[1]: var-lib-containers-storage-overlay-c619f697e06a0becb2d80a71b157a0d6b145621d5c9d799ad36f6365b8bb0e1f-merged.mount: Deactivated successfully.
Dec  2 06:12:44 np0005542249 podman[258569]: 2025-12-02 11:12:44.638092716 +0000 UTC m=+1.012555667 container remove 2e0b894de4eb266e8270367bdfeb0a912e0c30ee97deb56ed5b43372258630b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_carson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  2 06:12:44 np0005542249 systemd[1]: libpod-conmon-2e0b894de4eb266e8270367bdfeb0a912e0c30ee97deb56ed5b43372258630b3.scope: Deactivated successfully.
Dec  2 06:12:45 np0005542249 podman[258748]: 2025-12-02 11:12:45.517374411 +0000 UTC m=+0.048500605 container create 19ccf746bf5ad9615c611933054be6400bc185103172464ad8ec17465e3eb40d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatterjee, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:12:45 np0005542249 systemd[1]: Started libpod-conmon-19ccf746bf5ad9615c611933054be6400bc185103172464ad8ec17465e3eb40d.scope.
Dec  2 06:12:45 np0005542249 podman[258748]: 2025-12-02 11:12:45.497674768 +0000 UTC m=+0.028800952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:12:45 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:12:45 np0005542249 podman[258748]: 2025-12-02 11:12:45.612485377 +0000 UTC m=+0.143611581 container init 19ccf746bf5ad9615c611933054be6400bc185103172464ad8ec17465e3eb40d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:12:45 np0005542249 podman[258748]: 2025-12-02 11:12:45.623229059 +0000 UTC m=+0.154355243 container start 19ccf746bf5ad9615c611933054be6400bc185103172464ad8ec17465e3eb40d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  2 06:12:45 np0005542249 podman[258748]: 2025-12-02 11:12:45.627472394 +0000 UTC m=+0.158598608 container attach 19ccf746bf5ad9615c611933054be6400bc185103172464ad8ec17465e3eb40d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  2 06:12:45 np0005542249 vibrant_chatterjee[258764]: 167 167
Dec  2 06:12:45 np0005542249 systemd[1]: libpod-19ccf746bf5ad9615c611933054be6400bc185103172464ad8ec17465e3eb40d.scope: Deactivated successfully.
Dec  2 06:12:45 np0005542249 conmon[258764]: conmon 19ccf746bf5ad9615c61 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-19ccf746bf5ad9615c611933054be6400bc185103172464ad8ec17465e3eb40d.scope/container/memory.events
Dec  2 06:12:45 np0005542249 podman[258748]: 2025-12-02 11:12:45.63364428 +0000 UTC m=+0.164770554 container died 19ccf746bf5ad9615c611933054be6400bc185103172464ad8ec17465e3eb40d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatterjee, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  2 06:12:45 np0005542249 systemd[1]: var-lib-containers-storage-overlay-31ef68e75c3ec795f17bb49223206b7d3330beb328b79de74404c6235d78a10a-merged.mount: Deactivated successfully.
Dec  2 06:12:45 np0005542249 podman[258748]: 2025-12-02 11:12:45.680397086 +0000 UTC m=+0.211523290 container remove 19ccf746bf5ad9615c611933054be6400bc185103172464ad8ec17465e3eb40d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatterjee, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:12:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:12:45 np0005542249 systemd[1]: libpod-conmon-19ccf746bf5ad9615c611933054be6400bc185103172464ad8ec17465e3eb40d.scope: Deactivated successfully.
Dec  2 06:12:45 np0005542249 podman[258788]: 2025-12-02 11:12:45.882063179 +0000 UTC m=+0.066796370 container create 148e5edd3d93bb41fa9b5f4219dac75c1935144ce74efcfe0471c17b8a1a1a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  2 06:12:45 np0005542249 systemd[1]: Started libpod-conmon-148e5edd3d93bb41fa9b5f4219dac75c1935144ce74efcfe0471c17b8a1a1a71.scope.
Dec  2 06:12:45 np0005542249 podman[258788]: 2025-12-02 11:12:45.854134712 +0000 UTC m=+0.038867953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:12:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:12:45 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:12:45 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c954a3c3d1b0cc7b9ad63044729bd6cc9f9748783e78c4f8759d3e31e28ce90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:12:45 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c954a3c3d1b0cc7b9ad63044729bd6cc9f9748783e78c4f8759d3e31e28ce90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:12:45 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c954a3c3d1b0cc7b9ad63044729bd6cc9f9748783e78c4f8759d3e31e28ce90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:12:45 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c954a3c3d1b0cc7b9ad63044729bd6cc9f9748783e78c4f8759d3e31e28ce90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:12:45 np0005542249 podman[258788]: 2025-12-02 11:12:45.989245322 +0000 UTC m=+0.173978513 container init 148e5edd3d93bb41fa9b5f4219dac75c1935144ce74efcfe0471c17b8a1a1a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_goldwasser, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  2 06:12:46 np0005542249 podman[258788]: 2025-12-02 11:12:46.002988365 +0000 UTC m=+0.187721526 container start 148e5edd3d93bb41fa9b5f4219dac75c1935144ce74efcfe0471c17b8a1a1a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:12:46 np0005542249 podman[258788]: 2025-12-02 11:12:46.006469908 +0000 UTC m=+0.191203119 container attach 148e5edd3d93bb41fa9b5f4219dac75c1935144ce74efcfe0471c17b8a1a1a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  2 06:12:47 np0005542249 vigilant_goldwasser[258804]: {
Dec  2 06:12:47 np0005542249 vigilant_goldwasser[258804]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:12:47 np0005542249 vigilant_goldwasser[258804]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:12:47 np0005542249 vigilant_goldwasser[258804]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:12:47 np0005542249 vigilant_goldwasser[258804]:        "osd_id": 0,
Dec  2 06:12:47 np0005542249 vigilant_goldwasser[258804]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:12:47 np0005542249 vigilant_goldwasser[258804]:        "type": "bluestore"
Dec  2 06:12:47 np0005542249 vigilant_goldwasser[258804]:    },
Dec  2 06:12:47 np0005542249 vigilant_goldwasser[258804]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:12:47 np0005542249 vigilant_goldwasser[258804]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:12:47 np0005542249 vigilant_goldwasser[258804]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:12:47 np0005542249 vigilant_goldwasser[258804]:        "osd_id": 2,
Dec  2 06:12:47 np0005542249 vigilant_goldwasser[258804]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:12:47 np0005542249 vigilant_goldwasser[258804]:        "type": "bluestore"
Dec  2 06:12:47 np0005542249 vigilant_goldwasser[258804]:    },
Dec  2 06:12:47 np0005542249 vigilant_goldwasser[258804]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:12:47 np0005542249 vigilant_goldwasser[258804]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:12:47 np0005542249 vigilant_goldwasser[258804]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:12:47 np0005542249 vigilant_goldwasser[258804]:        "osd_id": 1,
Dec  2 06:12:47 np0005542249 vigilant_goldwasser[258804]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:12:47 np0005542249 vigilant_goldwasser[258804]:        "type": "bluestore"
Dec  2 06:12:47 np0005542249 vigilant_goldwasser[258804]:    }
Dec  2 06:12:47 np0005542249 vigilant_goldwasser[258804]: }
Dec  2 06:12:47 np0005542249 systemd[1]: libpod-148e5edd3d93bb41fa9b5f4219dac75c1935144ce74efcfe0471c17b8a1a1a71.scope: Deactivated successfully.
Dec  2 06:12:47 np0005542249 podman[258788]: 2025-12-02 11:12:47.070544819 +0000 UTC m=+1.255278000 container died 148e5edd3d93bb41fa9b5f4219dac75c1935144ce74efcfe0471c17b8a1a1a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_goldwasser, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:12:47 np0005542249 systemd[1]: libpod-148e5edd3d93bb41fa9b5f4219dac75c1935144ce74efcfe0471c17b8a1a1a71.scope: Consumed 1.068s CPU time.
Dec  2 06:12:47 np0005542249 systemd[1]: var-lib-containers-storage-overlay-5c954a3c3d1b0cc7b9ad63044729bd6cc9f9748783e78c4f8759d3e31e28ce90-merged.mount: Deactivated successfully.
Dec  2 06:12:47 np0005542249 podman[258788]: 2025-12-02 11:12:47.151061251 +0000 UTC m=+1.335794422 container remove 148e5edd3d93bb41fa9b5f4219dac75c1935144ce74efcfe0471c17b8a1a1a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_goldwasser, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  2 06:12:47 np0005542249 systemd[1]: libpod-conmon-148e5edd3d93bb41fa9b5f4219dac75c1935144ce74efcfe0471c17b8a1a1a71.scope: Deactivated successfully.
Dec  2 06:12:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:12:47 np0005542249 podman[258838]: 2025-12-02 11:12:47.21123122 +0000 UTC m=+0.100682588 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible)
Dec  2 06:12:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:12:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:12:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:12:47 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 255b7239-831f-4853-a7d5-79fe788ba469 does not exist
Dec  2 06:12:47 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 2b1c8be7-be15-43d9-96ce-3c5c4a32d791 does not exist
Dec  2 06:12:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:12:48 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:12:48 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:12:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:12:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:12:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/283709017' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:12:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:12:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/283709017' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:12:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:12:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:12:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:12:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:12:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:12:56 np0005542249 podman[258920]: 2025-12-02 11:12:56.064435603 +0000 UTC m=+0.129528990 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:12:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:12:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:12:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:12:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:12:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:12:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:12:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:12:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:00 np0005542249 podman[258949]: 2025-12-02 11:13:00.019393015 +0000 UTC m=+0.096082802 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:13:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:13:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:13:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:13:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:13:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v832: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:18 np0005542249 podman[258968]: 2025-12-02 11:13:18.027249617 +0000 UTC m=+0.090154954 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:13:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v833: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:13:19.823 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:13:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:13:19.824 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:13:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:13:19.824 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:13:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:13:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v834: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v835: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v836: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:13:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:13:26
Dec  2 06:13:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:13:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:13:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['.mgr', 'backups', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'volumes']
Dec  2 06:13:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:13:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:13:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:13:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:13:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:13:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:13:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:13:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:13:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:13:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:13:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:13:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:13:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:13:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:13:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:13:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:13:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:13:27 np0005542249 podman[258988]: 2025-12-02 11:13:27.057579086 +0000 UTC m=+0.134179304 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 06:13:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v837: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:13:30 np0005542249 podman[259014]: 2025-12-02 11:13:30.982801073 +0000 UTC m=+0.059421281 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  2 06:13:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v839: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:32 np0005542249 nova_compute[254900]: 2025-12-02 11:13:32.716 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:13:32 np0005542249 nova_compute[254900]: 2025-12-02 11:13:32.716 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:13:32 np0005542249 nova_compute[254900]: 2025-12-02 11:13:32.732 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:13:32 np0005542249 nova_compute[254900]: 2025-12-02 11:13:32.732 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 06:13:32 np0005542249 nova_compute[254900]: 2025-12-02 11:13:32.732 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 06:13:32 np0005542249 nova_compute[254900]: 2025-12-02 11:13:32.742 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 06:13:32 np0005542249 nova_compute[254900]: 2025-12-02 11:13:32.743 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:13:32 np0005542249 nova_compute[254900]: 2025-12-02 11:13:32.743 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:13:32 np0005542249 nova_compute[254900]: 2025-12-02 11:13:32.743 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:13:32 np0005542249 nova_compute[254900]: 2025-12-02 11:13:32.743 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:13:32 np0005542249 nova_compute[254900]: 2025-12-02 11:13:32.743 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:13:32 np0005542249 nova_compute[254900]: 2025-12-02 11:13:32.743 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:13:32 np0005542249 nova_compute[254900]: 2025-12-02 11:13:32.744 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 06:13:32 np0005542249 nova_compute[254900]: 2025-12-02 11:13:32.744 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:13:32 np0005542249 nova_compute[254900]: 2025-12-02 11:13:32.765 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:13:32 np0005542249 nova_compute[254900]: 2025-12-02 11:13:32.765 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:13:32 np0005542249 nova_compute[254900]: 2025-12-02 11:13:32.766 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:13:32 np0005542249 nova_compute[254900]: 2025-12-02 11:13:32.766 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:13:32 np0005542249 nova_compute[254900]: 2025-12-02 11:13:32.766 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:13:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:13:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3730160250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:13:33 np0005542249 nova_compute[254900]: 2025-12-02 11:13:33.234 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:13:33 np0005542249 nova_compute[254900]: 2025-12-02 11:13:33.415 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:13:33 np0005542249 nova_compute[254900]: 2025-12-02 11:13:33.417 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5172MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:13:33 np0005542249 nova_compute[254900]: 2025-12-02 11:13:33.417 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:13:33 np0005542249 nova_compute[254900]: 2025-12-02 11:13:33.418 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:13:33 np0005542249 nova_compute[254900]: 2025-12-02 11:13:33.489 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:13:33 np0005542249 nova_compute[254900]: 2025-12-02 11:13:33.490 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:13:33 np0005542249 nova_compute[254900]: 2025-12-02 11:13:33.506 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:13:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v840: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:13:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1324781754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:13:33 np0005542249 nova_compute[254900]: 2025-12-02 11:13:33.934 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:13:33 np0005542249 nova_compute[254900]: 2025-12-02 11:13:33.940 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:13:33 np0005542249 nova_compute[254900]: 2025-12-02 11:13:33.954 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:13:33 np0005542249 nova_compute[254900]: 2025-12-02 11:13:33.955 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 06:13:33 np0005542249 nova_compute[254900]: 2025-12-02 11:13:33.956 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.538s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:13:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v841: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:13:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:13:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:13:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:13:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:13:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:13:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:13:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:13:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:13:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:13:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:13:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:13:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:13:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:13:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:13:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:13:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:13:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:13:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:13:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:13:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:13:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:13:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:13:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:13:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v842: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v843: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:13:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v844: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v845: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v846: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:13:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v847: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:13:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:13:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:13:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:13:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:13:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:13:48 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev ad16757c-75f3-4a5c-a8aa-165ad77149d2 does not exist
Dec  2 06:13:48 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev de39a727-9628-4a9f-a8ef-aae8fc8c66a0 does not exist
Dec  2 06:13:48 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 3768f5a2-5d5a-41d9-8aa6-b156cc79d373 does not exist
Dec  2 06:13:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:13:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:13:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:13:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:13:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:13:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:13:48 np0005542249 podman[259234]: 2025-12-02 11:13:48.662136547 +0000 UTC m=+0.095355944 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  2 06:13:48 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:13:48 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:13:48 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:13:49 np0005542249 podman[259372]: 2025-12-02 11:13:49.286497768 +0000 UTC m=+0.070962754 container create 53a0585cc5e84e951c2547ada3e47e540d2cbc7805135205594f651fefa5b840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_burnell, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:13:49 np0005542249 systemd[1]: Started libpod-conmon-53a0585cc5e84e951c2547ada3e47e540d2cbc7805135205594f651fefa5b840.scope.
Dec  2 06:13:49 np0005542249 podman[259372]: 2025-12-02 11:13:49.25447743 +0000 UTC m=+0.038942506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:13:49 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:13:49 np0005542249 podman[259372]: 2025-12-02 11:13:49.395126039 +0000 UTC m=+0.179591045 container init 53a0585cc5e84e951c2547ada3e47e540d2cbc7805135205594f651fefa5b840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_burnell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  2 06:13:49 np0005542249 podman[259372]: 2025-12-02 11:13:49.408222555 +0000 UTC m=+0.192687561 container start 53a0585cc5e84e951c2547ada3e47e540d2cbc7805135205594f651fefa5b840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  2 06:13:49 np0005542249 podman[259372]: 2025-12-02 11:13:49.412396738 +0000 UTC m=+0.196861734 container attach 53a0585cc5e84e951c2547ada3e47e540d2cbc7805135205594f651fefa5b840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:13:49 np0005542249 epic_burnell[259388]: 167 167
Dec  2 06:13:49 np0005542249 systemd[1]: libpod-53a0585cc5e84e951c2547ada3e47e540d2cbc7805135205594f651fefa5b840.scope: Deactivated successfully.
Dec  2 06:13:49 np0005542249 conmon[259388]: conmon 53a0585cc5e84e951c25 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-53a0585cc5e84e951c2547ada3e47e540d2cbc7805135205594f651fefa5b840.scope/container/memory.events
Dec  2 06:13:49 np0005542249 podman[259393]: 2025-12-02 11:13:49.46826575 +0000 UTC m=+0.036131789 container died 53a0585cc5e84e951c2547ada3e47e540d2cbc7805135205594f651fefa5b840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:13:49 np0005542249 systemd[1]: var-lib-containers-storage-overlay-8b356cd56ed07c84c539603b256bd3adf96313e49e1a4a0d56e0219a1cb6c35b-merged.mount: Deactivated successfully.
Dec  2 06:13:49 np0005542249 podman[259393]: 2025-12-02 11:13:49.518251294 +0000 UTC m=+0.086117343 container remove 53a0585cc5e84e951c2547ada3e47e540d2cbc7805135205594f651fefa5b840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 06:13:49 np0005542249 systemd[1]: libpod-conmon-53a0585cc5e84e951c2547ada3e47e540d2cbc7805135205594f651fefa5b840.scope: Deactivated successfully.
Dec  2 06:13:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v848: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:49 np0005542249 podman[259416]: 2025-12-02 11:13:49.797521949 +0000 UTC m=+0.072942938 container create dc6b6388844acc98ae06c559d8c0f149d5f2ff0abcf24e745e5782d1bfc3a536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_gates, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:13:49 np0005542249 systemd[1]: Started libpod-conmon-dc6b6388844acc98ae06c559d8c0f149d5f2ff0abcf24e745e5782d1bfc3a536.scope.
Dec  2 06:13:49 np0005542249 podman[259416]: 2025-12-02 11:13:49.770329012 +0000 UTC m=+0.045750051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:13:49 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:13:49 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b867f548f159e3151125497d09f648403af05a91c4e0f723e4300e5c454a96e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:13:49 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b867f548f159e3151125497d09f648403af05a91c4e0f723e4300e5c454a96e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:13:49 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b867f548f159e3151125497d09f648403af05a91c4e0f723e4300e5c454a96e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:13:49 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b867f548f159e3151125497d09f648403af05a91c4e0f723e4300e5c454a96e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:13:49 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b867f548f159e3151125497d09f648403af05a91c4e0f723e4300e5c454a96e2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:13:49 np0005542249 podman[259416]: 2025-12-02 11:13:49.913335375 +0000 UTC m=+0.188756424 container init dc6b6388844acc98ae06c559d8c0f149d5f2ff0abcf24e745e5782d1bfc3a536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_gates, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  2 06:13:49 np0005542249 podman[259416]: 2025-12-02 11:13:49.924825477 +0000 UTC m=+0.200246466 container start dc6b6388844acc98ae06c559d8c0f149d5f2ff0abcf24e745e5782d1bfc3a536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  2 06:13:49 np0005542249 podman[259416]: 2025-12-02 11:13:49.929037721 +0000 UTC m=+0.204458770 container attach dc6b6388844acc98ae06c559d8c0f149d5f2ff0abcf24e745e5782d1bfc3a536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:13:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:13:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1589194200' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:13:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:13:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1589194200' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:13:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:13:51 np0005542249 interesting_gates[259432]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:13:51 np0005542249 interesting_gates[259432]: --> relative data size: 1.0
Dec  2 06:13:51 np0005542249 interesting_gates[259432]: --> All data devices are unavailable
Dec  2 06:13:51 np0005542249 systemd[1]: libpod-dc6b6388844acc98ae06c559d8c0f149d5f2ff0abcf24e745e5782d1bfc3a536.scope: Deactivated successfully.
Dec  2 06:13:51 np0005542249 podman[259416]: 2025-12-02 11:13:51.102799853 +0000 UTC m=+1.378220832 container died dc6b6388844acc98ae06c559d8c0f149d5f2ff0abcf24e745e5782d1bfc3a536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_gates, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:13:51 np0005542249 systemd[1]: libpod-dc6b6388844acc98ae06c559d8c0f149d5f2ff0abcf24e745e5782d1bfc3a536.scope: Consumed 1.135s CPU time.
Dec  2 06:13:51 np0005542249 systemd[1]: var-lib-containers-storage-overlay-b867f548f159e3151125497d09f648403af05a91c4e0f723e4300e5c454a96e2-merged.mount: Deactivated successfully.
Dec  2 06:13:51 np0005542249 podman[259416]: 2025-12-02 11:13:51.174389762 +0000 UTC m=+1.449810731 container remove dc6b6388844acc98ae06c559d8c0f149d5f2ff0abcf24e745e5782d1bfc3a536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  2 06:13:51 np0005542249 systemd[1]: libpod-conmon-dc6b6388844acc98ae06c559d8c0f149d5f2ff0abcf24e745e5782d1bfc3a536.scope: Deactivated successfully.
Dec  2 06:13:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v849: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:52 np0005542249 podman[259612]: 2025-12-02 11:13:52.086651601 +0000 UTC m=+0.058513956 container create 6a11d20f0f60c6e648bfce2c2076e18afa99ba5735ab5d44284189b95174296c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hofstadter, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  2 06:13:52 np0005542249 systemd[1]: Started libpod-conmon-6a11d20f0f60c6e648bfce2c2076e18afa99ba5735ab5d44284189b95174296c.scope.
Dec  2 06:13:52 np0005542249 podman[259612]: 2025-12-02 11:13:52.060989295 +0000 UTC m=+0.032851680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:13:52 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:13:52 np0005542249 podman[259612]: 2025-12-02 11:13:52.203917747 +0000 UTC m=+0.175780172 container init 6a11d20f0f60c6e648bfce2c2076e18afa99ba5735ab5d44284189b95174296c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  2 06:13:52 np0005542249 podman[259612]: 2025-12-02 11:13:52.217150926 +0000 UTC m=+0.189013301 container start 6a11d20f0f60c6e648bfce2c2076e18afa99ba5735ab5d44284189b95174296c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:13:52 np0005542249 podman[259612]: 2025-12-02 11:13:52.221832792 +0000 UTC m=+0.193695277 container attach 6a11d20f0f60c6e648bfce2c2076e18afa99ba5735ab5d44284189b95174296c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  2 06:13:52 np0005542249 heuristic_hofstadter[259629]: 167 167
Dec  2 06:13:52 np0005542249 systemd[1]: libpod-6a11d20f0f60c6e648bfce2c2076e18afa99ba5735ab5d44284189b95174296c.scope: Deactivated successfully.
Dec  2 06:13:52 np0005542249 podman[259612]: 2025-12-02 11:13:52.226237471 +0000 UTC m=+0.198099816 container died 6a11d20f0f60c6e648bfce2c2076e18afa99ba5735ab5d44284189b95174296c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  2 06:13:52 np0005542249 systemd[1]: var-lib-containers-storage-overlay-dad5319be666312fe446b0e7cd460685bfcd21cf5365480ae66a679cd7e02812-merged.mount: Deactivated successfully.
Dec  2 06:13:52 np0005542249 podman[259612]: 2025-12-02 11:13:52.275962089 +0000 UTC m=+0.247824444 container remove 6a11d20f0f60c6e648bfce2c2076e18afa99ba5735ab5d44284189b95174296c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  2 06:13:52 np0005542249 systemd[1]: libpod-conmon-6a11d20f0f60c6e648bfce2c2076e18afa99ba5735ab5d44284189b95174296c.scope: Deactivated successfully.
Dec  2 06:13:52 np0005542249 podman[259654]: 2025-12-02 11:13:52.465578664 +0000 UTC m=+0.052114472 container create 4727c0b149a137b0fc8fb9d91658660238ce980cc6cb9220dd14d04e60f592d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chaum, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:13:52 np0005542249 systemd[1]: Started libpod-conmon-4727c0b149a137b0fc8fb9d91658660238ce980cc6cb9220dd14d04e60f592d8.scope.
Dec  2 06:13:52 np0005542249 podman[259654]: 2025-12-02 11:13:52.444464142 +0000 UTC m=+0.030999970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:13:52 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:13:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4027f2b543404b0b7db7eab839a8de64bbc87fe1b9753a0db38045a5a0eb3ec3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:13:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4027f2b543404b0b7db7eab839a8de64bbc87fe1b9753a0db38045a5a0eb3ec3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:13:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4027f2b543404b0b7db7eab839a8de64bbc87fe1b9753a0db38045a5a0eb3ec3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:13:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4027f2b543404b0b7db7eab839a8de64bbc87fe1b9753a0db38045a5a0eb3ec3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:13:52 np0005542249 podman[259654]: 2025-12-02 11:13:52.565581773 +0000 UTC m=+0.152117611 container init 4727c0b149a137b0fc8fb9d91658660238ce980cc6cb9220dd14d04e60f592d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chaum, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  2 06:13:52 np0005542249 podman[259654]: 2025-12-02 11:13:52.574139355 +0000 UTC m=+0.160675163 container start 4727c0b149a137b0fc8fb9d91658660238ce980cc6cb9220dd14d04e60f592d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chaum, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Dec  2 06:13:52 np0005542249 podman[259654]: 2025-12-02 11:13:52.578153084 +0000 UTC m=+0.164688952 container attach 4727c0b149a137b0fc8fb9d91658660238ce980cc6cb9220dd14d04e60f592d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chaum, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:13:53 np0005542249 confident_chaum[259671]: {
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:    "0": [
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:        {
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "devices": [
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "/dev/loop3"
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            ],
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "lv_name": "ceph_lv0",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "lv_size": "21470642176",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "name": "ceph_lv0",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "tags": {
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.cluster_name": "ceph",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.crush_device_class": "",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.encrypted": "0",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.osd_id": "0",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.type": "block",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.vdo": "0"
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            },
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "type": "block",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "vg_name": "ceph_vg0"
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:        }
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:    ],
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:    "1": [
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:        {
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "devices": [
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "/dev/loop4"
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            ],
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "lv_name": "ceph_lv1",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "lv_size": "21470642176",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "name": "ceph_lv1",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "tags": {
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.cluster_name": "ceph",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.crush_device_class": "",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.encrypted": "0",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.osd_id": "1",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.type": "block",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.vdo": "0"
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            },
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "type": "block",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "vg_name": "ceph_vg1"
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:        }
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:    ],
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:    "2": [
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:        {
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "devices": [
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "/dev/loop5"
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            ],
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "lv_name": "ceph_lv2",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "lv_size": "21470642176",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "name": "ceph_lv2",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "tags": {
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.cluster_name": "ceph",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.crush_device_class": "",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.encrypted": "0",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.osd_id": "2",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.type": "block",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:                "ceph.vdo": "0"
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            },
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "type": "block",
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:            "vg_name": "ceph_vg2"
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:        }
Dec  2 06:13:53 np0005542249 confident_chaum[259671]:    ]
Dec  2 06:13:53 np0005542249 confident_chaum[259671]: }
Dec  2 06:13:53 np0005542249 systemd[1]: libpod-4727c0b149a137b0fc8fb9d91658660238ce980cc6cb9220dd14d04e60f592d8.scope: Deactivated successfully.
Dec  2 06:13:53 np0005542249 podman[259680]: 2025-12-02 11:13:53.40024287 +0000 UTC m=+0.029781708 container died 4727c0b149a137b0fc8fb9d91658660238ce980cc6cb9220dd14d04e60f592d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chaum, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  2 06:13:53 np0005542249 systemd[1]: var-lib-containers-storage-overlay-4027f2b543404b0b7db7eab839a8de64bbc87fe1b9753a0db38045a5a0eb3ec3-merged.mount: Deactivated successfully.
Dec  2 06:13:53 np0005542249 podman[259680]: 2025-12-02 11:13:53.464712696 +0000 UTC m=+0.094251534 container remove 4727c0b149a137b0fc8fb9d91658660238ce980cc6cb9220dd14d04e60f592d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chaum, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:13:53 np0005542249 systemd[1]: libpod-conmon-4727c0b149a137b0fc8fb9d91658660238ce980cc6cb9220dd14d04e60f592d8.scope: Deactivated successfully.
Dec  2 06:13:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v850: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:54 np0005542249 podman[259833]: 2025-12-02 11:13:54.350450116 +0000 UTC m=+0.069208415 container create f4710aa23cb7253597267be25fd631bb7ae94068839e7c3e2a4ade9113afdab0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:13:54 np0005542249 systemd[1]: Started libpod-conmon-f4710aa23cb7253597267be25fd631bb7ae94068839e7c3e2a4ade9113afdab0.scope.
Dec  2 06:13:54 np0005542249 podman[259833]: 2025-12-02 11:13:54.323588189 +0000 UTC m=+0.042346548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:13:54 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:13:54 np0005542249 podman[259833]: 2025-12-02 11:13:54.44655298 +0000 UTC m=+0.165311279 container init f4710aa23cb7253597267be25fd631bb7ae94068839e7c3e2a4ade9113afdab0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lumiere, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  2 06:13:54 np0005542249 podman[259833]: 2025-12-02 11:13:54.454866815 +0000 UTC m=+0.173625094 container start f4710aa23cb7253597267be25fd631bb7ae94068839e7c3e2a4ade9113afdab0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:13:54 np0005542249 podman[259833]: 2025-12-02 11:13:54.458861043 +0000 UTC m=+0.177619352 container attach f4710aa23cb7253597267be25fd631bb7ae94068839e7c3e2a4ade9113afdab0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lumiere, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:13:54 np0005542249 brave_lumiere[259850]: 167 167
Dec  2 06:13:54 np0005542249 systemd[1]: libpod-f4710aa23cb7253597267be25fd631bb7ae94068839e7c3e2a4ade9113afdab0.scope: Deactivated successfully.
Dec  2 06:13:54 np0005542249 conmon[259850]: conmon f4710aa23cb725359726 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f4710aa23cb7253597267be25fd631bb7ae94068839e7c3e2a4ade9113afdab0.scope/container/memory.events
Dec  2 06:13:54 np0005542249 podman[259833]: 2025-12-02 11:13:54.463633732 +0000 UTC m=+0.182392011 container died f4710aa23cb7253597267be25fd631bb7ae94068839e7c3e2a4ade9113afdab0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  2 06:13:54 np0005542249 systemd[1]: var-lib-containers-storage-overlay-a4f73c0bc568ca024b2034014c76641a07c65a03a3c22d1fb416b34a35d90121-merged.mount: Deactivated successfully.
Dec  2 06:13:54 np0005542249 podman[259833]: 2025-12-02 11:13:54.498603769 +0000 UTC m=+0.217362058 container remove f4710aa23cb7253597267be25fd631bb7ae94068839e7c3e2a4ade9113afdab0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lumiere, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  2 06:13:54 np0005542249 systemd[1]: libpod-conmon-f4710aa23cb7253597267be25fd631bb7ae94068839e7c3e2a4ade9113afdab0.scope: Deactivated successfully.
Dec  2 06:13:54 np0005542249 podman[259875]: 2025-12-02 11:13:54.689273303 +0000 UTC m=+0.048093033 container create 7aaa4a63fa3056b08ef70efc639f3bc5749dafad2fbe5bcec2df8abc40824fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ptolemy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:13:54 np0005542249 systemd[1]: Started libpod-conmon-7aaa4a63fa3056b08ef70efc639f3bc5749dafad2fbe5bcec2df8abc40824fdc.scope.
Dec  2 06:13:54 np0005542249 podman[259875]: 2025-12-02 11:13:54.669624381 +0000 UTC m=+0.028444111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:13:54 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:13:54 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ab1cc21ffa4d324b7f2a0b87a106fe6a2ef48db55305fd76c76a5c58010ec4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:13:54 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ab1cc21ffa4d324b7f2a0b87a106fe6a2ef48db55305fd76c76a5c58010ec4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:13:54 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ab1cc21ffa4d324b7f2a0b87a106fe6a2ef48db55305fd76c76a5c58010ec4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:13:54 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ab1cc21ffa4d324b7f2a0b87a106fe6a2ef48db55305fd76c76a5c58010ec4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:13:54 np0005542249 podman[259875]: 2025-12-02 11:13:54.800085075 +0000 UTC m=+0.158904825 container init 7aaa4a63fa3056b08ef70efc639f3bc5749dafad2fbe5bcec2df8abc40824fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ptolemy, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  2 06:13:54 np0005542249 podman[259875]: 2025-12-02 11:13:54.81134267 +0000 UTC m=+0.170162420 container start 7aaa4a63fa3056b08ef70efc639f3bc5749dafad2fbe5bcec2df8abc40824fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:13:54 np0005542249 podman[259875]: 2025-12-02 11:13:54.815151193 +0000 UTC m=+0.173970943 container attach 7aaa4a63fa3056b08ef70efc639f3bc5749dafad2fbe5bcec2df8abc40824fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ptolemy, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:13:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v851: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:55 np0005542249 sleepy_ptolemy[259891]: {
Dec  2 06:13:55 np0005542249 sleepy_ptolemy[259891]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:13:55 np0005542249 sleepy_ptolemy[259891]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:13:55 np0005542249 sleepy_ptolemy[259891]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:13:55 np0005542249 sleepy_ptolemy[259891]:        "osd_id": 0,
Dec  2 06:13:55 np0005542249 sleepy_ptolemy[259891]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:13:55 np0005542249 sleepy_ptolemy[259891]:        "type": "bluestore"
Dec  2 06:13:55 np0005542249 sleepy_ptolemy[259891]:    },
Dec  2 06:13:55 np0005542249 sleepy_ptolemy[259891]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:13:55 np0005542249 sleepy_ptolemy[259891]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:13:55 np0005542249 sleepy_ptolemy[259891]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:13:55 np0005542249 sleepy_ptolemy[259891]:        "osd_id": 2,
Dec  2 06:13:55 np0005542249 sleepy_ptolemy[259891]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:13:55 np0005542249 sleepy_ptolemy[259891]:        "type": "bluestore"
Dec  2 06:13:55 np0005542249 sleepy_ptolemy[259891]:    },
Dec  2 06:13:55 np0005542249 sleepy_ptolemy[259891]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:13:55 np0005542249 sleepy_ptolemy[259891]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:13:55 np0005542249 sleepy_ptolemy[259891]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:13:55 np0005542249 sleepy_ptolemy[259891]:        "osd_id": 1,
Dec  2 06:13:55 np0005542249 sleepy_ptolemy[259891]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:13:55 np0005542249 sleepy_ptolemy[259891]:        "type": "bluestore"
Dec  2 06:13:55 np0005542249 sleepy_ptolemy[259891]:    }
Dec  2 06:13:55 np0005542249 sleepy_ptolemy[259891]: }
Dec  2 06:13:55 np0005542249 podman[259875]: 2025-12-02 11:13:55.883616883 +0000 UTC m=+1.242436633 container died 7aaa4a63fa3056b08ef70efc639f3bc5749dafad2fbe5bcec2df8abc40824fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ptolemy, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  2 06:13:55 np0005542249 systemd[1]: libpod-7aaa4a63fa3056b08ef70efc639f3bc5749dafad2fbe5bcec2df8abc40824fdc.scope: Deactivated successfully.
Dec  2 06:13:55 np0005542249 systemd[1]: libpod-7aaa4a63fa3056b08ef70efc639f3bc5749dafad2fbe5bcec2df8abc40824fdc.scope: Consumed 1.088s CPU time.
Dec  2 06:13:55 np0005542249 systemd[1]: var-lib-containers-storage-overlay-77ab1cc21ffa4d324b7f2a0b87a106fe6a2ef48db55305fd76c76a5c58010ec4-merged.mount: Deactivated successfully.
Dec  2 06:13:55 np0005542249 podman[259875]: 2025-12-02 11:13:55.960210028 +0000 UTC m=+1.319029748 container remove 7aaa4a63fa3056b08ef70efc639f3bc5749dafad2fbe5bcec2df8abc40824fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ptolemy, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:13:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:13:55 np0005542249 systemd[1]: libpod-conmon-7aaa4a63fa3056b08ef70efc639f3bc5749dafad2fbe5bcec2df8abc40824fdc.scope: Deactivated successfully.
Dec  2 06:13:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:13:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:13:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:13:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:13:56 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 39e3c96e-7f8e-4f24-b921-80a9fbe273b1 does not exist
Dec  2 06:13:56 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 9a8ac7bf-90e9-4f9c-81a4-2427c5b9eb9b does not exist
Dec  2 06:13:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:13:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:13:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:13:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:13:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:13:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:13:57 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:13:57 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:13:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v852: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:13:58 np0005542249 podman[259989]: 2025-12-02 11:13:58.09316125 +0000 UTC m=+0.159250134 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 06:13:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v853: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:14:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v854: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:02 np0005542249 podman[260018]: 2025-12-02 11:14:02.010330038 +0000 UTC m=+0.075571447 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:14:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v855: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v856: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:14:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v857: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v858: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:14:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v859: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v860: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v861: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:14:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v862: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:18 np0005542249 podman[260038]: 2025-12-02 11:14:18.997542769 +0000 UTC m=+0.073128982 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:14:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v863: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:14:19.825 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:14:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:14:19.825 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:14:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:14:19.826 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:14:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:14:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v864: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v865: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v866: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:14:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:14:26
Dec  2 06:14:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:14:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:14:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['volumes', '.mgr', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'backups', 'images', 'default.rgw.meta']
Dec  2 06:14:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:14:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:14:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:14:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:14:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:14:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:14:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:14:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:14:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:14:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:14:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:14:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:14:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:14:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:14:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:14:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:14:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:14:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v867: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:28 np0005542249 nova_compute[254900]: 2025-12-02 11:14:28.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:14:28 np0005542249 nova_compute[254900]: 2025-12-02 11:14:28.383 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  2 06:14:28 np0005542249 nova_compute[254900]: 2025-12-02 11:14:28.407 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  2 06:14:28 np0005542249 nova_compute[254900]: 2025-12-02 11:14:28.408 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:14:28 np0005542249 nova_compute[254900]: 2025-12-02 11:14:28.408 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  2 06:14:28 np0005542249 nova_compute[254900]: 2025-12-02 11:14:28.424 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:14:29 np0005542249 podman[260058]: 2025-12-02 11:14:29.07241364 +0000 UTC m=+0.145020258 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:14:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v868: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:30 np0005542249 nova_compute[254900]: 2025-12-02 11:14:30.430 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:14:30 np0005542249 nova_compute[254900]: 2025-12-02 11:14:30.431 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:14:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:14:31 np0005542249 nova_compute[254900]: 2025-12-02 11:14:31.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:14:31 np0005542249 nova_compute[254900]: 2025-12-02 11:14:31.383 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:14:31 np0005542249 nova_compute[254900]: 2025-12-02 11:14:31.383 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:14:31 np0005542249 nova_compute[254900]: 2025-12-02 11:14:31.384 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 06:14:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v869: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:32 np0005542249 nova_compute[254900]: 2025-12-02 11:14:32.384 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:14:32 np0005542249 nova_compute[254900]: 2025-12-02 11:14:32.385 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 06:14:32 np0005542249 nova_compute[254900]: 2025-12-02 11:14:32.385 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 06:14:32 np0005542249 nova_compute[254900]: 2025-12-02 11:14:32.401 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 06:14:33 np0005542249 podman[260085]: 2025-12-02 11:14:33.01727951 +0000 UTC m=+0.085182129 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  2 06:14:33 np0005542249 nova_compute[254900]: 2025-12-02 11:14:33.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:14:33 np0005542249 nova_compute[254900]: 2025-12-02 11:14:33.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:14:33 np0005542249 nova_compute[254900]: 2025-12-02 11:14:33.411 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:14:33 np0005542249 nova_compute[254900]: 2025-12-02 11:14:33.412 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:14:33 np0005542249 nova_compute[254900]: 2025-12-02 11:14:33.413 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:14:33 np0005542249 nova_compute[254900]: 2025-12-02 11:14:33.413 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:14:33 np0005542249 nova_compute[254900]: 2025-12-02 11:14:33.413 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:14:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v870: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:14:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/150371782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:14:33 np0005542249 nova_compute[254900]: 2025-12-02 11:14:33.908 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:14:34 np0005542249 nova_compute[254900]: 2025-12-02 11:14:34.114 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:14:34 np0005542249 nova_compute[254900]: 2025-12-02 11:14:34.117 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5183MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:14:34 np0005542249 nova_compute[254900]: 2025-12-02 11:14:34.117 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:14:34 np0005542249 nova_compute[254900]: 2025-12-02 11:14:34.117 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:14:34 np0005542249 nova_compute[254900]: 2025-12-02 11:14:34.303 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:14:34 np0005542249 nova_compute[254900]: 2025-12-02 11:14:34.304 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:14:34 np0005542249 nova_compute[254900]: 2025-12-02 11:14:34.383 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Refreshing inventories for resource provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  2 06:14:34 np0005542249 nova_compute[254900]: 2025-12-02 11:14:34.463 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Updating ProviderTree inventory for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  2 06:14:34 np0005542249 nova_compute[254900]: 2025-12-02 11:14:34.463 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Updating inventory in ProviderTree for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  2 06:14:34 np0005542249 nova_compute[254900]: 2025-12-02 11:14:34.480 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Refreshing aggregate associations for resource provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  2 06:14:34 np0005542249 nova_compute[254900]: 2025-12-02 11:14:34.503 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Refreshing trait associations for resource provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062, traits: HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SVM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_MMX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,HW_CPU_X86_CLMUL,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE41,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  2 06:14:34 np0005542249 nova_compute[254900]: 2025-12-02 11:14:34.517 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:14:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:14:34 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2906442750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:14:34 np0005542249 nova_compute[254900]: 2025-12-02 11:14:34.954 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:14:34 np0005542249 nova_compute[254900]: 2025-12-02 11:14:34.962 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:14:34 np0005542249 nova_compute[254900]: 2025-12-02 11:14:34.981 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:14:34 np0005542249 nova_compute[254900]: 2025-12-02 11:14:34.983 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 06:14:34 np0005542249 nova_compute[254900]: 2025-12-02 11:14:34.984 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:14:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v871: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:14:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:14:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:14:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:14:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:14:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:14:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:14:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:14:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:14:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:14:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:14:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:14:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:14:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:14:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:14:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:14:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:14:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:14:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:14:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:14:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:14:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:14:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:14:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:14:35 np0005542249 nova_compute[254900]: 2025-12-02 11:14:35.984 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:14:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v872: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v873: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:14:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v874: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v875: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v876: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:14:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v877: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v878: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:50 np0005542249 podman[260150]: 2025-12-02 11:14:50.026126154 +0000 UTC m=+0.094051346 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  2 06:14:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:14:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3141460318' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:14:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:14:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3141460318' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:14:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:14:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v879: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v880: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v881: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:14:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:14:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:14:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:14:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:14:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:14:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:14:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:14:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:14:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:14:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:14:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:14:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:14:57 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 77de0f41-a7cd-44a0-a1a5-a3283f230ccc does not exist
Dec  2 06:14:57 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 57a3056d-ea2b-4c77-88d8-80a9847e8dd8 does not exist
Dec  2 06:14:57 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev f0674b0b-22c7-408e-90ae-3a9f74eb2002 does not exist
Dec  2 06:14:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:14:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:14:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:14:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:14:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:14:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:14:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v882: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:57 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:14:57 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:14:57 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:14:58 np0005542249 podman[260445]: 2025-12-02 11:14:58.076159446 +0000 UTC m=+0.045450646 container create e2024fecaa341fa09e2cf8e7a1789067f345afefa4b971bead967f283ee3b3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kare, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:14:58 np0005542249 systemd[1]: Started libpod-conmon-e2024fecaa341fa09e2cf8e7a1789067f345afefa4b971bead967f283ee3b3a9.scope.
Dec  2 06:14:58 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:14:58 np0005542249 podman[260445]: 2025-12-02 11:14:58.058235927 +0000 UTC m=+0.027527147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:14:58 np0005542249 podman[260445]: 2025-12-02 11:14:58.171649588 +0000 UTC m=+0.140940858 container init e2024fecaa341fa09e2cf8e7a1789067f345afefa4b971bead967f283ee3b3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  2 06:14:58 np0005542249 podman[260445]: 2025-12-02 11:14:58.184916553 +0000 UTC m=+0.154207793 container start e2024fecaa341fa09e2cf8e7a1789067f345afefa4b971bead967f283ee3b3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:14:58 np0005542249 podman[260445]: 2025-12-02 11:14:58.189085874 +0000 UTC m=+0.158377094 container attach e2024fecaa341fa09e2cf8e7a1789067f345afefa4b971bead967f283ee3b3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  2 06:14:58 np0005542249 busy_kare[260462]: 167 167
Dec  2 06:14:58 np0005542249 systemd[1]: libpod-e2024fecaa341fa09e2cf8e7a1789067f345afefa4b971bead967f283ee3b3a9.scope: Deactivated successfully.
Dec  2 06:14:58 np0005542249 conmon[260462]: conmon e2024fecaa341fa09e2c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e2024fecaa341fa09e2cf8e7a1789067f345afefa4b971bead967f283ee3b3a9.scope/container/memory.events
Dec  2 06:14:58 np0005542249 podman[260445]: 2025-12-02 11:14:58.195127216 +0000 UTC m=+0.164418446 container died e2024fecaa341fa09e2cf8e7a1789067f345afefa4b971bead967f283ee3b3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  2 06:14:58 np0005542249 systemd[1]: var-lib-containers-storage-overlay-25bd6c145fcf80f53ea939d7c6c046555a7253b4482b2aadc0096a44a7d0b684-merged.mount: Deactivated successfully.
Dec  2 06:14:58 np0005542249 podman[260445]: 2025-12-02 11:14:58.251236735 +0000 UTC m=+0.220527975 container remove e2024fecaa341fa09e2cf8e7a1789067f345afefa4b971bead967f283ee3b3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:14:58 np0005542249 systemd[1]: libpod-conmon-e2024fecaa341fa09e2cf8e7a1789067f345afefa4b971bead967f283ee3b3a9.scope: Deactivated successfully.
Dec  2 06:14:58 np0005542249 podman[260486]: 2025-12-02 11:14:58.459600135 +0000 UTC m=+0.042217920 container create 5a20f946fec3a4cf86ba141f6fd3006930c86dc936373aa9e7ef019a63ed3495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wescoff, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  2 06:14:58 np0005542249 systemd[1]: Started libpod-conmon-5a20f946fec3a4cf86ba141f6fd3006930c86dc936373aa9e7ef019a63ed3495.scope.
Dec  2 06:14:58 np0005542249 podman[260486]: 2025-12-02 11:14:58.440605066 +0000 UTC m=+0.023222841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:14:58 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:14:58 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dbf6259a7857e465dd3a5ea1d85dc543f6cae90ebd9f320c648a2673a5c672f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:14:58 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dbf6259a7857e465dd3a5ea1d85dc543f6cae90ebd9f320c648a2673a5c672f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:14:58 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dbf6259a7857e465dd3a5ea1d85dc543f6cae90ebd9f320c648a2673a5c672f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:14:58 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dbf6259a7857e465dd3a5ea1d85dc543f6cae90ebd9f320c648a2673a5c672f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:14:58 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dbf6259a7857e465dd3a5ea1d85dc543f6cae90ebd9f320c648a2673a5c672f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:14:58 np0005542249 podman[260486]: 2025-12-02 11:14:58.58250594 +0000 UTC m=+0.165123715 container init 5a20f946fec3a4cf86ba141f6fd3006930c86dc936373aa9e7ef019a63ed3495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wescoff, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:14:58 np0005542249 podman[260486]: 2025-12-02 11:14:58.596620337 +0000 UTC m=+0.179238092 container start 5a20f946fec3a4cf86ba141f6fd3006930c86dc936373aa9e7ef019a63ed3495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  2 06:14:58 np0005542249 podman[260486]: 2025-12-02 11:14:58.600741127 +0000 UTC m=+0.183358922 container attach 5a20f946fec3a4cf86ba141f6fd3006930c86dc936373aa9e7ef019a63ed3495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:14:59 np0005542249 hopeful_wescoff[260502]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:14:59 np0005542249 hopeful_wescoff[260502]: --> relative data size: 1.0
Dec  2 06:14:59 np0005542249 hopeful_wescoff[260502]: --> All data devices are unavailable
Dec  2 06:14:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v883: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:14:59 np0005542249 systemd[1]: libpod-5a20f946fec3a4cf86ba141f6fd3006930c86dc936373aa9e7ef019a63ed3495.scope: Deactivated successfully.
Dec  2 06:14:59 np0005542249 podman[260486]: 2025-12-02 11:14:59.77492168 +0000 UTC m=+1.357539435 container died 5a20f946fec3a4cf86ba141f6fd3006930c86dc936373aa9e7ef019a63ed3495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  2 06:14:59 np0005542249 systemd[1]: libpod-5a20f946fec3a4cf86ba141f6fd3006930c86dc936373aa9e7ef019a63ed3495.scope: Consumed 1.142s CPU time.
Dec  2 06:14:59 np0005542249 systemd[1]: var-lib-containers-storage-overlay-7dbf6259a7857e465dd3a5ea1d85dc543f6cae90ebd9f320c648a2673a5c672f-merged.mount: Deactivated successfully.
Dec  2 06:14:59 np0005542249 podman[260486]: 2025-12-02 11:14:59.84523428 +0000 UTC m=+1.427852045 container remove 5a20f946fec3a4cf86ba141f6fd3006930c86dc936373aa9e7ef019a63ed3495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wescoff, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:14:59 np0005542249 systemd[1]: libpod-conmon-5a20f946fec3a4cf86ba141f6fd3006930c86dc936373aa9e7ef019a63ed3495.scope: Deactivated successfully.
Dec  2 06:15:00 np0005542249 podman[260532]: 2025-12-02 11:15:00.019267872 +0000 UTC m=+0.203356667 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec  2 06:15:00 np0005542249 podman[260712]: 2025-12-02 11:15:00.706277033 +0000 UTC m=+0.069881688 container create 05ad4ea34dce5a1059c8d62ff36be78f19e4f4ff70ba37ba0d503f851c565e9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ganguly, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:15:00 np0005542249 systemd[1]: Started libpod-conmon-05ad4ea34dce5a1059c8d62ff36be78f19e4f4ff70ba37ba0d503f851c565e9c.scope.
Dec  2 06:15:00 np0005542249 podman[260712]: 2025-12-02 11:15:00.677300299 +0000 UTC m=+0.040904994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:15:00 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:15:00 np0005542249 podman[260712]: 2025-12-02 11:15:00.804240052 +0000 UTC m=+0.167844747 container init 05ad4ea34dce5a1059c8d62ff36be78f19e4f4ff70ba37ba0d503f851c565e9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ganguly, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:15:00 np0005542249 podman[260712]: 2025-12-02 11:15:00.815830312 +0000 UTC m=+0.179434957 container start 05ad4ea34dce5a1059c8d62ff36be78f19e4f4ff70ba37ba0d503f851c565e9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Dec  2 06:15:00 np0005542249 podman[260712]: 2025-12-02 11:15:00.820287651 +0000 UTC m=+0.183892296 container attach 05ad4ea34dce5a1059c8d62ff36be78f19e4f4ff70ba37ba0d503f851c565e9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:15:00 np0005542249 inspiring_ganguly[260728]: 167 167
Dec  2 06:15:00 np0005542249 systemd[1]: libpod-05ad4ea34dce5a1059c8d62ff36be78f19e4f4ff70ba37ba0d503f851c565e9c.scope: Deactivated successfully.
Dec  2 06:15:00 np0005542249 podman[260712]: 2025-12-02 11:15:00.823872716 +0000 UTC m=+0.187477351 container died 05ad4ea34dce5a1059c8d62ff36be78f19e4f4ff70ba37ba0d503f851c565e9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ganguly, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:15:00 np0005542249 systemd[1]: var-lib-containers-storage-overlay-d0580d268e4c889b04bef57558f3165b778d02bfbb29fa36ec9ff5906393717d-merged.mount: Deactivated successfully.
Dec  2 06:15:00 np0005542249 podman[260712]: 2025-12-02 11:15:00.873195585 +0000 UTC m=+0.236800210 container remove 05ad4ea34dce5a1059c8d62ff36be78f19e4f4ff70ba37ba0d503f851c565e9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ganguly, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:15:00 np0005542249 systemd[1]: libpod-conmon-05ad4ea34dce5a1059c8d62ff36be78f19e4f4ff70ba37ba0d503f851c565e9c.scope: Deactivated successfully.
Dec  2 06:15:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:15:00 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Dec  2 06:15:00 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:00.982067) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  2 06:15:00 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Dec  2 06:15:00 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674100982155, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2005, "num_deletes": 250, "total_data_size": 3381360, "memory_usage": 3442112, "flush_reason": "Manual Compaction"}
Dec  2 06:15:00 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674101000498, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1912417, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16398, "largest_seqno": 18402, "table_properties": {"data_size": 1905970, "index_size": 3330, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16376, "raw_average_key_size": 20, "raw_value_size": 1891611, "raw_average_value_size": 2344, "num_data_blocks": 154, "num_entries": 807, "num_filter_entries": 807, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764673874, "oldest_key_time": 1764673874, "file_creation_time": 1764674100, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 18668 microseconds, and 10232 cpu microseconds.
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:01.000734) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1912417 bytes OK
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:01.000847) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:01.003223) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:01.003250) EVENT_LOG_v1 {"time_micros": 1764674101003240, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:01.003275) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3372931, prev total WAL file size 3372931, number of live WAL files 2.
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:01.005439) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353031' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1867KB)], [38(7651KB)]
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674101005515, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 9747497, "oldest_snapshot_seqno": -1}
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4436 keys, 7853630 bytes, temperature: kUnknown
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674101049110, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 7853630, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7823094, "index_size": 18330, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 107165, "raw_average_key_size": 24, "raw_value_size": 7742064, "raw_average_value_size": 1745, "num_data_blocks": 781, "num_entries": 4436, "num_filter_entries": 4436, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672515, "oldest_key_time": 0, "file_creation_time": 1764674101, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:01.049422) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 7853630 bytes
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:01.050942) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 223.2 rd, 179.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 7.5 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(9.2) write-amplify(4.1) OK, records in: 4841, records dropped: 405 output_compression: NoCompression
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:01.050971) EVENT_LOG_v1 {"time_micros": 1764674101050957, "job": 18, "event": "compaction_finished", "compaction_time_micros": 43681, "compaction_time_cpu_micros": 20706, "output_level": 6, "num_output_files": 1, "total_output_size": 7853630, "num_input_records": 4841, "num_output_records": 4436, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674101051698, "job": 18, "event": "table_file_deletion", "file_number": 40}
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674101054331, "job": 18, "event": "table_file_deletion", "file_number": 38}
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:01.005294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:01.054388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:01.054394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:01.054397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:01.054400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:15:01 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:01.054403) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:15:01 np0005542249 podman[260753]: 2025-12-02 11:15:01.087704029 +0000 UTC m=+0.056290116 container create bec35ace200b32827c230882d618d0c4854789b46be974e4d9576bd1b2b12f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bassi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:15:01 np0005542249 systemd[1]: Started libpod-conmon-bec35ace200b32827c230882d618d0c4854789b46be974e4d9576bd1b2b12f98.scope.
Dec  2 06:15:01 np0005542249 podman[260753]: 2025-12-02 11:15:01.062941206 +0000 UTC m=+0.031527313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:15:01 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:15:01 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/486b7c1657e37e84b7014d7c80498cf113799cb2a5ed5e9f7718860fac4a80d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:15:01 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/486b7c1657e37e84b7014d7c80498cf113799cb2a5ed5e9f7718860fac4a80d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:15:01 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/486b7c1657e37e84b7014d7c80498cf113799cb2a5ed5e9f7718860fac4a80d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:15:01 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/486b7c1657e37e84b7014d7c80498cf113799cb2a5ed5e9f7718860fac4a80d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:15:01 np0005542249 podman[260753]: 2025-12-02 11:15:01.190180777 +0000 UTC m=+0.158766874 container init bec35ace200b32827c230882d618d0c4854789b46be974e4d9576bd1b2b12f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bassi, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:15:01 np0005542249 podman[260753]: 2025-12-02 11:15:01.198537481 +0000 UTC m=+0.167123548 container start bec35ace200b32827c230882d618d0c4854789b46be974e4d9576bd1b2b12f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  2 06:15:01 np0005542249 podman[260753]: 2025-12-02 11:15:01.202727083 +0000 UTC m=+0.171313150 container attach bec35ace200b32827c230882d618d0c4854789b46be974e4d9576bd1b2b12f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:15:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v884: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:15:02 np0005542249 magical_bassi[260769]: {
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:    "0": [
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:        {
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "devices": [
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "/dev/loop3"
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            ],
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "lv_name": "ceph_lv0",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "lv_size": "21470642176",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "name": "ceph_lv0",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "tags": {
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.cluster_name": "ceph",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.crush_device_class": "",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.encrypted": "0",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.osd_id": "0",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.type": "block",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.vdo": "0"
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            },
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "type": "block",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "vg_name": "ceph_vg0"
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:        }
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:    ],
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:    "1": [
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:        {
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "devices": [
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "/dev/loop4"
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            ],
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "lv_name": "ceph_lv1",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "lv_size": "21470642176",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "name": "ceph_lv1",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "tags": {
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.cluster_name": "ceph",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.crush_device_class": "",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.encrypted": "0",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.osd_id": "1",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.type": "block",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.vdo": "0"
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            },
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "type": "block",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "vg_name": "ceph_vg1"
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:        }
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:    ],
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:    "2": [
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:        {
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "devices": [
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "/dev/loop5"
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            ],
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "lv_name": "ceph_lv2",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "lv_size": "21470642176",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "name": "ceph_lv2",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "tags": {
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.cluster_name": "ceph",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.crush_device_class": "",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.encrypted": "0",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.osd_id": "2",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.type": "block",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:                "ceph.vdo": "0"
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            },
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "type": "block",
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:            "vg_name": "ceph_vg2"
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:        }
Dec  2 06:15:02 np0005542249 magical_bassi[260769]:    ]
Dec  2 06:15:02 np0005542249 magical_bassi[260769]: }
Dec  2 06:15:02 np0005542249 systemd[1]: libpod-bec35ace200b32827c230882d618d0c4854789b46be974e4d9576bd1b2b12f98.scope: Deactivated successfully.
Dec  2 06:15:02 np0005542249 podman[260779]: 2025-12-02 11:15:02.101537516 +0000 UTC m=+0.031242685 container died bec35ace200b32827c230882d618d0c4854789b46be974e4d9576bd1b2b12f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bassi, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:15:02 np0005542249 systemd[1]: var-lib-containers-storage-overlay-486b7c1657e37e84b7014d7c80498cf113799cb2a5ed5e9f7718860fac4a80d8-merged.mount: Deactivated successfully.
Dec  2 06:15:02 np0005542249 podman[260779]: 2025-12-02 11:15:02.188394388 +0000 UTC m=+0.118099497 container remove bec35ace200b32827c230882d618d0c4854789b46be974e4d9576bd1b2b12f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bassi, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  2 06:15:02 np0005542249 systemd[1]: libpod-conmon-bec35ace200b32827c230882d618d0c4854789b46be974e4d9576bd1b2b12f98.scope: Deactivated successfully.
Dec  2 06:15:03 np0005542249 podman[260938]: 2025-12-02 11:15:03.046113333 +0000 UTC m=+0.054646971 container create 21578378d4e5e5c5171bbf00ef2d1e3aaea78d1028d50de45ce99bec54b60de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_noether, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  2 06:15:03 np0005542249 systemd[1]: Started libpod-conmon-21578378d4e5e5c5171bbf00ef2d1e3aaea78d1028d50de45ce99bec54b60de2.scope.
Dec  2 06:15:03 np0005542249 podman[260938]: 2025-12-02 11:15:03.017220171 +0000 UTC m=+0.025753859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:15:03 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:15:03 np0005542249 podman[260938]: 2025-12-02 11:15:03.143450934 +0000 UTC m=+0.151984662 container init 21578378d4e5e5c5171bbf00ef2d1e3aaea78d1028d50de45ce99bec54b60de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_noether, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  2 06:15:03 np0005542249 podman[260938]: 2025-12-02 11:15:03.1571242 +0000 UTC m=+0.165657868 container start 21578378d4e5e5c5171bbf00ef2d1e3aaea78d1028d50de45ce99bec54b60de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_noether, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:15:03 np0005542249 podman[260938]: 2025-12-02 11:15:03.16199996 +0000 UTC m=+0.170533608 container attach 21578378d4e5e5c5171bbf00ef2d1e3aaea78d1028d50de45ce99bec54b60de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:15:03 np0005542249 unruffled_noether[260956]: 167 167
Dec  2 06:15:03 np0005542249 systemd[1]: libpod-21578378d4e5e5c5171bbf00ef2d1e3aaea78d1028d50de45ce99bec54b60de2.scope: Deactivated successfully.
Dec  2 06:15:03 np0005542249 podman[260938]: 2025-12-02 11:15:03.165485444 +0000 UTC m=+0.174019082 container died 21578378d4e5e5c5171bbf00ef2d1e3aaea78d1028d50de45ce99bec54b60de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:15:03 np0005542249 systemd[1]: var-lib-containers-storage-overlay-36e51f4af1ff30363356ef2c800a3e46aaddb5108179e19a306dc46071982f01-merged.mount: Deactivated successfully.
Dec  2 06:15:03 np0005542249 podman[260952]: 2025-12-02 11:15:03.202638766 +0000 UTC m=+0.103975430 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec  2 06:15:03 np0005542249 podman[260938]: 2025-12-02 11:15:03.215325125 +0000 UTC m=+0.223858773 container remove 21578378d4e5e5c5171bbf00ef2d1e3aaea78d1028d50de45ce99bec54b60de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  2 06:15:03 np0005542249 systemd[1]: libpod-conmon-21578378d4e5e5c5171bbf00ef2d1e3aaea78d1028d50de45ce99bec54b60de2.scope: Deactivated successfully.
Dec  2 06:15:03 np0005542249 podman[260997]: 2025-12-02 11:15:03.427518297 +0000 UTC m=+0.054898588 container create fe31b5505c90a00cc992c87712f8a1327896d270ccd80c346dd4cdf3257c0e16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:15:03 np0005542249 systemd[1]: Started libpod-conmon-fe31b5505c90a00cc992c87712f8a1327896d270ccd80c346dd4cdf3257c0e16.scope.
Dec  2 06:15:03 np0005542249 podman[260997]: 2025-12-02 11:15:03.401679147 +0000 UTC m=+0.029059468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:15:03 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:15:03 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e9c9e7d417bfc404547f2eae6afb89db1cc061f2ea81a606363d1c0dcb85658/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:15:03 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e9c9e7d417bfc404547f2eae6afb89db1cc061f2ea81a606363d1c0dcb85658/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:15:03 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e9c9e7d417bfc404547f2eae6afb89db1cc061f2ea81a606363d1c0dcb85658/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:15:03 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e9c9e7d417bfc404547f2eae6afb89db1cc061f2ea81a606363d1c0dcb85658/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:15:03 np0005542249 podman[260997]: 2025-12-02 11:15:03.537629831 +0000 UTC m=+0.165010142 container init fe31b5505c90a00cc992c87712f8a1327896d270ccd80c346dd4cdf3257c0e16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mclaren, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  2 06:15:03 np0005542249 podman[260997]: 2025-12-02 11:15:03.551176322 +0000 UTC m=+0.178556653 container start fe31b5505c90a00cc992c87712f8a1327896d270ccd80c346dd4cdf3257c0e16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:15:03 np0005542249 podman[260997]: 2025-12-02 11:15:03.55559963 +0000 UTC m=+0.182979941 container attach fe31b5505c90a00cc992c87712f8a1327896d270ccd80c346dd4cdf3257c0e16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mclaren, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  2 06:15:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v885: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:15:04 np0005542249 silly_mclaren[261013]: {
Dec  2 06:15:04 np0005542249 silly_mclaren[261013]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:15:04 np0005542249 silly_mclaren[261013]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:15:04 np0005542249 silly_mclaren[261013]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:15:04 np0005542249 silly_mclaren[261013]:        "osd_id": 0,
Dec  2 06:15:04 np0005542249 silly_mclaren[261013]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:15:04 np0005542249 silly_mclaren[261013]:        "type": "bluestore"
Dec  2 06:15:04 np0005542249 silly_mclaren[261013]:    },
Dec  2 06:15:04 np0005542249 silly_mclaren[261013]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:15:04 np0005542249 silly_mclaren[261013]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:15:04 np0005542249 silly_mclaren[261013]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:15:04 np0005542249 silly_mclaren[261013]:        "osd_id": 2,
Dec  2 06:15:04 np0005542249 silly_mclaren[261013]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:15:04 np0005542249 silly_mclaren[261013]:        "type": "bluestore"
Dec  2 06:15:04 np0005542249 silly_mclaren[261013]:    },
Dec  2 06:15:04 np0005542249 silly_mclaren[261013]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:15:04 np0005542249 silly_mclaren[261013]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:15:04 np0005542249 silly_mclaren[261013]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:15:04 np0005542249 silly_mclaren[261013]:        "osd_id": 1,
Dec  2 06:15:04 np0005542249 silly_mclaren[261013]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:15:04 np0005542249 silly_mclaren[261013]:        "type": "bluestore"
Dec  2 06:15:04 np0005542249 silly_mclaren[261013]:    }
Dec  2 06:15:04 np0005542249 silly_mclaren[261013]: }
Dec  2 06:15:04 np0005542249 systemd[1]: libpod-fe31b5505c90a00cc992c87712f8a1327896d270ccd80c346dd4cdf3257c0e16.scope: Deactivated successfully.
Dec  2 06:15:04 np0005542249 podman[260997]: 2025-12-02 11:15:04.705288 +0000 UTC m=+1.332668311 container died fe31b5505c90a00cc992c87712f8a1327896d270ccd80c346dd4cdf3257c0e16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mclaren, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:15:04 np0005542249 systemd[1]: libpod-fe31b5505c90a00cc992c87712f8a1327896d270ccd80c346dd4cdf3257c0e16.scope: Consumed 1.162s CPU time.
Dec  2 06:15:04 np0005542249 systemd[1]: var-lib-containers-storage-overlay-2e9c9e7d417bfc404547f2eae6afb89db1cc061f2ea81a606363d1c0dcb85658-merged.mount: Deactivated successfully.
Dec  2 06:15:04 np0005542249 podman[260997]: 2025-12-02 11:15:04.792359797 +0000 UTC m=+1.419740098 container remove fe31b5505c90a00cc992c87712f8a1327896d270ccd80c346dd4cdf3257c0e16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mclaren, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 06:15:04 np0005542249 systemd[1]: libpod-conmon-fe31b5505c90a00cc992c87712f8a1327896d270ccd80c346dd4cdf3257c0e16.scope: Deactivated successfully.
Dec  2 06:15:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:15:04 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:15:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:15:04 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:15:04 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 3d85da81-6f37-466c-b6d3-80c9b00e2795 does not exist
Dec  2 06:15:04 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 4a0c5994-fa9d-43a1-8ee9-5304b2cd20ea does not exist
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:05.011392) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674105011422, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 300, "num_deletes": 251, "total_data_size": 113779, "memory_usage": 120856, "flush_reason": "Manual Compaction"}
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674105014110, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 113284, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18403, "largest_seqno": 18702, "table_properties": {"data_size": 111268, "index_size": 244, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5170, "raw_average_key_size": 18, "raw_value_size": 107257, "raw_average_value_size": 385, "num_data_blocks": 10, "num_entries": 278, "num_filter_entries": 278, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764674101, "oldest_key_time": 1764674101, "file_creation_time": 1764674105, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 2749 microseconds, and 806 cpu microseconds.
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:05.014143) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 113284 bytes OK
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:05.014156) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:05.015568) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:05.015580) EVENT_LOG_v1 {"time_micros": 1764674105015575, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:05.015593) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 111596, prev total WAL file size 111596, number of live WAL files 2.
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:05.015950) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(110KB)], [41(7669KB)]
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674105016147, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 7966914, "oldest_snapshot_seqno": -1}
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4202 keys, 6204718 bytes, temperature: kUnknown
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674105057168, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6204718, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6177390, "index_size": 15699, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10565, "raw_key_size": 102983, "raw_average_key_size": 24, "raw_value_size": 6102020, "raw_average_value_size": 1452, "num_data_blocks": 661, "num_entries": 4202, "num_filter_entries": 4202, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672515, "oldest_key_time": 0, "file_creation_time": 1764674105, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:05.057730) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6204718 bytes
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:05.059415) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.9 rd, 150.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 7.5 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(125.1) write-amplify(54.8) OK, records in: 4714, records dropped: 512 output_compression: NoCompression
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:05.059457) EVENT_LOG_v1 {"time_micros": 1764674105059438, "job": 20, "event": "compaction_finished", "compaction_time_micros": 41291, "compaction_time_cpu_micros": 20415, "output_level": 6, "num_output_files": 1, "total_output_size": 6204718, "num_input_records": 4714, "num_output_records": 4202, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674105059708, "job": 20, "event": "table_file_deletion", "file_number": 43}
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674105062807, "job": 20, "event": "table_file_deletion", "file_number": 41}
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:05.015897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:05.062921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:05.062932) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:05.062936) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:05.062940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:15:05.062945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:15:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v886: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:15:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:15:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v887: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:15:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v888: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:15:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:15:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Dec  2 06:15:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Dec  2 06:15:11 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Dec  2 06:15:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v890: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:15:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Dec  2 06:15:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Dec  2 06:15:12 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Dec  2 06:15:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Dec  2 06:15:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Dec  2 06:15:13 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Dec  2 06:15:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v893: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec  2 06:15:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Dec  2 06:15:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Dec  2 06:15:14 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Dec  2 06:15:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v895: 321 pgs: 321 active+clean; 21 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 4.3 MiB/s wr, 43 op/s
Dec  2 06:15:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:15:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v896: 321 pgs: 321 active+clean; 41 MiB data, 185 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 6.8 MiB/s wr, 62 op/s
Dec  2 06:15:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v897: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.3 MiB/s wr, 49 op/s
Dec  2 06:15:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:15:19.827 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:15:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:15:19.828 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:15:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:15:19.828 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:15:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:15:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Dec  2 06:15:21 np0005542249 podman[261109]: 2025-12-02 11:15:21.042749747 +0000 UTC m=+0.102592903 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  2 06:15:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Dec  2 06:15:21 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Dec  2 06:15:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v899: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 5.1 MiB/s wr, 43 op/s
Dec  2 06:15:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v900: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.2 MiB/s wr, 36 op/s
Dec  2 06:15:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v901: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 2.0 MiB/s wr, 17 op/s
Dec  2 06:15:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:15:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:15:26
Dec  2 06:15:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:15:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:15:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'vms', 'backups', 'default.rgw.meta', 'images', '.rgw.root', 'default.rgw.control']
Dec  2 06:15:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:15:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:15:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:15:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:15:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:15:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:15:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:15:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:15:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:15:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:15:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:15:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:15:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:15:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:15:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:15:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:15:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:15:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v902: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 102 B/s wr, 0 op/s
Dec  2 06:15:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v903: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:15:29 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:15:29.810 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:23:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:25:d0:a1:75:ed'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:15:29 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:15:29.816 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 06:15:31 np0005542249 podman[261129]: 2025-12-02 11:15:31.062486856 +0000 UTC m=+0.131293450 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec  2 06:15:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:15:31 np0005542249 nova_compute[254900]: 2025-12-02 11:15:31.378 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:15:31 np0005542249 nova_compute[254900]: 2025-12-02 11:15:31.397 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:15:31 np0005542249 nova_compute[254900]: 2025-12-02 11:15:31.398 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:15:31 np0005542249 nova_compute[254900]: 2025-12-02 11:15:31.398 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 06:15:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v904: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:15:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Dec  2 06:15:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Dec  2 06:15:32 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Dec  2 06:15:32 np0005542249 nova_compute[254900]: 2025-12-02 11:15:32.383 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:15:32 np0005542249 nova_compute[254900]: 2025-12-02 11:15:32.383 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:15:32 np0005542249 nova_compute[254900]: 2025-12-02 11:15:32.383 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 06:15:32 np0005542249 nova_compute[254900]: 2025-12-02 11:15:32.384 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 06:15:32 np0005542249 nova_compute[254900]: 2025-12-02 11:15:32.402 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 06:15:32 np0005542249 nova_compute[254900]: 2025-12-02 11:15:32.402 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:15:32 np0005542249 nova_compute[254900]: 2025-12-02 11:15:32.403 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:15:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Dec  2 06:15:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Dec  2 06:15:33 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Dec  2 06:15:33 np0005542249 nova_compute[254900]: 2025-12-02 11:15:33.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:15:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v907: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Dec  2 06:15:33 np0005542249 podman[261156]: 2025-12-02 11:15:33.986139469 +0000 UTC m=+0.067243628 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  2 06:15:34 np0005542249 nova_compute[254900]: 2025-12-02 11:15:34.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:15:34 np0005542249 nova_compute[254900]: 2025-12-02 11:15:34.414 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:15:34 np0005542249 nova_compute[254900]: 2025-12-02 11:15:34.415 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:15:34 np0005542249 nova_compute[254900]: 2025-12-02 11:15:34.415 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:15:34 np0005542249 nova_compute[254900]: 2025-12-02 11:15:34.416 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:15:34 np0005542249 nova_compute[254900]: 2025-12-02 11:15:34.416 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:15:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:15:34 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3284285678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:15:34 np0005542249 nova_compute[254900]: 2025-12-02 11:15:34.913 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:15:35 np0005542249 nova_compute[254900]: 2025-12-02 11:15:35.127 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:15:35 np0005542249 nova_compute[254900]: 2025-12-02 11:15:35.128 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5185MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:15:35 np0005542249 nova_compute[254900]: 2025-12-02 11:15:35.129 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:15:35 np0005542249 nova_compute[254900]: 2025-12-02 11:15:35.129 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:15:35 np0005542249 nova_compute[254900]: 2025-12-02 11:15:35.211 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:15:35 np0005542249 nova_compute[254900]: 2025-12-02 11:15:35.211 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:15:35 np0005542249 nova_compute[254900]: 2025-12-02 11:15:35.226 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:15:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Dec  2 06:15:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Dec  2 06:15:35 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Dec  2 06:15:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:15:35 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2689234904' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:15:35 np0005542249 nova_compute[254900]: 2025-12-02 11:15:35.750 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:15:35 np0005542249 nova_compute[254900]: 2025-12-02 11:15:35.758 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:15:35 np0005542249 nova_compute[254900]: 2025-12-02 11:15:35.775 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:15:35 np0005542249 nova_compute[254900]: 2025-12-02 11:15:35.777 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 06:15:35 np0005542249 nova_compute[254900]: 2025-12-02 11:15:35.778 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:15:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v909: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 511 B/s wr, 1 op/s
Dec  2 06:15:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:15:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:15:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:15:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:15:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:15:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:15:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Dec  2 06:15:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:15:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:15:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:15:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  2 06:15:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:15:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:15:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:15:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:15:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:15:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:15:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:15:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:15:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:15:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:15:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:15:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:15:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:15:36 np0005542249 nova_compute[254900]: 2025-12-02 11:15:36.778 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:15:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:15:36.819 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:15:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v910: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.7 KiB/s wr, 35 op/s
Dec  2 06:15:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:15:37 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3437437885' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:15:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:15:37 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3437437885' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:15:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:15:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2094274864' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:15:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:15:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2094274864' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:15:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v911: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 3.6 KiB/s wr, 66 op/s
Dec  2 06:15:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Dec  2 06:15:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Dec  2 06:15:40 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Dec  2 06:15:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:15:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Dec  2 06:15:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Dec  2 06:15:41 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Dec  2 06:15:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v914: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 3.8 KiB/s wr, 77 op/s
Dec  2 06:15:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:15:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2141880806' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:15:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:15:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2141880806' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:15:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v915: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 4.6 KiB/s wr, 89 op/s
Dec  2 06:15:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Dec  2 06:15:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Dec  2 06:15:45 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Dec  2 06:15:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v917: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.5 KiB/s wr, 42 op/s
Dec  2 06:15:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:15:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:15:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2611627378' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:15:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:15:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2611627378' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:15:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v918: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 4.9 KiB/s wr, 110 op/s
Dec  2 06:15:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v919: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 4.0 KiB/s wr, 89 op/s
Dec  2 06:15:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:15:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3630203054' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:15:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:15:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3630203054' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:15:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  2 06:15:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Dec  2 06:15:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Dec  2 06:15:51 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Dec  2 06:15:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v921: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 2.6 KiB/s wr, 67 op/s
Dec  2 06:15:52 np0005542249 podman[261220]: 2025-12-02 11:15:52.032705189 +0000 UTC m=+0.104927056 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:15:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Dec  2 06:15:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Dec  2 06:15:53 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Dec  2 06:15:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v923: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 3.2 KiB/s wr, 66 op/s
Dec  2 06:15:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Dec  2 06:15:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Dec  2 06:15:54 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Dec  2 06:15:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v925: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 44 KiB/s wr, 4 op/s
Dec  2 06:15:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:15:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Dec  2 06:15:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:15:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:15:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:15:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Dec  2 06:15:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:15:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:15:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:15:56 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Dec  2 06:15:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Dec  2 06:15:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Dec  2 06:15:57 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Dec  2 06:15:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v928: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 63 KiB/s wr, 93 op/s
Dec  2 06:15:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:15:59 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3897111352' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:15:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:15:59 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3897111352' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:15:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v929: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 47 KiB/s wr, 71 op/s
Dec  2 06:16:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:16:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Dec  2 06:16:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Dec  2 06:16:01 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Dec  2 06:16:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v931: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 4.2 KiB/s wr, 70 op/s
Dec  2 06:16:01 np0005542249 nova_compute[254900]: 2025-12-02 11:16:01.882 254904 DEBUG oslo_concurrency.lockutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Acquiring lock "65125ac7-42c5-4e84-8f5f-4ffef2e430dd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:16:01 np0005542249 nova_compute[254900]: 2025-12-02 11:16:01.882 254904 DEBUG oslo_concurrency.lockutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Lock "65125ac7-42c5-4e84-8f5f-4ffef2e430dd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:16:01 np0005542249 nova_compute[254900]: 2025-12-02 11:16:01.909 254904 DEBUG nova.compute.manager [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] [instance: 65125ac7-42c5-4e84-8f5f-4ffef2e430dd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 06:16:02 np0005542249 nova_compute[254900]: 2025-12-02 11:16:02.082 254904 DEBUG oslo_concurrency.lockutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:16:02 np0005542249 nova_compute[254900]: 2025-12-02 11:16:02.083 254904 DEBUG oslo_concurrency.lockutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:16:02 np0005542249 nova_compute[254900]: 2025-12-02 11:16:02.092 254904 DEBUG nova.virt.hardware [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 06:16:02 np0005542249 nova_compute[254900]: 2025-12-02 11:16:02.092 254904 INFO nova.compute.claims [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] [instance: 65125ac7-42c5-4e84-8f5f-4ffef2e430dd] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 06:16:02 np0005542249 podman[261240]: 2025-12-02 11:16:02.105639458 +0000 UTC m=+0.171589858 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:16:02 np0005542249 nova_compute[254900]: 2025-12-02 11:16:02.229 254904 DEBUG oslo_concurrency.processutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:16:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:16:02 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3179684104' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:16:02 np0005542249 nova_compute[254900]: 2025-12-02 11:16:02.708 254904 DEBUG oslo_concurrency.processutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:16:02 np0005542249 nova_compute[254900]: 2025-12-02 11:16:02.719 254904 DEBUG nova.compute.provider_tree [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:16:02 np0005542249 nova_compute[254900]: 2025-12-02 11:16:02.737 254904 DEBUG nova.scheduler.client.report [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:16:02 np0005542249 nova_compute[254900]: 2025-12-02 11:16:02.764 254904 DEBUG oslo_concurrency.lockutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:16:02 np0005542249 nova_compute[254900]: 2025-12-02 11:16:02.765 254904 DEBUG nova.compute.manager [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] [instance: 65125ac7-42c5-4e84-8f5f-4ffef2e430dd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 06:16:02 np0005542249 nova_compute[254900]: 2025-12-02 11:16:02.814 254904 DEBUG nova.compute.manager [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] [instance: 65125ac7-42c5-4e84-8f5f-4ffef2e430dd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 06:16:02 np0005542249 nova_compute[254900]: 2025-12-02 11:16:02.815 254904 DEBUG nova.network.neutron [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] [instance: 65125ac7-42c5-4e84-8f5f-4ffef2e430dd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 06:16:02 np0005542249 nova_compute[254900]: 2025-12-02 11:16:02.847 254904 INFO nova.virt.libvirt.driver [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] [instance: 65125ac7-42c5-4e84-8f5f-4ffef2e430dd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 06:16:02 np0005542249 nova_compute[254900]: 2025-12-02 11:16:02.878 254904 DEBUG nova.compute.manager [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] [instance: 65125ac7-42c5-4e84-8f5f-4ffef2e430dd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 06:16:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:16:02 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1742048765' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:16:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:16:02 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1742048765' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:16:03 np0005542249 nova_compute[254900]: 2025-12-02 11:16:03.001 254904 DEBUG nova.compute.manager [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] [instance: 65125ac7-42c5-4e84-8f5f-4ffef2e430dd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 06:16:03 np0005542249 nova_compute[254900]: 2025-12-02 11:16:03.003 254904 DEBUG nova.virt.libvirt.driver [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] [instance: 65125ac7-42c5-4e84-8f5f-4ffef2e430dd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 06:16:03 np0005542249 nova_compute[254900]: 2025-12-02 11:16:03.004 254904 INFO nova.virt.libvirt.driver [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] [instance: 65125ac7-42c5-4e84-8f5f-4ffef2e430dd] Creating image(s)#033[00m
Dec  2 06:16:03 np0005542249 nova_compute[254900]: 2025-12-02 11:16:03.040 254904 DEBUG nova.storage.rbd_utils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] rbd image 65125ac7-42c5-4e84-8f5f-4ffef2e430dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:16:03 np0005542249 nova_compute[254900]: 2025-12-02 11:16:03.075 254904 DEBUG nova.storage.rbd_utils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] rbd image 65125ac7-42c5-4e84-8f5f-4ffef2e430dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:16:03 np0005542249 nova_compute[254900]: 2025-12-02 11:16:03.105 254904 DEBUG nova.storage.rbd_utils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] rbd image 65125ac7-42c5-4e84-8f5f-4ffef2e430dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:16:03 np0005542249 nova_compute[254900]: 2025-12-02 11:16:03.111 254904 DEBUG oslo_concurrency.lockutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Acquiring lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:16:03 np0005542249 nova_compute[254900]: 2025-12-02 11:16:03.113 254904 DEBUG oslo_concurrency.lockutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:16:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Dec  2 06:16:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Dec  2 06:16:03 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Dec  2 06:16:03 np0005542249 nova_compute[254900]: 2025-12-02 11:16:03.435 254904 WARNING oslo_policy.policy [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Dec  2 06:16:03 np0005542249 nova_compute[254900]: 2025-12-02 11:16:03.437 254904 WARNING oslo_policy.policy [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Dec  2 06:16:03 np0005542249 nova_compute[254900]: 2025-12-02 11:16:03.442 254904 DEBUG nova.policy [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9f85e0bb4bf409d9172585b3149a0eb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a61fc9ba27754824af1c3e45b9ffaea3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 06:16:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v933: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 3.9 KiB/s wr, 94 op/s
Dec  2 06:16:03 np0005542249 nova_compute[254900]: 2025-12-02 11:16:03.829 254904 DEBUG nova.virt.libvirt.imagebackend [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Image locations are: [{'url': 'rbd://95bc4eaa-1a14-59bf-acf2-4b3da055547d/images/5a40f66c-ab43-47dd-9880-e59f9fa2c60e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://95bc4eaa-1a14-59bf-acf2-4b3da055547d/images/5a40f66c-ab43-47dd-9880-e59f9fa2c60e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Dec  2 06:16:04 np0005542249 nova_compute[254900]: 2025-12-02 11:16:04.430 254904 DEBUG nova.network.neutron [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] [instance: 65125ac7-42c5-4e84-8f5f-4ffef2e430dd] Successfully created port: 6e499fa4-3814-4b75-8ed5-7abb670692b1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 06:16:05 np0005542249 podman[261342]: 2025-12-02 11:16:05.017945739 +0000 UTC m=+0.084314335 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:16:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Dec  2 06:16:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Dec  2 06:16:05 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Dec  2 06:16:05 np0005542249 nova_compute[254900]: 2025-12-02 11:16:05.438 254904 DEBUG oslo_concurrency.processutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:16:05 np0005542249 nova_compute[254900]: 2025-12-02 11:16:05.532 254904 DEBUG oslo_concurrency.processutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2.part --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:16:05 np0005542249 nova_compute[254900]: 2025-12-02 11:16:05.534 254904 DEBUG nova.virt.images [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] 5a40f66c-ab43-47dd-9880-e59f9fa2c60e was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  2 06:16:05 np0005542249 nova_compute[254900]: 2025-12-02 11:16:05.535 254904 DEBUG nova.privsep.utils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  2 06:16:05 np0005542249 nova_compute[254900]: 2025-12-02 11:16:05.536 254904 DEBUG oslo_concurrency.processutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2.part /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:16:05 np0005542249 nova_compute[254900]: 2025-12-02 11:16:05.746 254904 DEBUG oslo_concurrency.processutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2.part /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2.converted" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:16:05 np0005542249 nova_compute[254900]: 2025-12-02 11:16:05.757 254904 DEBUG oslo_concurrency.processutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:16:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v935: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 2.5 KiB/s wr, 68 op/s
Dec  2 06:16:05 np0005542249 nova_compute[254900]: 2025-12-02 11:16:05.831 254904 DEBUG oslo_concurrency.processutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2.converted --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:16:05 np0005542249 nova_compute[254900]: 2025-12-02 11:16:05.833 254904 DEBUG oslo_concurrency.lockutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:16:05 np0005542249 nova_compute[254900]: 2025-12-02 11:16:05.866 254904 DEBUG nova.storage.rbd_utils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] rbd image 65125ac7-42c5-4e84-8f5f-4ffef2e430dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:16:05 np0005542249 nova_compute[254900]: 2025-12-02 11:16:05.871 254904 DEBUG oslo_concurrency.processutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 65125ac7-42c5-4e84-8f5f-4ffef2e430dd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:16:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:16:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:16:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:16:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:16:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:16:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:16:06 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev c6ffda2c-0db2-4303-8cbd-bd3ce741e265 does not exist
Dec  2 06:16:06 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev d76dc2e5-3a6f-4b67-891d-674a0b495923 does not exist
Dec  2 06:16:06 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 8a4e58f3-5d70-48e7-9e83-72db1a985008 does not exist
Dec  2 06:16:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:16:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:16:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:16:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:16:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:16:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:16:06 np0005542249 nova_compute[254900]: 2025-12-02 11:16:06.116 254904 DEBUG nova.network.neutron [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] [instance: 65125ac7-42c5-4e84-8f5f-4ffef2e430dd] Successfully updated port: 6e499fa4-3814-4b75-8ed5-7abb670692b1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 06:16:06 np0005542249 nova_compute[254900]: 2025-12-02 11:16:06.133 254904 DEBUG oslo_concurrency.lockutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Acquiring lock "refresh_cache-65125ac7-42c5-4e84-8f5f-4ffef2e430dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:16:06 np0005542249 nova_compute[254900]: 2025-12-02 11:16:06.134 254904 DEBUG oslo_concurrency.lockutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Acquired lock "refresh_cache-65125ac7-42c5-4e84-8f5f-4ffef2e430dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:16:06 np0005542249 nova_compute[254900]: 2025-12-02 11:16:06.134 254904 DEBUG nova.network.neutron [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] [instance: 65125ac7-42c5-4e84-8f5f-4ffef2e430dd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 06:16:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:16:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Dec  2 06:16:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Dec  2 06:16:06 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Dec  2 06:16:06 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:16:06 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:16:06 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:16:06 np0005542249 nova_compute[254900]: 2025-12-02 11:16:06.437 254904 DEBUG nova.network.neutron [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] [instance: 65125ac7-42c5-4e84-8f5f-4ffef2e430dd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 06:16:06 np0005542249 nova_compute[254900]: 2025-12-02 11:16:06.670 254904 DEBUG nova.compute.manager [req-5908c32e-e69d-4c5a-b129-ba9d6f49acc1 req-4689c58e-b2e1-4877-9b22-32e7ef23d735 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 65125ac7-42c5-4e84-8f5f-4ffef2e430dd] Received event network-changed-6e499fa4-3814-4b75-8ed5-7abb670692b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:16:06 np0005542249 nova_compute[254900]: 2025-12-02 11:16:06.672 254904 DEBUG nova.compute.manager [req-5908c32e-e69d-4c5a-b129-ba9d6f49acc1 req-4689c58e-b2e1-4877-9b22-32e7ef23d735 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 65125ac7-42c5-4e84-8f5f-4ffef2e430dd] Refreshing instance network info cache due to event network-changed-6e499fa4-3814-4b75-8ed5-7abb670692b1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:16:06 np0005542249 nova_compute[254900]: 2025-12-02 11:16:06.673 254904 DEBUG oslo_concurrency.lockutils [req-5908c32e-e69d-4c5a-b129-ba9d6f49acc1 req-4689c58e-b2e1-4877-9b22-32e7ef23d735 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-65125ac7-42c5-4e84-8f5f-4ffef2e430dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:16:06 np0005542249 podman[261677]: 2025-12-02 11:16:06.889494991 +0000 UTC m=+0.071962784 container create 6edbf792b85eb5e706167620f78e163cf15881562c74ea6cccec46113c6f875c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_aryabhata, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  2 06:16:06 np0005542249 systemd[1]: Started libpod-conmon-6edbf792b85eb5e706167620f78e163cf15881562c74ea6cccec46113c6f875c.scope.
Dec  2 06:16:06 np0005542249 podman[261677]: 2025-12-02 11:16:06.858209255 +0000 UTC m=+0.040677088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:16:06 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:16:07 np0005542249 podman[261677]: 2025-12-02 11:16:07.011486642 +0000 UTC m=+0.193954485 container init 6edbf792b85eb5e706167620f78e163cf15881562c74ea6cccec46113c6f875c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_aryabhata, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec  2 06:16:07 np0005542249 podman[261677]: 2025-12-02 11:16:07.0544546 +0000 UTC m=+0.236922393 container start 6edbf792b85eb5e706167620f78e163cf15881562c74ea6cccec46113c6f875c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  2 06:16:07 np0005542249 podman[261677]: 2025-12-02 11:16:07.058707744 +0000 UTC m=+0.241175597 container attach 6edbf792b85eb5e706167620f78e163cf15881562c74ea6cccec46113c6f875c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_aryabhata, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:16:07 np0005542249 systemd[1]: libpod-6edbf792b85eb5e706167620f78e163cf15881562c74ea6cccec46113c6f875c.scope: Deactivated successfully.
Dec  2 06:16:07 np0005542249 quirky_aryabhata[261693]: 167 167
Dec  2 06:16:07 np0005542249 podman[261677]: 2025-12-02 11:16:07.067142779 +0000 UTC m=+0.249610632 container died 6edbf792b85eb5e706167620f78e163cf15881562c74ea6cccec46113c6f875c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_aryabhata, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  2 06:16:07 np0005542249 conmon[261693]: conmon 6edbf792b85eb5e70616 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6edbf792b85eb5e706167620f78e163cf15881562c74ea6cccec46113c6f875c.scope/container/memory.events
Dec  2 06:16:07 np0005542249 systemd[1]: var-lib-containers-storage-overlay-9afd7ee90bc1bef84699f7c8c320b81bdd900a097d27220d6f54fb9393c41e9b-merged.mount: Deactivated successfully.
Dec  2 06:16:07 np0005542249 podman[261677]: 2025-12-02 11:16:07.114497095 +0000 UTC m=+0.296964858 container remove 6edbf792b85eb5e706167620f78e163cf15881562c74ea6cccec46113c6f875c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_aryabhata, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:16:07 np0005542249 systemd[1]: libpod-conmon-6edbf792b85eb5e706167620f78e163cf15881562c74ea6cccec46113c6f875c.scope: Deactivated successfully.
Dec  2 06:16:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:16:07 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3617672424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:16:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:16:07 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3617672424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:16:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Dec  2 06:16:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Dec  2 06:16:07 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Dec  2 06:16:07 np0005542249 podman[261718]: 2025-12-02 11:16:07.360734087 +0000 UTC m=+0.062131341 container create b26f64234869dc7a67392744eccec77e6ed475719185a19b82afa3bcf2260018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  2 06:16:07 np0005542249 podman[261718]: 2025-12-02 11:16:07.327694653 +0000 UTC m=+0.029091917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:16:07 np0005542249 systemd[1]: Started libpod-conmon-b26f64234869dc7a67392744eccec77e6ed475719185a19b82afa3bcf2260018.scope.
Dec  2 06:16:07 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:16:07 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d4f4379a19f04ed4fffa1b931f8356923fe9b0d2049466640328e6fc2315d39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:16:07 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d4f4379a19f04ed4fffa1b931f8356923fe9b0d2049466640328e6fc2315d39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:16:07 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d4f4379a19f04ed4fffa1b931f8356923fe9b0d2049466640328e6fc2315d39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:16:07 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d4f4379a19f04ed4fffa1b931f8356923fe9b0d2049466640328e6fc2315d39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:16:07 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d4f4379a19f04ed4fffa1b931f8356923fe9b0d2049466640328e6fc2315d39/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:16:07 np0005542249 podman[261718]: 2025-12-02 11:16:07.50039169 +0000 UTC m=+0.201788944 container init b26f64234869dc7a67392744eccec77e6ed475719185a19b82afa3bcf2260018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  2 06:16:07 np0005542249 podman[261718]: 2025-12-02 11:16:07.515488733 +0000 UTC m=+0.216885967 container start b26f64234869dc7a67392744eccec77e6ed475719185a19b82afa3bcf2260018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mclean, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:16:07 np0005542249 podman[261718]: 2025-12-02 11:16:07.527502544 +0000 UTC m=+0.228899808 container attach b26f64234869dc7a67392744eccec77e6ed475719185a19b82afa3bcf2260018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mclean, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.582 254904 DEBUG oslo_concurrency.processutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 65125ac7-42c5-4e84-8f5f-4ffef2e430dd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.711s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.671 254904 DEBUG nova.storage.rbd_utils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] resizing rbd image 65125ac7-42c5-4e84-8f5f-4ffef2e430dd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.768 254904 DEBUG nova.objects.instance [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Lazy-loading 'migration_context' on Instance uuid 65125ac7-42c5-4e84-8f5f-4ffef2e430dd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:16:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v938: 321 pgs: 321 active+clean; 50 MiB data, 192 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 440 KiB/s wr, 108 op/s
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.857 254904 DEBUG nova.virt.libvirt.driver [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] [instance: 65125ac7-42c5-4e84-8f5f-4ffef2e430dd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.858 254904 DEBUG nova.virt.libvirt.driver [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] [instance: 65125ac7-42c5-4e84-8f5f-4ffef2e430dd] Ensure instance console log exists: /var/lib/nova/instances/65125ac7-42c5-4e84-8f5f-4ffef2e430dd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.859 254904 DEBUG oslo_concurrency.lockutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.859 254904 DEBUG oslo_concurrency.lockutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.860 254904 DEBUG oslo_concurrency.lockutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.885 254904 DEBUG nova.network.neutron [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] [instance: 65125ac7-42c5-4e84-8f5f-4ffef2e430dd] Updating instance_info_cache with network_info: [{"id": "6e499fa4-3814-4b75-8ed5-7abb670692b1", "address": "fa:16:3e:77:b2:c1", "network": {"id": "e8f9b839-d066-4644-8f4d-76185ac7bd5e", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-255346915-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a61fc9ba27754824af1c3e45b9ffaea3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e499fa4-38", "ovs_interfaceid": "6e499fa4-3814-4b75-8ed5-7abb670692b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.958 254904 DEBUG oslo_concurrency.lockutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Releasing lock "refresh_cache-65125ac7-42c5-4e84-8f5f-4ffef2e430dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.959 254904 DEBUG nova.compute.manager [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] [instance: 65125ac7-42c5-4e84-8f5f-4ffef2e430dd] Instance network_info: |[{"id": "6e499fa4-3814-4b75-8ed5-7abb670692b1", "address": "fa:16:3e:77:b2:c1", "network": {"id": "e8f9b839-d066-4644-8f4d-76185ac7bd5e", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-255346915-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a61fc9ba27754824af1c3e45b9ffaea3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e499fa4-38", "ovs_interfaceid": "6e499fa4-3814-4b75-8ed5-7abb670692b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.960 254904 DEBUG oslo_concurrency.lockutils [req-5908c32e-e69d-4c5a-b129-ba9d6f49acc1 req-4689c58e-b2e1-4877-9b22-32e7ef23d735 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-65125ac7-42c5-4e84-8f5f-4ffef2e430dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.960 254904 DEBUG nova.network.neutron [req-5908c32e-e69d-4c5a-b129-ba9d6f49acc1 req-4689c58e-b2e1-4877-9b22-32e7ef23d735 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 65125ac7-42c5-4e84-8f5f-4ffef2e430dd] Refreshing network info cache for port 6e499fa4-3814-4b75-8ed5-7abb670692b1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.964 254904 DEBUG nova.virt.libvirt.driver [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] [instance: 65125ac7-42c5-4e84-8f5f-4ffef2e430dd] Start _get_guest_xml network_info=[{"id": "6e499fa4-3814-4b75-8ed5-7abb670692b1", "address": "fa:16:3e:77:b2:c1", "network": {"id": "e8f9b839-d066-4644-8f4d-76185ac7bd5e", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-255346915-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a61fc9ba27754824af1c3e45b9ffaea3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e499fa4-38", "ovs_interfaceid": "6e499fa4-3814-4b75-8ed5-7abb670692b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'image_id': '5a40f66c-ab43-47dd-9880-e59f9fa2c60e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.971 254904 WARNING nova.virt.libvirt.driver [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.977 254904 DEBUG nova.virt.libvirt.host [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.979 254904 DEBUG nova.virt.libvirt.host [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.983 254904 DEBUG nova.virt.libvirt.host [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.984 254904 DEBUG nova.virt.libvirt.host [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.984 254904 DEBUG nova.virt.libvirt.driver [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.984 254904 DEBUG nova.virt.hardware [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.985 254904 DEBUG nova.virt.hardware [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.985 254904 DEBUG nova.virt.hardware [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.986 254904 DEBUG nova.virt.hardware [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.986 254904 DEBUG nova.virt.hardware [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.986 254904 DEBUG nova.virt.hardware [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.986 254904 DEBUG nova.virt.hardware [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.987 254904 DEBUG nova.virt.hardware [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.987 254904 DEBUG nova.virt.hardware [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.987 254904 DEBUG nova.virt.hardware [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.988 254904 DEBUG nova.virt.hardware [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.991 254904 DEBUG nova.privsep.utils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  2 06:16:07 np0005542249 nova_compute[254900]: 2025-12-02 11:16:07.992 254904 DEBUG oslo_concurrency.processutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:16:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:16:08 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/358763583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:16:08 np0005542249 nova_compute[254900]: 2025-12-02 11:16:08.499 254904 DEBUG oslo_concurrency.processutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:16:08 np0005542249 nova_compute[254900]: 2025-12-02 11:16:08.530 254904 DEBUG nova.storage.rbd_utils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] rbd image 65125ac7-42c5-4e84-8f5f-4ffef2e430dd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:16:08 np0005542249 nova_compute[254900]: 2025-12-02 11:16:08.536 254904 DEBUG oslo_concurrency.processutils [None req-812544f8-d875-47db-9348-2d93d40eff13 f9f85e0bb4bf409d9172585b3149a0eb a61fc9ba27754824af1c3e45b9ffaea3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:16:08 np0005542249 awesome_mclean[261737]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:16:08 np0005542249 awesome_mclean[261737]: --> relative data size: 1.0
Dec  2 06:16:08 np0005542249 awesome_mclean[261737]: --> All data devices are unavailable
Dec  2 06:16:08 np0005542249 systemd[1]: libpod-b26f64234869dc7a67392744eccec77e6ed475719185a19b82afa3bcf2260018.scope: Deactivated successfully.
Dec  2 06:16:08 np0005542249 systemd[1]: libpod-b26f64234869dc7a67392744eccec77e6ed475719185a19b82afa3bcf2260018.scope: Consumed 1.214s CPU time.
Dec  2 06:16:08 np0005542249 podman[261718]: 2025-12-02 11:16:08.80316213 +0000 UTC m=+1.504559374 container died b26f64234869dc7a67392744eccec77e6ed475719185a19b82afa3bcf2260018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mclean, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 06:16:08 np0005542249 systemd[1]: var-lib-containers-storage-overlay-9d4f4379a19f04ed4fffa1b931f8356923fe9b0d2049466640328e6fc2315d39-merged.mount: Deactivated successfully.
Dec  2 06:16:08 np0005542249 podman[261718]: 2025-12-02 11:16:08.894346257 +0000 UTC m=+1.595743491 container remove b26f64234869dc7a67392744eccec77e6ed475719185a19b82afa3bcf2260018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mclean, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:17:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v995: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 3.2 KiB/s wr, 82 op/s
Dec  2 06:17:26 np0005542249 rsyslogd[1005]: imjournal: 2130 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec  2 06:17:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:17:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Dec  2 06:17:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Dec  2 06:17:26 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Dec  2 06:17:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:17:26
Dec  2 06:17:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:17:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:17:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'volumes', 'default.rgw.meta', 'vms', 'images', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control']
Dec  2 06:17:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:17:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:17:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:17:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:17:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:17:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:17:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:17:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:17:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:17:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:17:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:17:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:17:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:17:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:17:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:17:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:17:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:17:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:17:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1929546791' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:17:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:17:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1929546791' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:17:26 np0005542249 nova_compute[254900]: 2025-12-02 11:17:26.978 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:17:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:17:27 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3683738039' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:17:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:17:27 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3683738039' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:17:27 np0005542249 nova_compute[254900]: 2025-12-02 11:17:27.803 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:17:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v997: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 4.2 KiB/s wr, 123 op/s
Dec  2 06:17:29 np0005542249 nova_compute[254900]: 2025-12-02 11:17:29.245 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:17:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v998: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 3.7 KiB/s wr, 108 op/s
Dec  2 06:17:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:17:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Dec  2 06:17:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Dec  2 06:17:31 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Dec  2 06:17:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1000: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.4 KiB/s wr, 53 op/s
Dec  2 06:17:32 np0005542249 nova_compute[254900]: 2025-12-02 11:17:32.378 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:17:32 np0005542249 nova_compute[254900]: 2025-12-02 11:17:32.398 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:17:32 np0005542249 nova_compute[254900]: 2025-12-02 11:17:32.805 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:17:33 np0005542249 nova_compute[254900]: 2025-12-02 11:17:33.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:17:33 np0005542249 nova_compute[254900]: 2025-12-02 11:17:33.383 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:17:33 np0005542249 nova_compute[254900]: 2025-12-02 11:17:33.383 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:17:33 np0005542249 nova_compute[254900]: 2025-12-02 11:17:33.384 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 06:17:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1001: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.2 KiB/s wr, 41 op/s
Dec  2 06:17:34 np0005542249 nova_compute[254900]: 2025-12-02 11:17:34.248 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:17:34 np0005542249 nova_compute[254900]: 2025-12-02 11:17:34.379 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:17:34 np0005542249 nova_compute[254900]: 2025-12-02 11:17:34.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:17:34 np0005542249 nova_compute[254900]: 2025-12-02 11:17:34.381 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 06:17:34 np0005542249 nova_compute[254900]: 2025-12-02 11:17:34.382 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 06:17:34 np0005542249 nova_compute[254900]: 2025-12-02 11:17:34.418 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 06:17:34 np0005542249 nova_compute[254900]: 2025-12-02 11:17:34.418 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:17:35 np0005542249 podman[264591]: 2025-12-02 11:17:35.059551839 +0000 UTC m=+0.126413619 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  2 06:17:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1002: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.0 KiB/s wr, 34 op/s
Dec  2 06:17:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:17:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:17:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:17:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:17:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  2 06:17:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:17:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:17:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:17:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:17:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:17:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  2 06:17:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:17:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:17:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:17:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:17:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:17:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:17:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:17:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:17:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:17:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:17:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:17:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:17:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:17:36 np0005542249 nova_compute[254900]: 2025-12-02 11:17:36.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:17:36 np0005542249 nova_compute[254900]: 2025-12-02 11:17:36.410 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:17:36 np0005542249 nova_compute[254900]: 2025-12-02 11:17:36.411 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:17:36 np0005542249 nova_compute[254900]: 2025-12-02 11:17:36.411 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:17:36 np0005542249 nova_compute[254900]: 2025-12-02 11:17:36.411 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:17:36 np0005542249 nova_compute[254900]: 2025-12-02 11:17:36.412 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:17:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:17:36 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3307670274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:17:36 np0005542249 nova_compute[254900]: 2025-12-02 11:17:36.879 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:17:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:17:37.080 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:23:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:25:d0:a1:75:ed'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:17:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:17:37.081 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 06:17:37 np0005542249 nova_compute[254900]: 2025-12-02 11:17:37.114 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:17:37 np0005542249 nova_compute[254900]: 2025-12-02 11:17:37.124 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:17:37 np0005542249 nova_compute[254900]: 2025-12-02 11:17:37.125 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4714MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:17:37 np0005542249 nova_compute[254900]: 2025-12-02 11:17:37.125 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:17:37 np0005542249 nova_compute[254900]: 2025-12-02 11:17:37.125 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:17:37 np0005542249 nova_compute[254900]: 2025-12-02 11:17:37.193 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:17:37 np0005542249 nova_compute[254900]: 2025-12-02 11:17:37.194 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:17:37 np0005542249 nova_compute[254900]: 2025-12-02 11:17:37.218 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:17:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:17:37 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/638710229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:17:37 np0005542249 nova_compute[254900]: 2025-12-02 11:17:37.688 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:17:37 np0005542249 nova_compute[254900]: 2025-12-02 11:17:37.697 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:17:37 np0005542249 nova_compute[254900]: 2025-12-02 11:17:37.714 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:17:37 np0005542249 nova_compute[254900]: 2025-12-02 11:17:37.740 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 06:17:37 np0005542249 nova_compute[254900]: 2025-12-02 11:17:37.741 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:17:37 np0005542249 nova_compute[254900]: 2025-12-02 11:17:37.806 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:17:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1003: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 307 B/s wr, 4 op/s
Dec  2 06:17:37 np0005542249 podman[264662]: 2025-12-02 11:17:37.990057356 +0000 UTC m=+0.068481891 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:17:39 np0005542249 nova_compute[254900]: 2025-12-02 11:17:39.251 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:17:39 np0005542249 nova_compute[254900]: 2025-12-02 11:17:39.742 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:17:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1004: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 307 B/s wr, 3 op/s
Dec  2 06:17:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:17:40.083 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:17:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:17:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1005: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 290 B/s wr, 3 op/s
Dec  2 06:17:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:17:42 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/352006227' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:17:42 np0005542249 nova_compute[254900]: 2025-12-02 11:17:42.808 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:17:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Dec  2 06:17:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Dec  2 06:17:42 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Dec  2 06:17:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1007: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 1.5 KiB/s wr, 4 op/s
Dec  2 06:17:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Dec  2 06:17:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Dec  2 06:17:43 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Dec  2 06:17:44 np0005542249 nova_compute[254900]: 2025-12-02 11:17:44.254 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:17:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:17:44 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2961629498' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:17:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:17:44 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2961629498' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:17:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:17:44 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1572425236' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:17:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:17:44 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1572425236' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:17:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Dec  2 06:17:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Dec  2 06:17:45 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Dec  2 06:17:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:17:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1839551697' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:17:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:17:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1839551697' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:17:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1010: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 3.5 KiB/s wr, 25 op/s
Dec  2 06:17:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:17:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2671345468' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:17:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:17:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2671345468' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:17:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Dec  2 06:17:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Dec  2 06:17:46 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Dec  2 06:17:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:17:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:17:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3434447087' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:17:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:17:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3434447087' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:17:47 np0005542249 nova_compute[254900]: 2025-12-02 11:17:47.811 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:17:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1012: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 8.2 KiB/s wr, 144 op/s
Dec  2 06:17:49 np0005542249 nova_compute[254900]: 2025-12-02 11:17:49.257 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:17:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:17:49 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3801843087' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:17:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:17:49 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/376885648' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:17:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:17:49 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/376885648' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:17:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1013: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 7.3 KiB/s wr, 151 op/s
Dec  2 06:17:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Dec  2 06:17:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Dec  2 06:17:50 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Dec  2 06:17:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:17:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/153410369' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:17:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:17:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/153410369' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:17:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:17:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Dec  2 06:17:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Dec  2 06:17:51 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Dec  2 06:17:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1016: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 5.8 KiB/s wr, 132 op/s
Dec  2 06:17:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Dec  2 06:17:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Dec  2 06:17:52 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Dec  2 06:17:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:17:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1121761667' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:17:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:17:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1121761667' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:17:52 np0005542249 nova_compute[254900]: 2025-12-02 11:17:52.813 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:17:53.312809) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674273312880, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1371, "num_deletes": 257, "total_data_size": 1780250, "memory_usage": 1807616, "flush_reason": "Manual Compaction"}
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674273330180, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1757772, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19946, "largest_seqno": 21316, "table_properties": {"data_size": 1751163, "index_size": 3684, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15099, "raw_average_key_size": 20, "raw_value_size": 1737511, "raw_average_value_size": 2409, "num_data_blocks": 164, "num_entries": 721, "num_filter_entries": 721, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764674181, "oldest_key_time": 1764674181, "file_creation_time": 1764674273, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 17435 microseconds, and 10043 cpu microseconds.
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:17:53.330245) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1757772 bytes OK
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:17:53.330278) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:17:53.331993) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:17:53.332114) EVENT_LOG_v1 {"time_micros": 1764674273332020, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:17:53.332144) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1773852, prev total WAL file size 1773852, number of live WAL files 2.
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:17:53.333070) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1716KB)], [47(7433KB)]
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674273333132, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9370104, "oldest_snapshot_seqno": -1}
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4513 keys, 7615912 bytes, temperature: kUnknown
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674273397887, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7615912, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7584052, "index_size": 19462, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11333, "raw_key_size": 112059, "raw_average_key_size": 24, "raw_value_size": 7500663, "raw_average_value_size": 1662, "num_data_blocks": 809, "num_entries": 4513, "num_filter_entries": 4513, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672515, "oldest_key_time": 0, "file_creation_time": 1764674273, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:17:53.398348) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7615912 bytes
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:17:53.400348) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 144.3 rd, 117.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.3 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(9.7) write-amplify(4.3) OK, records in: 5039, records dropped: 526 output_compression: NoCompression
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:17:53.400380) EVENT_LOG_v1 {"time_micros": 1764674273400364, "job": 24, "event": "compaction_finished", "compaction_time_micros": 64924, "compaction_time_cpu_micros": 35442, "output_level": 6, "num_output_files": 1, "total_output_size": 7615912, "num_input_records": 5039, "num_output_records": 4513, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674273401214, "job": 24, "event": "table_file_deletion", "file_number": 49}
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674273403919, "job": 24, "event": "table_file_deletion", "file_number": 47}
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:17:53.332891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:17:53.404034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:17:53.404043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:17:53.404046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:17:53.404049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:17:53 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:17:53.404053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:17:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1018: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 2.0 KiB/s wr, 107 op/s
Dec  2 06:17:54 np0005542249 nova_compute[254900]: 2025-12-02 11:17:54.262 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:17:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Dec  2 06:17:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Dec  2 06:17:54 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Dec  2 06:17:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1020: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 2.6 KiB/s wr, 131 op/s
Dec  2 06:17:56 np0005542249 podman[264683]: 2025-12-02 11:17:56.045664317 +0000 UTC m=+0.111913913 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  2 06:17:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:17:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Dec  2 06:17:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Dec  2 06:17:56 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Dec  2 06:17:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:17:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:17:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:17:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:17:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:17:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:17:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:17:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1956208189' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:17:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:17:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1956208189' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:17:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1022: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 909 KiB/s rd, 5.0 KiB/s wr, 200 op/s
Dec  2 06:17:57 np0005542249 nova_compute[254900]: 2025-12-02 11:17:57.870 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:17:59 np0005542249 nova_compute[254900]: 2025-12-02 11:17:59.301 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:17:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:17:59 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3708644686' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:17:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1023: 321 pgs: 321 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.4 KiB/s wr, 134 op/s
Dec  2 06:18:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Dec  2 06:18:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Dec  2 06:18:00 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Dec  2 06:18:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:18:00 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2572879206' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:18:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:18:00 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2572879206' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:18:00 np0005542249 ovn_controller[153849]: 2025-12-02T11:18:00Z|00043|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Dec  2 06:18:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:18:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Dec  2 06:18:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Dec  2 06:18:01 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Dec  2 06:18:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1026: 321 pgs: 321 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.3 KiB/s wr, 92 op/s
Dec  2 06:18:02 np0005542249 nova_compute[254900]: 2025-12-02 11:18:02.871 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Dec  2 06:18:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Dec  2 06:18:03 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Dec  2 06:18:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1028: 321 pgs: 321 active+clean; 69 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 75 op/s
Dec  2 06:18:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:18:04 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2322491266' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:18:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:18:04 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2322491266' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:18:04 np0005542249 nova_compute[254900]: 2025-12-02 11:18:04.304 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Dec  2 06:18:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Dec  2 06:18:05 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Dec  2 06:18:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1030: 321 pgs: 321 active+clean; 69 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 4.0 MiB/s wr, 176 op/s
Dec  2 06:18:06 np0005542249 podman[264708]: 2025-12-02 11:18:06.068557002 +0000 UTC m=+0.136964721 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  2 06:18:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:18:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:18:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2873807251' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:18:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:18:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2873807251' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:18:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1031: 321 pgs: 321 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 129 KiB/s rd, 3.2 MiB/s wr, 188 op/s
Dec  2 06:18:07 np0005542249 nova_compute[254900]: 2025-12-02 11:18:07.873 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:08 np0005542249 podman[264734]: 2025-12-02 11:18:08.973173478 +0000 UTC m=+0.054559269 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  2 06:18:09 np0005542249 nova_compute[254900]: 2025-12-02 11:18:09.308 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1032: 321 pgs: 321 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 2.7 MiB/s wr, 175 op/s
Dec  2 06:18:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:18:10 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1023294605' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:18:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:18:10 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1023294605' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:18:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Dec  2 06:18:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Dec  2 06:18:10 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Dec  2 06:18:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:18:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Dec  2 06:18:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Dec  2 06:18:11 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Dec  2 06:18:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1035: 321 pgs: 321 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 4.1 KiB/s wr, 78 op/s
Dec  2 06:18:12 np0005542249 nova_compute[254900]: 2025-12-02 11:18:12.876 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Dec  2 06:18:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Dec  2 06:18:13 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Dec  2 06:18:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1037: 321 pgs: 321 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.7 KiB/s wr, 56 op/s
Dec  2 06:18:14 np0005542249 nova_compute[254900]: 2025-12-02 11:18:14.312 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:18:15 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/126740195' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:18:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:18:15 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/126740195' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:18:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1038: 321 pgs: 321 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.0 KiB/s wr, 58 op/s
Dec  2 06:18:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:18:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1039: 321 pgs: 321 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.3 KiB/s wr, 59 op/s
Dec  2 06:18:17 np0005542249 nova_compute[254900]: 2025-12-02 11:18:17.878 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:19 np0005542249 nova_compute[254900]: 2025-12-02 11:18:19.352 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:19.832 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:18:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:19.832 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:18:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:19.833 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:18:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1040: 321 pgs: 321 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 2.8 KiB/s wr, 62 op/s
Dec  2 06:18:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:18:21 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3452373295' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:18:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:18:21 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3452373295' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:18:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:18:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Dec  2 06:18:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Dec  2 06:18:21 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Dec  2 06:18:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1042: 321 pgs: 321 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.8 KiB/s wr, 42 op/s
Dec  2 06:18:22 np0005542249 nova_compute[254900]: 2025-12-02 11:18:22.926 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Dec  2 06:18:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Dec  2 06:18:23 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Dec  2 06:18:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1044: 321 pgs: 321 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.5 KiB/s wr, 43 op/s
Dec  2 06:18:24 np0005542249 nova_compute[254900]: 2025-12-02 11:18:24.355 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Dec  2 06:18:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Dec  2 06:18:24 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Dec  2 06:18:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:18:24 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:18:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:18:24 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:18:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:18:24 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:18:24 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev a100b201-6449-4db2-ba94-59707647d0ae does not exist
Dec  2 06:18:24 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev ebc75edc-38ce-4fbc-8b57-f0aae3069112 does not exist
Dec  2 06:18:24 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 6643d4e5-0606-4e93-ad57-10ddd9903c8a does not exist
Dec  2 06:18:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:18:24 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:18:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:18:24 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:18:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:18:24 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:18:25 np0005542249 podman[265026]: 2025-12-02 11:18:25.332493614 +0000 UTC m=+0.080296047 container create 7debe9945ba1333804b8f7f40447dd3fccabe5124436d2479ca3ab44e3034484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_beaver, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  2 06:18:25 np0005542249 systemd[1]: Started libpod-conmon-7debe9945ba1333804b8f7f40447dd3fccabe5124436d2479ca3ab44e3034484.scope.
Dec  2 06:18:25 np0005542249 podman[265026]: 2025-12-02 11:18:25.299498732 +0000 UTC m=+0.047301225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:18:25 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:18:25 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:18:25 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:18:25 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:18:25 np0005542249 podman[265026]: 2025-12-02 11:18:25.440748017 +0000 UTC m=+0.188550450 container init 7debe9945ba1333804b8f7f40447dd3fccabe5124436d2479ca3ab44e3034484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_beaver, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:18:25 np0005542249 podman[265026]: 2025-12-02 11:18:25.453628192 +0000 UTC m=+0.201430595 container start 7debe9945ba1333804b8f7f40447dd3fccabe5124436d2479ca3ab44e3034484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_beaver, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  2 06:18:25 np0005542249 podman[265026]: 2025-12-02 11:18:25.457365152 +0000 UTC m=+0.205167595 container attach 7debe9945ba1333804b8f7f40447dd3fccabe5124436d2479ca3ab44e3034484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  2 06:18:25 np0005542249 youthful_beaver[265043]: 167 167
Dec  2 06:18:25 np0005542249 systemd[1]: libpod-7debe9945ba1333804b8f7f40447dd3fccabe5124436d2479ca3ab44e3034484.scope: Deactivated successfully.
Dec  2 06:18:25 np0005542249 podman[265026]: 2025-12-02 11:18:25.464598435 +0000 UTC m=+0.212400858 container died 7debe9945ba1333804b8f7f40447dd3fccabe5124436d2479ca3ab44e3034484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:18:25 np0005542249 systemd[1]: var-lib-containers-storage-overlay-a05155738ffc4f0e9da954634d020170f9c066f5f7652a2c711b0319ad91688b-merged.mount: Deactivated successfully.
Dec  2 06:18:25 np0005542249 podman[265026]: 2025-12-02 11:18:25.517255493 +0000 UTC m=+0.265057926 container remove 7debe9945ba1333804b8f7f40447dd3fccabe5124436d2479ca3ab44e3034484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_beaver, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:18:25 np0005542249 systemd[1]: libpod-conmon-7debe9945ba1333804b8f7f40447dd3fccabe5124436d2479ca3ab44e3034484.scope: Deactivated successfully.
Dec  2 06:18:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Dec  2 06:18:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Dec  2 06:18:25 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Dec  2 06:18:25 np0005542249 podman[265066]: 2025-12-02 11:18:25.756480477 +0000 UTC m=+0.072082158 container create 4b64c1ce6fc233b08935eb1735cc877eaf14ba9a46a85c7f41c1a64e4940d9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_antonelli, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  2 06:18:25 np0005542249 systemd[1]: Started libpod-conmon-4b64c1ce6fc233b08935eb1735cc877eaf14ba9a46a85c7f41c1a64e4940d9ab.scope.
Dec  2 06:18:25 np0005542249 podman[265066]: 2025-12-02 11:18:25.726377572 +0000 UTC m=+0.041979313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:18:25 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:18:25 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de35b2d85214555cbf24969ff9c9743994f3b3038b81d45d57518a7281803cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:18:25 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de35b2d85214555cbf24969ff9c9743994f3b3038b81d45d57518a7281803cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:18:25 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de35b2d85214555cbf24969ff9c9743994f3b3038b81d45d57518a7281803cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:18:25 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de35b2d85214555cbf24969ff9c9743994f3b3038b81d45d57518a7281803cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:18:25 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de35b2d85214555cbf24969ff9c9743994f3b3038b81d45d57518a7281803cb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:18:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1047: 321 pgs: 321 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 3.3 KiB/s wr, 76 op/s
Dec  2 06:18:25 np0005542249 podman[265066]: 2025-12-02 11:18:25.882667899 +0000 UTC m=+0.198269580 container init 4b64c1ce6fc233b08935eb1735cc877eaf14ba9a46a85c7f41c1a64e4940d9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_antonelli, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:18:25 np0005542249 podman[265066]: 2025-12-02 11:18:25.905658093 +0000 UTC m=+0.221259784 container start 4b64c1ce6fc233b08935eb1735cc877eaf14ba9a46a85c7f41c1a64e4940d9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_antonelli, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:18:25 np0005542249 podman[265066]: 2025-12-02 11:18:25.909862876 +0000 UTC m=+0.225464537 container attach 4b64c1ce6fc233b08935eb1735cc877eaf14ba9a46a85c7f41c1a64e4940d9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:18:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:18:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:18:26
Dec  2 06:18:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:18:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:18:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', '.rgw.root', 'vms', 'images', '.mgr', 'default.rgw.meta', 'backups', 'default.rgw.log']
Dec  2 06:18:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:18:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:18:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1669450595' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:18:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:18:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1669450595' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:18:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:18:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:18:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:18:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:18:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:18:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:18:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:18:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:18:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:18:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:18:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:18:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:18:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:18:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:18:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:18:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:18:27 np0005542249 podman[265102]: 2025-12-02 11:18:27.034516016 +0000 UTC m=+0.105398768 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 06:18:27 np0005542249 fervent_antonelli[265082]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:18:27 np0005542249 fervent_antonelli[265082]: --> relative data size: 1.0
Dec  2 06:18:27 np0005542249 fervent_antonelli[265082]: --> All data devices are unavailable
Dec  2 06:18:27 np0005542249 systemd[1]: libpod-4b64c1ce6fc233b08935eb1735cc877eaf14ba9a46a85c7f41c1a64e4940d9ab.scope: Deactivated successfully.
Dec  2 06:18:27 np0005542249 systemd[1]: libpod-4b64c1ce6fc233b08935eb1735cc877eaf14ba9a46a85c7f41c1a64e4940d9ab.scope: Consumed 1.249s CPU time.
Dec  2 06:18:27 np0005542249 podman[265132]: 2025-12-02 11:18:27.271691965 +0000 UTC m=+0.047634694 container died 4b64c1ce6fc233b08935eb1735cc877eaf14ba9a46a85c7f41c1a64e4940d9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec  2 06:18:27 np0005542249 systemd[1]: var-lib-containers-storage-overlay-6de35b2d85214555cbf24969ff9c9743994f3b3038b81d45d57518a7281803cb-merged.mount: Deactivated successfully.
Dec  2 06:18:27 np0005542249 podman[265132]: 2025-12-02 11:18:27.381944832 +0000 UTC m=+0.157887491 container remove 4b64c1ce6fc233b08935eb1735cc877eaf14ba9a46a85c7f41c1a64e4940d9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_antonelli, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:18:27 np0005542249 systemd[1]: libpod-conmon-4b64c1ce6fc233b08935eb1735cc877eaf14ba9a46a85c7f41c1a64e4940d9ab.scope: Deactivated successfully.
Dec  2 06:18:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1048: 321 pgs: 321 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 3.2 KiB/s wr, 66 op/s
Dec  2 06:18:27 np0005542249 nova_compute[254900]: 2025-12-02 11:18:27.929 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:28 np0005542249 podman[265286]: 2025-12-02 11:18:28.192115056 +0000 UTC m=+0.056760788 container create e1b1cc4c7274a6054e865f53127c6ae87a7bce63b5b926c9e7dfd373348d638c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  2 06:18:28 np0005542249 systemd[1]: Started libpod-conmon-e1b1cc4c7274a6054e865f53127c6ae87a7bce63b5b926c9e7dfd373348d638c.scope.
Dec  2 06:18:28 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:18:28 np0005542249 podman[265286]: 2025-12-02 11:18:28.17465001 +0000 UTC m=+0.039295722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:18:28 np0005542249 podman[265286]: 2025-12-02 11:18:28.286739585 +0000 UTC m=+0.151385297 container init e1b1cc4c7274a6054e865f53127c6ae87a7bce63b5b926c9e7dfd373348d638c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  2 06:18:28 np0005542249 podman[265286]: 2025-12-02 11:18:28.298945172 +0000 UTC m=+0.163590904 container start e1b1cc4c7274a6054e865f53127c6ae87a7bce63b5b926c9e7dfd373348d638c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_buck, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  2 06:18:28 np0005542249 podman[265286]: 2025-12-02 11:18:28.303726139 +0000 UTC m=+0.168371851 container attach e1b1cc4c7274a6054e865f53127c6ae87a7bce63b5b926c9e7dfd373348d638c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  2 06:18:28 np0005542249 blissful_buck[265302]: 167 167
Dec  2 06:18:28 np0005542249 systemd[1]: libpod-e1b1cc4c7274a6054e865f53127c6ae87a7bce63b5b926c9e7dfd373348d638c.scope: Deactivated successfully.
Dec  2 06:18:28 np0005542249 podman[265286]: 2025-12-02 11:18:28.307192203 +0000 UTC m=+0.171837935 container died e1b1cc4c7274a6054e865f53127c6ae87a7bce63b5b926c9e7dfd373348d638c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:18:28 np0005542249 systemd[1]: var-lib-containers-storage-overlay-fbff31348ddb07ab159977e9dfcb22b8a36d8c181293bf5a30f07e9251829faf-merged.mount: Deactivated successfully.
Dec  2 06:18:28 np0005542249 podman[265286]: 2025-12-02 11:18:28.357749544 +0000 UTC m=+0.222395276 container remove e1b1cc4c7274a6054e865f53127c6ae87a7bce63b5b926c9e7dfd373348d638c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  2 06:18:28 np0005542249 systemd[1]: libpod-conmon-e1b1cc4c7274a6054e865f53127c6ae87a7bce63b5b926c9e7dfd373348d638c.scope: Deactivated successfully.
Dec  2 06:18:28 np0005542249 podman[265326]: 2025-12-02 11:18:28.581485573 +0000 UTC m=+0.071519992 container create 0ae1752df5a94bf3ff3a7f32dd9f5eb95061c03d07e5643627aac97763f6e59a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_moore, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Dec  2 06:18:28 np0005542249 systemd[1]: Started libpod-conmon-0ae1752df5a94bf3ff3a7f32dd9f5eb95061c03d07e5643627aac97763f6e59a.scope.
Dec  2 06:18:28 np0005542249 podman[265326]: 2025-12-02 11:18:28.553241598 +0000 UTC m=+0.043276067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:18:28 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:18:28 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b0ca28efd5f35d0e10c440ffc33558299a4f8cc9b0867c0a0d1867d2624b746/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:18:28 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b0ca28efd5f35d0e10c440ffc33558299a4f8cc9b0867c0a0d1867d2624b746/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:18:28 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b0ca28efd5f35d0e10c440ffc33558299a4f8cc9b0867c0a0d1867d2624b746/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:18:28 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b0ca28efd5f35d0e10c440ffc33558299a4f8cc9b0867c0a0d1867d2624b746/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:18:28 np0005542249 podman[265326]: 2025-12-02 11:18:28.697283488 +0000 UTC m=+0.187317967 container init 0ae1752df5a94bf3ff3a7f32dd9f5eb95061c03d07e5643627aac97763f6e59a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_moore, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:18:28 np0005542249 podman[265326]: 2025-12-02 11:18:28.704203363 +0000 UTC m=+0.194237772 container start 0ae1752df5a94bf3ff3a7f32dd9f5eb95061c03d07e5643627aac97763f6e59a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  2 06:18:28 np0005542249 podman[265326]: 2025-12-02 11:18:28.707733238 +0000 UTC m=+0.197767627 container attach 0ae1752df5a94bf3ff3a7f32dd9f5eb95061c03d07e5643627aac97763f6e59a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_moore, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:18:29 np0005542249 nova_compute[254900]: 2025-12-02 11:18:29.359 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:29 np0005542249 agitated_moore[265342]: {
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:    "0": [
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:        {
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "devices": [
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "/dev/loop3"
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            ],
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "lv_name": "ceph_lv0",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "lv_size": "21470642176",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "name": "ceph_lv0",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "tags": {
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.cluster_name": "ceph",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.crush_device_class": "",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.encrypted": "0",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.osd_id": "0",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.type": "block",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.vdo": "0"
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            },
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "type": "block",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "vg_name": "ceph_vg0"
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:        }
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:    ],
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:    "1": [
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:        {
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "devices": [
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "/dev/loop4"
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            ],
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "lv_name": "ceph_lv1",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "lv_size": "21470642176",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "name": "ceph_lv1",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "tags": {
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.cluster_name": "ceph",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.crush_device_class": "",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.encrypted": "0",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.osd_id": "1",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.type": "block",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.vdo": "0"
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            },
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "type": "block",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "vg_name": "ceph_vg1"
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:        }
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:    ],
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:    "2": [
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:        {
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "devices": [
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "/dev/loop5"
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            ],
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "lv_name": "ceph_lv2",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "lv_size": "21470642176",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "name": "ceph_lv2",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "tags": {
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.cluster_name": "ceph",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.crush_device_class": "",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.encrypted": "0",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.osd_id": "2",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.type": "block",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:                "ceph.vdo": "0"
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            },
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "type": "block",
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:            "vg_name": "ceph_vg2"
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:        }
Dec  2 06:18:29 np0005542249 agitated_moore[265342]:    ]
Dec  2 06:18:29 np0005542249 agitated_moore[265342]: }
Dec  2 06:18:29 np0005542249 systemd[1]: libpod-0ae1752df5a94bf3ff3a7f32dd9f5eb95061c03d07e5643627aac97763f6e59a.scope: Deactivated successfully.
Dec  2 06:18:29 np0005542249 podman[265326]: 2025-12-02 11:18:29.591570541 +0000 UTC m=+1.081604970 container died 0ae1752df5a94bf3ff3a7f32dd9f5eb95061c03d07e5643627aac97763f6e59a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_moore, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:18:29 np0005542249 systemd[1]: var-lib-containers-storage-overlay-5b0ca28efd5f35d0e10c440ffc33558299a4f8cc9b0867c0a0d1867d2624b746-merged.mount: Deactivated successfully.
Dec  2 06:18:29 np0005542249 podman[265326]: 2025-12-02 11:18:29.656816885 +0000 UTC m=+1.146851284 container remove 0ae1752df5a94bf3ff3a7f32dd9f5eb95061c03d07e5643627aac97763f6e59a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:18:29 np0005542249 systemd[1]: libpod-conmon-0ae1752df5a94bf3ff3a7f32dd9f5eb95061c03d07e5643627aac97763f6e59a.scope: Deactivated successfully.
Dec  2 06:18:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1049: 321 pgs: 321 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 4.2 KiB/s wr, 91 op/s
Dec  2 06:18:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Dec  2 06:18:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Dec  2 06:18:29 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Dec  2 06:18:30 np0005542249 podman[265507]: 2025-12-02 11:18:30.566772026 +0000 UTC m=+0.059905782 container create 53de0d6837afb02dcfdf832dfc53429be2570c5b4d0e29bbfabaeed49147c367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  2 06:18:30 np0005542249 systemd[1]: Started libpod-conmon-53de0d6837afb02dcfdf832dfc53429be2570c5b4d0e29bbfabaeed49147c367.scope.
Dec  2 06:18:30 np0005542249 podman[265507]: 2025-12-02 11:18:30.539959189 +0000 UTC m=+0.033092985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:18:30 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:18:30 np0005542249 podman[265507]: 2025-12-02 11:18:30.67392622 +0000 UTC m=+0.167060016 container init 53de0d6837afb02dcfdf832dfc53429be2570c5b4d0e29bbfabaeed49147c367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:18:30 np0005542249 podman[265507]: 2025-12-02 11:18:30.686313322 +0000 UTC m=+0.179447088 container start 53de0d6837afb02dcfdf832dfc53429be2570c5b4d0e29bbfabaeed49147c367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  2 06:18:30 np0005542249 podman[265507]: 2025-12-02 11:18:30.6926403 +0000 UTC m=+0.185774066 container attach 53de0d6837afb02dcfdf832dfc53429be2570c5b4d0e29bbfabaeed49147c367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  2 06:18:30 np0005542249 thirsty_yonath[265523]: 167 167
Dec  2 06:18:30 np0005542249 systemd[1]: libpod-53de0d6837afb02dcfdf832dfc53429be2570c5b4d0e29bbfabaeed49147c367.scope: Deactivated successfully.
Dec  2 06:18:30 np0005542249 podman[265507]: 2025-12-02 11:18:30.697556132 +0000 UTC m=+0.190689888 container died 53de0d6837afb02dcfdf832dfc53429be2570c5b4d0e29bbfabaeed49147c367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  2 06:18:30 np0005542249 systemd[1]: var-lib-containers-storage-overlay-63eed7308562d32ff17b0baef678ba03a638b1b342a89edec283c73dd33760d1-merged.mount: Deactivated successfully.
Dec  2 06:18:30 np0005542249 podman[265507]: 2025-12-02 11:18:30.752766947 +0000 UTC m=+0.245900693 container remove 53de0d6837afb02dcfdf832dfc53429be2570c5b4d0e29bbfabaeed49147c367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:18:30 np0005542249 systemd[1]: libpod-conmon-53de0d6837afb02dcfdf832dfc53429be2570c5b4d0e29bbfabaeed49147c367.scope: Deactivated successfully.
Dec  2 06:18:30 np0005542249 podman[265546]: 2025-12-02 11:18:30.999114162 +0000 UTC m=+0.071165034 container create 29c10e11437c38c38299c26b0c3571b16aec614a263b151862f423231a8ac9f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:18:31 np0005542249 systemd[1]: Started libpod-conmon-29c10e11437c38c38299c26b0c3571b16aec614a263b151862f423231a8ac9f1.scope.
Dec  2 06:18:31 np0005542249 podman[265546]: 2025-12-02 11:18:30.970632721 +0000 UTC m=+0.042683643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:18:31 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:18:31 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07ed218f8c2ed5f7cd8500ad8dd4c6fd2ae8119b716fc03ec22ecc107373465e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:18:31 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07ed218f8c2ed5f7cd8500ad8dd4c6fd2ae8119b716fc03ec22ecc107373465e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:18:31 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07ed218f8c2ed5f7cd8500ad8dd4c6fd2ae8119b716fc03ec22ecc107373465e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:18:31 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07ed218f8c2ed5f7cd8500ad8dd4c6fd2ae8119b716fc03ec22ecc107373465e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:18:31 np0005542249 podman[265546]: 2025-12-02 11:18:31.12699234 +0000 UTC m=+0.199043282 container init 29c10e11437c38c38299c26b0c3571b16aec614a263b151862f423231a8ac9f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kepler, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Dec  2 06:18:31 np0005542249 podman[265546]: 2025-12-02 11:18:31.142328329 +0000 UTC m=+0.214379201 container start 29c10e11437c38c38299c26b0c3571b16aec614a263b151862f423231a8ac9f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kepler, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  2 06:18:31 np0005542249 podman[265546]: 2025-12-02 11:18:31.14647972 +0000 UTC m=+0.218530602 container attach 29c10e11437c38c38299c26b0c3571b16aec614a263b151862f423231a8ac9f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kepler, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:18:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:18:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Dec  2 06:18:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Dec  2 06:18:31 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Dec  2 06:18:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1052: 321 pgs: 321 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 2.8 KiB/s wr, 64 op/s
Dec  2 06:18:32 np0005542249 serene_kepler[265563]: {
Dec  2 06:18:32 np0005542249 serene_kepler[265563]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:18:32 np0005542249 serene_kepler[265563]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:18:32 np0005542249 serene_kepler[265563]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:18:32 np0005542249 serene_kepler[265563]:        "osd_id": 0,
Dec  2 06:18:32 np0005542249 serene_kepler[265563]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:18:32 np0005542249 serene_kepler[265563]:        "type": "bluestore"
Dec  2 06:18:32 np0005542249 serene_kepler[265563]:    },
Dec  2 06:18:32 np0005542249 serene_kepler[265563]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:18:32 np0005542249 serene_kepler[265563]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:18:32 np0005542249 serene_kepler[265563]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:18:32 np0005542249 serene_kepler[265563]:        "osd_id": 2,
Dec  2 06:18:32 np0005542249 serene_kepler[265563]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:18:32 np0005542249 serene_kepler[265563]:        "type": "bluestore"
Dec  2 06:18:32 np0005542249 serene_kepler[265563]:    },
Dec  2 06:18:32 np0005542249 serene_kepler[265563]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:18:32 np0005542249 serene_kepler[265563]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:18:32 np0005542249 serene_kepler[265563]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:18:32 np0005542249 serene_kepler[265563]:        "osd_id": 1,
Dec  2 06:18:32 np0005542249 serene_kepler[265563]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:18:32 np0005542249 serene_kepler[265563]:        "type": "bluestore"
Dec  2 06:18:32 np0005542249 serene_kepler[265563]:    }
Dec  2 06:18:32 np0005542249 serene_kepler[265563]: }
Dec  2 06:18:32 np0005542249 systemd[1]: libpod-29c10e11437c38c38299c26b0c3571b16aec614a263b151862f423231a8ac9f1.scope: Deactivated successfully.
Dec  2 06:18:32 np0005542249 systemd[1]: libpod-29c10e11437c38c38299c26b0c3571b16aec614a263b151862f423231a8ac9f1.scope: Consumed 1.481s CPU time.
Dec  2 06:18:32 np0005542249 conmon[265563]: conmon 29c10e11437c38c38299 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-29c10e11437c38c38299c26b0c3571b16aec614a263b151862f423231a8ac9f1.scope/container/memory.events
Dec  2 06:18:32 np0005542249 podman[265546]: 2025-12-02 11:18:32.621188288 +0000 UTC m=+1.693239140 container died 29c10e11437c38c38299c26b0c3571b16aec614a263b151862f423231a8ac9f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  2 06:18:32 np0005542249 systemd[1]: var-lib-containers-storage-overlay-07ed218f8c2ed5f7cd8500ad8dd4c6fd2ae8119b716fc03ec22ecc107373465e-merged.mount: Deactivated successfully.
Dec  2 06:18:32 np0005542249 podman[265546]: 2025-12-02 11:18:32.683358399 +0000 UTC m=+1.755409281 container remove 29c10e11437c38c38299c26b0c3571b16aec614a263b151862f423231a8ac9f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 06:18:32 np0005542249 systemd[1]: libpod-conmon-29c10e11437c38c38299c26b0c3571b16aec614a263b151862f423231a8ac9f1.scope: Deactivated successfully.
Dec  2 06:18:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:18:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:18:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:18:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:18:32 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 34ab5b75-b8b9-4017-a2ef-5a28031c29a1 does not exist
Dec  2 06:18:32 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev b409f630-fefd-45f8-ae93-0a5557bd6334 does not exist
Dec  2 06:18:32 np0005542249 nova_compute[254900]: 2025-12-02 11:18:32.931 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:33 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:18:33 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:18:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Dec  2 06:18:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Dec  2 06:18:33 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Dec  2 06:18:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1054: 321 pgs: 321 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 5.5 KiB/s wr, 102 op/s
Dec  2 06:18:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:18:34 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2695381213' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:18:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:18:34 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2695381213' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:18:34 np0005542249 nova_compute[254900]: 2025-12-02 11:18:34.364 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:34 np0005542249 nova_compute[254900]: 2025-12-02 11:18:34.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:18:34 np0005542249 nova_compute[254900]: 2025-12-02 11:18:34.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:18:34 np0005542249 nova_compute[254900]: 2025-12-02 11:18:34.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:18:34 np0005542249 nova_compute[254900]: 2025-12-02 11:18:34.382 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 06:18:35 np0005542249 nova_compute[254900]: 2025-12-02 11:18:35.383 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:18:35 np0005542249 nova_compute[254900]: 2025-12-02 11:18:35.383 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 06:18:35 np0005542249 nova_compute[254900]: 2025-12-02 11:18:35.384 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 06:18:35 np0005542249 nova_compute[254900]: 2025-12-02 11:18:35.408 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 06:18:35 np0005542249 nova_compute[254900]: 2025-12-02 11:18:35.409 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:18:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:18:35 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4077215233' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:18:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:18:35 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4077215233' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:18:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1055: 321 pgs: 321 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 4.0 KiB/s wr, 65 op/s
Dec  2 06:18:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:18:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:18:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:18:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:18:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  2 06:18:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:18:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.087256625643029e-07 of space, bias 1.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:18:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:18:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  2 06:18:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:18:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  2 06:18:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:18:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:18:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:18:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:18:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:18:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:18:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:18:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:18:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:18:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:18:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:18:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:18:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:18:36 np0005542249 nova_compute[254900]: 2025-12-02 11:18:36.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:18:36 np0005542249 nova_compute[254900]: 2025-12-02 11:18:36.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:18:36 np0005542249 nova_compute[254900]: 2025-12-02 11:18:36.383 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:18:36 np0005542249 nova_compute[254900]: 2025-12-02 11:18:36.411 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:18:36 np0005542249 nova_compute[254900]: 2025-12-02 11:18:36.412 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:18:36 np0005542249 nova_compute[254900]: 2025-12-02 11:18:36.412 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:18:36 np0005542249 nova_compute[254900]: 2025-12-02 11:18:36.412 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:18:36 np0005542249 nova_compute[254900]: 2025-12-02 11:18:36.413 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:18:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:18:36 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3101873277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:18:36 np0005542249 nova_compute[254900]: 2025-12-02 11:18:36.900 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:18:36 np0005542249 nova_compute[254900]: 2025-12-02 11:18:36.963 254904 DEBUG oslo_concurrency.lockutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Acquiring lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:18:36 np0005542249 nova_compute[254900]: 2025-12-02 11:18:36.964 254904 DEBUG oslo_concurrency.lockutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:18:36 np0005542249 nova_compute[254900]: 2025-12-02 11:18:36.979 254904 DEBUG nova.compute.manager [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.046 254904 DEBUG oslo_concurrency.lockutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.047 254904 DEBUG oslo_concurrency.lockutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.057 254904 DEBUG nova.virt.hardware [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.057 254904 INFO nova.compute.claims [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 06:18:37 np0005542249 podman[265682]: 2025-12-02 11:18:37.08093068 +0000 UTC m=+0.143624725 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.148 254904 DEBUG oslo_concurrency.processutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.189 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.191 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4692MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.191 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:18:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:18:37 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2682553263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.631 254904 DEBUG oslo_concurrency.processutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.638 254904 DEBUG nova.compute.provider_tree [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.659 254904 DEBUG nova.scheduler.client.report [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.681 254904 DEBUG oslo_concurrency.lockutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.681 254904 DEBUG nova.compute.manager [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.684 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.493s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.738 254904 DEBUG nova.compute.manager [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.739 254904 DEBUG nova.network.neutron [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.768 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Instance 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.769 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.769 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.773 254904 INFO nova.virt.libvirt.driver [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.795 254904 DEBUG nova.compute.manager [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.827 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:18:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1056: 321 pgs: 321 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 4.8 KiB/s wr, 99 op/s
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.911 254904 DEBUG nova.compute.manager [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.914 254904 DEBUG nova.virt.libvirt.driver [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.915 254904 INFO nova.virt.libvirt.driver [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Creating image(s)#033[00m
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.946 254904 DEBUG nova.storage.rbd_utils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] rbd image 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:18:37 np0005542249 nova_compute[254900]: 2025-12-02 11:18:37.987 254904 DEBUG nova.storage.rbd_utils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] rbd image 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:18:38 np0005542249 nova_compute[254900]: 2025-12-02 11:18:38.010 254904 DEBUG nova.storage.rbd_utils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] rbd image 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:18:38 np0005542249 nova_compute[254900]: 2025-12-02 11:18:38.014 254904 DEBUG oslo_concurrency.processutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:18:38 np0005542249 nova_compute[254900]: 2025-12-02 11:18:38.036 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:38 np0005542249 nova_compute[254900]: 2025-12-02 11:18:38.043 254904 DEBUG nova.policy [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '382055aacd254f8bb9b170628992619d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4beaae6889da4e57bb304963bae13143', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 06:18:38 np0005542249 nova_compute[254900]: 2025-12-02 11:18:38.084 254904 DEBUG oslo_concurrency.processutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:18:38 np0005542249 nova_compute[254900]: 2025-12-02 11:18:38.085 254904 DEBUG oslo_concurrency.lockutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Acquiring lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:18:38 np0005542249 nova_compute[254900]: 2025-12-02 11:18:38.086 254904 DEBUG oslo_concurrency.lockutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:18:38 np0005542249 nova_compute[254900]: 2025-12-02 11:18:38.086 254904 DEBUG oslo_concurrency.lockutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:18:38 np0005542249 nova_compute[254900]: 2025-12-02 11:18:38.113 254904 DEBUG nova.storage.rbd_utils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] rbd image 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:18:38 np0005542249 nova_compute[254900]: 2025-12-02 11:18:38.119 254904 DEBUG oslo_concurrency.processutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:18:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:18:38 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2340317340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:18:38 np0005542249 nova_compute[254900]: 2025-12-02 11:18:38.310 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:18:38 np0005542249 nova_compute[254900]: 2025-12-02 11:18:38.320 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:18:38 np0005542249 nova_compute[254900]: 2025-12-02 11:18:38.339 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:18:38 np0005542249 nova_compute[254900]: 2025-12-02 11:18:38.341 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 06:18:38 np0005542249 nova_compute[254900]: 2025-12-02 11:18:38.342 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:18:38 np0005542249 nova_compute[254900]: 2025-12-02 11:18:38.421 254904 DEBUG oslo_concurrency.processutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:18:38 np0005542249 nova_compute[254900]: 2025-12-02 11:18:38.493 254904 DEBUG nova.storage.rbd_utils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] resizing rbd image 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  2 06:18:38 np0005542249 nova_compute[254900]: 2025-12-02 11:18:38.621 254904 DEBUG nova.objects.instance [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lazy-loading 'migration_context' on Instance uuid 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:18:38 np0005542249 nova_compute[254900]: 2025-12-02 11:18:38.641 254904 DEBUG nova.virt.libvirt.driver [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  2 06:18:38 np0005542249 nova_compute[254900]: 2025-12-02 11:18:38.642 254904 DEBUG nova.virt.libvirt.driver [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Ensure instance console log exists: /var/lib/nova/instances/0be3aa38-c2a1-4a78-b4a6-6419c4c9265b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 06:18:38 np0005542249 nova_compute[254900]: 2025-12-02 11:18:38.643 254904 DEBUG oslo_concurrency.lockutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:18:38 np0005542249 nova_compute[254900]: 2025-12-02 11:18:38.643 254904 DEBUG oslo_concurrency.lockutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:18:38 np0005542249 nova_compute[254900]: 2025-12-02 11:18:38.644 254904 DEBUG oslo_concurrency.lockutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:18:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:18:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4259627065' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:18:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:18:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4259627065' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:18:39 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  2 06:18:39 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.0 total, 600.0 interval
Cumulative writes: 4835 writes, 21K keys, 4835 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
Cumulative WAL: 4835 writes, 4835 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1550 writes, 7259 keys, 1550 commit groups, 1.0 writes per commit group, ingest: 9.67 MB, 0.02 MB/s
Interval WAL: 1550 writes, 1550 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    117.9      0.20              0.08        12    0.016       0      0       0.0       0.0
  L6      1/0    7.26 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    157.9    130.9      0.59              0.28        11    0.054     48K   5702       0.0       0.0
 Sum      1/0    7.26 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3    118.4    127.7      0.78              0.36        23    0.034     48K   5702       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.8    135.9    136.1      0.38              0.19        12    0.032     28K   3517       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    157.9    130.9      0.59              0.28        11    0.054     48K   5702       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    119.4      0.19              0.08        11    0.018       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.7      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1800.0 total, 600.0 interval
Flush(GB): cumulative 0.023, interval 0.009
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.09 GB read, 0.05 MB/s read, 0.8 seconds
Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.4 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560e2b4e71f0#2 capacity: 304.00 MB usage: 8.72 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.00019 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(558,8.32 MB,2.73625%) FilterBlock(24,142.61 KB,0.0458115%) IndexBlock(24,266.58 KB,0.0856349%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Dec  2 06:18:39 np0005542249 nova_compute[254900]: 2025-12-02 11:18:39.367 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1057: 321 pgs: 321 active+clean; 59 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 809 KiB/s wr, 137 op/s
Dec  2 06:18:39 np0005542249 nova_compute[254900]: 2025-12-02 11:18:39.912 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:39 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:39.913 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:23:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:25:d0:a1:75:ed'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:18:39 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:39.914 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 06:18:40 np0005542249 podman[265918]: 2025-12-02 11:18:40.014886867 +0000 UTC m=+0.088385961 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:18:40 np0005542249 nova_compute[254900]: 2025-12-02 11:18:40.026 254904 DEBUG nova.network.neutron [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Successfully created port: 95d81470-bd8d-43d3-bb01-caedf88641cf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 06:18:40 np0005542249 nova_compute[254900]: 2025-12-02 11:18:40.341 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:18:41 np0005542249 nova_compute[254900]: 2025-12-02 11:18:41.151 254904 DEBUG nova.network.neutron [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Successfully updated port: 95d81470-bd8d-43d3-bb01-caedf88641cf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 06:18:41 np0005542249 nova_compute[254900]: 2025-12-02 11:18:41.172 254904 DEBUG oslo_concurrency.lockutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Acquiring lock "refresh_cache-0be3aa38-c2a1-4a78-b4a6-6419c4c9265b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:18:41 np0005542249 nova_compute[254900]: 2025-12-02 11:18:41.172 254904 DEBUG oslo_concurrency.lockutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Acquired lock "refresh_cache-0be3aa38-c2a1-4a78-b4a6-6419c4c9265b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:18:41 np0005542249 nova_compute[254900]: 2025-12-02 11:18:41.172 254904 DEBUG nova.network.neutron [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 06:18:41 np0005542249 nova_compute[254900]: 2025-12-02 11:18:41.260 254904 DEBUG nova.compute.manager [req-c7ec7f5a-cdb0-4c07-ab57-796d28c7033b req-c8c6a546-9fc1-4a75-8685-d8b949800cf9 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Received event network-changed-95d81470-bd8d-43d3-bb01-caedf88641cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:18:41 np0005542249 nova_compute[254900]: 2025-12-02 11:18:41.261 254904 DEBUG nova.compute.manager [req-c7ec7f5a-cdb0-4c07-ab57-796d28c7033b req-c8c6a546-9fc1-4a75-8685-d8b949800cf9 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Refreshing instance network info cache due to event network-changed-95d81470-bd8d-43d3-bb01-caedf88641cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:18:41 np0005542249 nova_compute[254900]: 2025-12-02 11:18:41.261 254904 DEBUG oslo_concurrency.lockutils [req-c7ec7f5a-cdb0-4c07-ab57-796d28c7033b req-c8c6a546-9fc1-4a75-8685-d8b949800cf9 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-0be3aa38-c2a1-4a78-b4a6-6419c4c9265b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:18:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:18:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Dec  2 06:18:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Dec  2 06:18:41 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Dec  2 06:18:41 np0005542249 nova_compute[254900]: 2025-12-02 11:18:41.370 254904 DEBUG nova.network.neutron [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 06:18:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1059: 321 pgs: 321 active+clean; 59 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 852 KiB/s wr, 113 op/s
Dec  2 06:18:42 np0005542249 nova_compute[254900]: 2025-12-02 11:18:42.680 254904 DEBUG nova.network.neutron [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Updating instance_info_cache with network_info: [{"id": "95d81470-bd8d-43d3-bb01-caedf88641cf", "address": "fa:16:3e:96:52:83", "network": {"id": "1468b032-015a-4fb8-a7c5-2a3b8aab9149", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-555435799-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4beaae6889da4e57bb304963bae13143", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d81470-bd", "ovs_interfaceid": "95d81470-bd8d-43d3-bb01-caedf88641cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:18:42 np0005542249 nova_compute[254900]: 2025-12-02 11:18:42.696 254904 DEBUG oslo_concurrency.lockutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Releasing lock "refresh_cache-0be3aa38-c2a1-4a78-b4a6-6419c4c9265b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:18:42 np0005542249 nova_compute[254900]: 2025-12-02 11:18:42.697 254904 DEBUG nova.compute.manager [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Instance network_info: |[{"id": "95d81470-bd8d-43d3-bb01-caedf88641cf", "address": "fa:16:3e:96:52:83", "network": {"id": "1468b032-015a-4fb8-a7c5-2a3b8aab9149", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-555435799-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4beaae6889da4e57bb304963bae13143", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d81470-bd", "ovs_interfaceid": "95d81470-bd8d-43d3-bb01-caedf88641cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 06:18:42 np0005542249 nova_compute[254900]: 2025-12-02 11:18:42.697 254904 DEBUG oslo_concurrency.lockutils [req-c7ec7f5a-cdb0-4c07-ab57-796d28c7033b req-c8c6a546-9fc1-4a75-8685-d8b949800cf9 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-0be3aa38-c2a1-4a78-b4a6-6419c4c9265b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:18:42 np0005542249 nova_compute[254900]: 2025-12-02 11:18:42.697 254904 DEBUG nova.network.neutron [req-c7ec7f5a-cdb0-4c07-ab57-796d28c7033b req-c8c6a546-9fc1-4a75-8685-d8b949800cf9 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Refreshing network info cache for port 95d81470-bd8d-43d3-bb01-caedf88641cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:18:42 np0005542249 nova_compute[254900]: 2025-12-02 11:18:42.701 254904 DEBUG nova.virt.libvirt.driver [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Start _get_guest_xml network_info=[{"id": "95d81470-bd8d-43d3-bb01-caedf88641cf", "address": "fa:16:3e:96:52:83", "network": {"id": "1468b032-015a-4fb8-a7c5-2a3b8aab9149", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-555435799-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4beaae6889da4e57bb304963bae13143", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d81470-bd", "ovs_interfaceid": "95d81470-bd8d-43d3-bb01-caedf88641cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'image_id': '5a40f66c-ab43-47dd-9880-e59f9fa2c60e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:18:42 np0005542249 nova_compute[254900]: 2025-12-02 11:18:42.705 254904 WARNING nova.virt.libvirt.driver [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:18:42 np0005542249 nova_compute[254900]: 2025-12-02 11:18:42.713 254904 DEBUG nova.virt.libvirt.host [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:18:42 np0005542249 nova_compute[254900]: 2025-12-02 11:18:42.715 254904 DEBUG nova.virt.libvirt.host [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:18:42 np0005542249 nova_compute[254900]: 2025-12-02 11:18:42.727 254904 DEBUG nova.virt.libvirt.host [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:18:42 np0005542249 nova_compute[254900]: 2025-12-02 11:18:42.729 254904 DEBUG nova.virt.libvirt.host [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:18:42 np0005542249 nova_compute[254900]: 2025-12-02 11:18:42.729 254904 DEBUG nova.virt.libvirt.driver [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:18:42 np0005542249 nova_compute[254900]: 2025-12-02 11:18:42.730 254904 DEBUG nova.virt.hardware [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:18:42 np0005542249 nova_compute[254900]: 2025-12-02 11:18:42.731 254904 DEBUG nova.virt.hardware [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:18:42 np0005542249 nova_compute[254900]: 2025-12-02 11:18:42.731 254904 DEBUG nova.virt.hardware [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:18:42 np0005542249 nova_compute[254900]: 2025-12-02 11:18:42.732 254904 DEBUG nova.virt.hardware [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:18:42 np0005542249 nova_compute[254900]: 2025-12-02 11:18:42.732 254904 DEBUG nova.virt.hardware [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:18:42 np0005542249 nova_compute[254900]: 2025-12-02 11:18:42.733 254904 DEBUG nova.virt.hardware [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:18:42 np0005542249 nova_compute[254900]: 2025-12-02 11:18:42.733 254904 DEBUG nova.virt.hardware [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:18:42 np0005542249 nova_compute[254900]: 2025-12-02 11:18:42.734 254904 DEBUG nova.virt.hardware [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:18:42 np0005542249 nova_compute[254900]: 2025-12-02 11:18:42.734 254904 DEBUG nova.virt.hardware [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:18:42 np0005542249 nova_compute[254900]: 2025-12-02 11:18:42.735 254904 DEBUG nova.virt.hardware [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:18:42 np0005542249 nova_compute[254900]: 2025-12-02 11:18:42.735 254904 DEBUG nova.virt.hardware [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:18:42 np0005542249 nova_compute[254900]: 2025-12-02 11:18:42.740 254904 DEBUG oslo_concurrency.processutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:18:42 np0005542249 nova_compute[254900]: 2025-12-02 11:18:42.935 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:18:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/877795890' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.216 254904 DEBUG oslo_concurrency.processutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.239 254904 DEBUG nova.storage.rbd_utils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] rbd image 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.243 254904 DEBUG oslo_concurrency.processutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:18:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:18:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/562729078' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.758 254904 DEBUG oslo_concurrency.processutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.762 254904 DEBUG nova.virt.libvirt.vif [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:18:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-255975398',display_name='tempest-VolumesBackupsTest-instance-255975398',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-255975398',id=3,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7ALaS8a6YL91a/itIYR4cly6HNACMjRHEWzvpVEhpc0YBZz1dD0JHDPFPnsmqWmV9KRSmZUnLAesBYY9To+P6cyry60GCNg2HErU9iMkHgrLFWbj9JiXso/bTphdKgaQ==',key_name='tempest-keypair-818472794',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4beaae6889da4e57bb304963bae13143',ramdisk_id='',reservation_id='r-bk43r8uy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-458528599',owner_user_name='tempest-VolumesBackupsTest-458528599-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:18:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='382055aacd254f8bb9b170628992619d',uuid=0be3aa38-c2a1-4a78-b4a6-6419c4c9265b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "95d81470-bd8d-43d3-bb01-caedf88641cf", "address": "fa:16:3e:96:52:83", "network": {"id": "1468b032-015a-4fb8-a7c5-2a3b8aab9149", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-555435799-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4beaae6889da4e57bb304963bae13143", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d81470-bd", "ovs_interfaceid": "95d81470-bd8d-43d3-bb01-caedf88641cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.762 254904 DEBUG nova.network.os_vif_util [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Converting VIF {"id": "95d81470-bd8d-43d3-bb01-caedf88641cf", "address": "fa:16:3e:96:52:83", "network": {"id": "1468b032-015a-4fb8-a7c5-2a3b8aab9149", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-555435799-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4beaae6889da4e57bb304963bae13143", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d81470-bd", "ovs_interfaceid": "95d81470-bd8d-43d3-bb01-caedf88641cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.764 254904 DEBUG nova.network.os_vif_util [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:52:83,bridge_name='br-int',has_traffic_filtering=True,id=95d81470-bd8d-43d3-bb01-caedf88641cf,network=Network(1468b032-015a-4fb8-a7c5-2a3b8aab9149),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d81470-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.766 254904 DEBUG nova.objects.instance [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.789 254904 DEBUG nova.virt.libvirt.driver [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:18:43 np0005542249 nova_compute[254900]:  <uuid>0be3aa38-c2a1-4a78-b4a6-6419c4c9265b</uuid>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:  <name>instance-00000003</name>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <nova:name>tempest-VolumesBackupsTest-instance-255975398</nova:name>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:18:42</nova:creationTime>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:18:43 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:        <nova:user uuid="382055aacd254f8bb9b170628992619d">tempest-VolumesBackupsTest-458528599-project-member</nova:user>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:        <nova:project uuid="4beaae6889da4e57bb304963bae13143">tempest-VolumesBackupsTest-458528599</nova:project>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <nova:root type="image" uuid="5a40f66c-ab43-47dd-9880-e59f9fa2c60e"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:        <nova:port uuid="95d81470-bd8d-43d3-bb01-caedf88641cf">
Dec  2 06:18:43 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <entry name="serial">0be3aa38-c2a1-4a78-b4a6-6419c4c9265b</entry>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <entry name="uuid">0be3aa38-c2a1-4a78-b4a6-6419c4c9265b</entry>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/0be3aa38-c2a1-4a78-b4a6-6419c4c9265b_disk">
Dec  2 06:18:43 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:18:43 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/0be3aa38-c2a1-4a78-b4a6-6419c4c9265b_disk.config">
Dec  2 06:18:43 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:18:43 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:96:52:83"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <target dev="tap95d81470-bd"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/0be3aa38-c2a1-4a78-b4a6-6419c4c9265b/console.log" append="off"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:18:43 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:18:43 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:18:43 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:18:43 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.791 254904 DEBUG nova.compute.manager [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Preparing to wait for external event network-vif-plugged-95d81470-bd8d-43d3-bb01-caedf88641cf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.792 254904 DEBUG oslo_concurrency.lockutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Acquiring lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.793 254904 DEBUG oslo_concurrency.lockutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.793 254904 DEBUG oslo_concurrency.lockutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.795 254904 DEBUG nova.virt.libvirt.vif [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:18:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-255975398',display_name='tempest-VolumesBackupsTest-instance-255975398',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-255975398',id=3,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7ALaS8a6YL91a/itIYR4cly6HNACMjRHEWzvpVEhpc0YBZz1dD0JHDPFPnsmqWmV9KRSmZUnLAesBYY9To+P6cyry60GCNg2HErU9iMkHgrLFWbj9JiXso/bTphdKgaQ==',key_name='tempest-keypair-818472794',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4beaae6889da4e57bb304963bae13143',ramdisk_id='',reservation_id='r-bk43r8uy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-458528599',owner_user_name='tempest-VolumesBackupsTest-458528599-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:18:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='382055aacd254f8bb9b170628992619d',uuid=0be3aa38-c2a1-4a78-b4a6-6419c4c9265b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "95d81470-bd8d-43d3-bb01-caedf88641cf", "address": "fa:16:3e:96:52:83", "network": {"id": "1468b032-015a-4fb8-a7c5-2a3b8aab9149", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-555435799-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4beaae6889da4e57bb304963bae13143", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d81470-bd", "ovs_interfaceid": "95d81470-bd8d-43d3-bb01-caedf88641cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.795 254904 DEBUG nova.network.os_vif_util [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Converting VIF {"id": "95d81470-bd8d-43d3-bb01-caedf88641cf", "address": "fa:16:3e:96:52:83", "network": {"id": "1468b032-015a-4fb8-a7c5-2a3b8aab9149", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-555435799-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4beaae6889da4e57bb304963bae13143", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d81470-bd", "ovs_interfaceid": "95d81470-bd8d-43d3-bb01-caedf88641cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.796 254904 DEBUG nova.network.os_vif_util [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:52:83,bridge_name='br-int',has_traffic_filtering=True,id=95d81470-bd8d-43d3-bb01-caedf88641cf,network=Network(1468b032-015a-4fb8-a7c5-2a3b8aab9149),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d81470-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.797 254904 DEBUG os_vif [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:52:83,bridge_name='br-int',has_traffic_filtering=True,id=95d81470-bd8d-43d3-bb01-caedf88641cf,network=Network(1468b032-015a-4fb8-a7c5-2a3b8aab9149),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d81470-bd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.798 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.800 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.801 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.805 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.806 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95d81470-bd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.807 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap95d81470-bd, col_values=(('external_ids', {'iface-id': '95d81470-bd8d-43d3-bb01-caedf88641cf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:96:52:83', 'vm-uuid': '0be3aa38-c2a1-4a78-b4a6-6419c4c9265b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.809 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:43 np0005542249 NetworkManager[48987]: <info>  [1764674323.8109] manager: (tap95d81470-bd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.812 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.820 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.822 254904 INFO os_vif [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:52:83,bridge_name='br-int',has_traffic_filtering=True,id=95d81470-bd8d-43d3-bb01-caedf88641cf,network=Network(1468b032-015a-4fb8-a7c5-2a3b8aab9149),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d81470-bd')#033[00m
Dec  2 06:18:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1060: 321 pgs: 321 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 2.1 MiB/s wr, 113 op/s
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.883 254904 DEBUG nova.virt.libvirt.driver [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.884 254904 DEBUG nova.virt.libvirt.driver [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.884 254904 DEBUG nova.virt.libvirt.driver [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] No VIF found with MAC fa:16:3e:96:52:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.885 254904 INFO nova.virt.libvirt.driver [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Using config drive#033[00m
Dec  2 06:18:43 np0005542249 nova_compute[254900]: 2025-12-02 11:18:43.908 254904 DEBUG nova.storage.rbd_utils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] rbd image 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:18:44 np0005542249 nova_compute[254900]: 2025-12-02 11:18:44.301 254904 DEBUG nova.network.neutron [req-c7ec7f5a-cdb0-4c07-ab57-796d28c7033b req-c8c6a546-9fc1-4a75-8685-d8b949800cf9 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Updated VIF entry in instance network info cache for port 95d81470-bd8d-43d3-bb01-caedf88641cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:18:44 np0005542249 nova_compute[254900]: 2025-12-02 11:18:44.302 254904 DEBUG nova.network.neutron [req-c7ec7f5a-cdb0-4c07-ab57-796d28c7033b req-c8c6a546-9fc1-4a75-8685-d8b949800cf9 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Updating instance_info_cache with network_info: [{"id": "95d81470-bd8d-43d3-bb01-caedf88641cf", "address": "fa:16:3e:96:52:83", "network": {"id": "1468b032-015a-4fb8-a7c5-2a3b8aab9149", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-555435799-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4beaae6889da4e57bb304963bae13143", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d81470-bd", "ovs_interfaceid": "95d81470-bd8d-43d3-bb01-caedf88641cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:18:44 np0005542249 nova_compute[254900]: 2025-12-02 11:18:44.306 254904 INFO nova.virt.libvirt.driver [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Creating config drive at /var/lib/nova/instances/0be3aa38-c2a1-4a78-b4a6-6419c4c9265b/disk.config#033[00m
Dec  2 06:18:44 np0005542249 nova_compute[254900]: 2025-12-02 11:18:44.312 254904 DEBUG oslo_concurrency.processutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0be3aa38-c2a1-4a78-b4a6-6419c4c9265b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5shhkkms execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:18:44 np0005542249 nova_compute[254900]: 2025-12-02 11:18:44.340 254904 DEBUG oslo_concurrency.lockutils [req-c7ec7f5a-cdb0-4c07-ab57-796d28c7033b req-c8c6a546-9fc1-4a75-8685-d8b949800cf9 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-0be3aa38-c2a1-4a78-b4a6-6419c4c9265b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:18:44 np0005542249 nova_compute[254900]: 2025-12-02 11:18:44.455 254904 DEBUG oslo_concurrency.processutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0be3aa38-c2a1-4a78-b4a6-6419c4c9265b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5shhkkms" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:18:44 np0005542249 nova_compute[254900]: 2025-12-02 11:18:44.481 254904 DEBUG nova.storage.rbd_utils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] rbd image 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:18:44 np0005542249 nova_compute[254900]: 2025-12-02 11:18:44.486 254904 DEBUG oslo_concurrency.processutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0be3aa38-c2a1-4a78-b4a6-6419c4c9265b/disk.config 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:18:44 np0005542249 nova_compute[254900]: 2025-12-02 11:18:44.666 254904 DEBUG oslo_concurrency.processutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0be3aa38-c2a1-4a78-b4a6-6419c4c9265b/disk.config 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:18:44 np0005542249 nova_compute[254900]: 2025-12-02 11:18:44.668 254904 INFO nova.virt.libvirt.driver [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Deleting local config drive /var/lib/nova/instances/0be3aa38-c2a1-4a78-b4a6-6419c4c9265b/disk.config because it was imported into RBD.#033[00m
Dec  2 06:18:44 np0005542249 kernel: tap95d81470-bd: entered promiscuous mode
Dec  2 06:18:44 np0005542249 NetworkManager[48987]: <info>  [1764674324.7478] manager: (tap95d81470-bd): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Dec  2 06:18:44 np0005542249 ovn_controller[153849]: 2025-12-02T11:18:44Z|00044|binding|INFO|Claiming lport 95d81470-bd8d-43d3-bb01-caedf88641cf for this chassis.
Dec  2 06:18:44 np0005542249 ovn_controller[153849]: 2025-12-02T11:18:44Z|00045|binding|INFO|95d81470-bd8d-43d3-bb01-caedf88641cf: Claiming fa:16:3e:96:52:83 10.100.0.3
Dec  2 06:18:44 np0005542249 nova_compute[254900]: 2025-12-02 11:18:44.783 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:44 np0005542249 nova_compute[254900]: 2025-12-02 11:18:44.795 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:44 np0005542249 systemd-udevd[266069]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:18:44 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:44.803 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:52:83 10.100.0.3'], port_security=['fa:16:3e:96:52:83 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0be3aa38-c2a1-4a78-b4a6-6419c4c9265b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1468b032-015a-4fb8-a7c5-2a3b8aab9149', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4beaae6889da4e57bb304963bae13143', 'neutron:revision_number': '2', 'neutron:security_group_ids': '07845268-044d-4ee5-b3b9-deaeea1124eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b54a1343-251a-464a-be0b-78e322a858d0, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=95d81470-bd8d-43d3-bb01-caedf88641cf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:18:44 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:44.805 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 95d81470-bd8d-43d3-bb01-caedf88641cf in datapath 1468b032-015a-4fb8-a7c5-2a3b8aab9149 bound to our chassis#033[00m
Dec  2 06:18:44 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:44.806 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1468b032-015a-4fb8-a7c5-2a3b8aab9149#033[00m
Dec  2 06:18:44 np0005542249 NetworkManager[48987]: <info>  [1764674324.8274] device (tap95d81470-bd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:18:44 np0005542249 NetworkManager[48987]: <info>  [1764674324.8288] device (tap95d81470-bd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:18:44 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:44.825 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[cd372091-08e5-4fad-a8ec-901d2d324220]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:18:44 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:44.826 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1468b032-01 in ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 06:18:44 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:44.829 262398 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1468b032-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 06:18:44 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:44.829 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[49fffd78-3a21-404c-a903-aee0eaf1f212]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:18:44 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:44.830 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[96a12153-32a4-46d1-8edd-124b051d07c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:18:44 np0005542249 systemd-machined[216222]: New machine qemu-3-instance-00000003.
Dec  2 06:18:44 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:44.854 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[80dd45de-00ab-47ef-b4d9-062116830504]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:18:44 np0005542249 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Dec  2 06:18:44 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:44.894 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[19329588-7be3-4bb5-bf71-d7d2b8d6ba7e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:18:44 np0005542249 ovn_controller[153849]: 2025-12-02T11:18:44Z|00046|binding|INFO|Setting lport 95d81470-bd8d-43d3-bb01-caedf88641cf ovn-installed in OVS
Dec  2 06:18:44 np0005542249 ovn_controller[153849]: 2025-12-02T11:18:44Z|00047|binding|INFO|Setting lport 95d81470-bd8d-43d3-bb01-caedf88641cf up in Southbound
Dec  2 06:18:44 np0005542249 nova_compute[254900]: 2025-12-02 11:18:44.909 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:44 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:44.945 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[4cd57040-d56b-43b4-bb70-bdd3366aa0a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:18:44 np0005542249 NetworkManager[48987]: <info>  [1764674324.9523] manager: (tap1468b032-00): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Dec  2 06:18:44 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:44.951 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[5997e801-f379-4ccf-bc7b-7266aa305fdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:18:44 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:44.995 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[1d496d79-d5eb-469d-8b5a-b651235df9e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:18:44 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:44.998 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[ade80f59-1967-4195-9d7b-ce774fb084db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:18:45 np0005542249 NetworkManager[48987]: <info>  [1764674325.0281] device (tap1468b032-00): carrier: link connected
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:45.036 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[ea41e373-e4ac-420c-bed4-68c2c8029ca2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:45.064 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[6709bb8e-7692-4e25-871a-ffb2da84f75c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1468b032-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:08:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451785, 'reachable_time': 20515, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266105, 'error': None, 'target': 'ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:45.089 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[72b4486f-5313-4acf-8336-ad990a98756e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef3:83e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451785, 'tstamp': 451785}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266107, 'error': None, 'target': 'ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:45.108 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb78c5c-04ce-4b86-8ea2-5d1388ca7f8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1468b032-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:08:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451785, 'reachable_time': 20515, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266108, 'error': None, 'target': 'ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:45.153 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[f973d4dd-341c-4a79-bca8-69fd7284a1a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.233 254904 DEBUG nova.compute.manager [req-cd0c3a3c-27bc-471b-ad74-58e356f013ae req-c4813f66-0ba0-4911-bc5c-db090548f9e3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Received event network-vif-plugged-95d81470-bd8d-43d3-bb01-caedf88641cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.234 254904 DEBUG oslo_concurrency.lockutils [req-cd0c3a3c-27bc-471b-ad74-58e356f013ae req-c4813f66-0ba0-4911-bc5c-db090548f9e3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.235 254904 DEBUG oslo_concurrency.lockutils [req-cd0c3a3c-27bc-471b-ad74-58e356f013ae req-c4813f66-0ba0-4911-bc5c-db090548f9e3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.236 254904 DEBUG oslo_concurrency.lockutils [req-cd0c3a3c-27bc-471b-ad74-58e356f013ae req-c4813f66-0ba0-4911-bc5c-db090548f9e3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.237 254904 DEBUG nova.compute.manager [req-cd0c3a3c-27bc-471b-ad74-58e356f013ae req-c4813f66-0ba0-4911-bc5c-db090548f9e3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Processing event network-vif-plugged-95d81470-bd8d-43d3-bb01-caedf88641cf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:45.244 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[e1fb70b7-658c-4a77-a7ee-2f789a0b4b5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:45.246 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1468b032-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:45.247 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:45.247 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1468b032-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.249 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:45 np0005542249 kernel: tap1468b032-00: entered promiscuous mode
Dec  2 06:18:45 np0005542249 NetworkManager[48987]: <info>  [1764674325.2518] manager: (tap1468b032-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:45.253 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1468b032-00, col_values=(('external_ids', {'iface-id': '25616577-ef53-4f7d-aa59-a58ea4a0e3b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:18:45 np0005542249 ovn_controller[153849]: 2025-12-02T11:18:45Z|00048|binding|INFO|Releasing lport 25616577-ef53-4f7d-aa59-a58ea4a0e3b2 from this chassis (sb_readonly=0)
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.256 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.276 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:45.278 163757 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1468b032-015a-4fb8-a7c5-2a3b8aab9149.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1468b032-015a-4fb8-a7c5-2a3b8aab9149.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:45.280 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[d36e9ff0-ecb3-4f2c-a471-5e888dee53cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:45.280 163757 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]: global
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]:    log         /dev/log local0 debug
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]:    log-tag     haproxy-metadata-proxy-1468b032-015a-4fb8-a7c5-2a3b8aab9149
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]:    user        root
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]:    group       root
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]:    maxconn     1024
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]:    pidfile     /var/lib/neutron/external/pids/1468b032-015a-4fb8-a7c5-2a3b8aab9149.pid.haproxy
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]:    daemon
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]: defaults
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]:    log global
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]:    mode http
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]:    option httplog
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]:    option dontlognull
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]:    option http-server-close
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]:    option forwardfor
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]:    retries                 3
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]:    timeout http-request    30s
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]:    timeout connect         30s
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]:    timeout client          32s
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]:    timeout server          32s
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]:    timeout http-keep-alive 30s
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]: listen listener
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]:    bind 169.254.169.254:80
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]:    http-request add-header X-OVN-Network-ID 1468b032-015a-4fb8-a7c5-2a3b8aab9149
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 06:18:45 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:45.281 163757 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149', 'env', 'PROCESS_TAG=haproxy-1468b032-015a-4fb8-a7c5-2a3b8aab9149', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1468b032-015a-4fb8-a7c5-2a3b8aab9149.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.415 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674325.4146717, 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.417 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] VM Started (Lifecycle Event)#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.421 254904 DEBUG nova.compute.manager [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.427 254904 DEBUG nova.virt.libvirt.driver [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.432 254904 INFO nova.virt.libvirt.driver [-] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Instance spawned successfully.#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.433 254904 DEBUG nova.virt.libvirt.driver [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.454 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.463 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.469 254904 DEBUG nova.virt.libvirt.driver [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.470 254904 DEBUG nova.virt.libvirt.driver [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.471 254904 DEBUG nova.virt.libvirt.driver [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.472 254904 DEBUG nova.virt.libvirt.driver [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.473 254904 DEBUG nova.virt.libvirt.driver [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.473 254904 DEBUG nova.virt.libvirt.driver [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.488 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.489 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674325.4149554, 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.489 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] VM Paused (Lifecycle Event)#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.517 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.523 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674325.4267545, 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.523 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] VM Resumed (Lifecycle Event)#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.550 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.556 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.561 254904 INFO nova.compute.manager [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Took 7.65 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.562 254904 DEBUG nova.compute.manager [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.573 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.626 254904 INFO nova.compute.manager [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Took 8.61 seconds to build instance.#033[00m
Dec  2 06:18:45 np0005542249 nova_compute[254900]: 2025-12-02 11:18:45.645 254904 DEBUG oslo_concurrency.lockutils [None req-dc894dc3-b428-4de7-8f7c-d34ae78938c8 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:18:45 np0005542249 podman[266182]: 2025-12-02 11:18:45.802908654 +0000 UTC m=+0.090383035 container create 3d89b7e21c6d77eb6710b0823de160a2e7e5270ae0b71d4b64c92d5bc554dcd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  2 06:18:45 np0005542249 podman[266182]: 2025-12-02 11:18:45.75951299 +0000 UTC m=+0.046987411 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:18:45 np0005542249 systemd[1]: Started libpod-conmon-3d89b7e21c6d77eb6710b0823de160a2e7e5270ae0b71d4b64c92d5bc554dcd4.scope.
Dec  2 06:18:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1061: 321 pgs: 321 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Dec  2 06:18:45 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:18:45 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d9a3f6c85b83586a4076be285f5ef3a41047f61b6bc01a263e2040f638172c3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 06:18:45 np0005542249 podman[266182]: 2025-12-02 11:18:45.914117661 +0000 UTC m=+0.201592022 container init 3d89b7e21c6d77eb6710b0823de160a2e7e5270ae0b71d4b64c92d5bc554dcd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  2 06:18:45 np0005542249 podman[266182]: 2025-12-02 11:18:45.920661758 +0000 UTC m=+0.208136109 container start 3d89b7e21c6d77eb6710b0823de160a2e7e5270ae0b71d4b64c92d5bc554dcd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  2 06:18:45 np0005542249 neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149[266198]: [NOTICE]   (266202) : New worker (266204) forked
Dec  2 06:18:45 np0005542249 neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149[266198]: [NOTICE]   (266202) : Loading success.
Dec  2 06:18:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:18:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:18:46 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1082825842' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:18:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:18:46 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1082825842' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:18:47 np0005542249 nova_compute[254900]: 2025-12-02 11:18:47.324 254904 DEBUG nova.compute.manager [req-82c6acfd-b97d-4799-acbb-8a1e1096ef9b req-c6bbd49a-5116-43de-8015-0b1f8413ceea 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Received event network-vif-plugged-95d81470-bd8d-43d3-bb01-caedf88641cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:18:47 np0005542249 nova_compute[254900]: 2025-12-02 11:18:47.326 254904 DEBUG oslo_concurrency.lockutils [req-82c6acfd-b97d-4799-acbb-8a1e1096ef9b req-c6bbd49a-5116-43de-8015-0b1f8413ceea 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:18:47 np0005542249 nova_compute[254900]: 2025-12-02 11:18:47.327 254904 DEBUG oslo_concurrency.lockutils [req-82c6acfd-b97d-4799-acbb-8a1e1096ef9b req-c6bbd49a-5116-43de-8015-0b1f8413ceea 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:18:47 np0005542249 nova_compute[254900]: 2025-12-02 11:18:47.327 254904 DEBUG oslo_concurrency.lockutils [req-82c6acfd-b97d-4799-acbb-8a1e1096ef9b req-c6bbd49a-5116-43de-8015-0b1f8413ceea 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:18:47 np0005542249 nova_compute[254900]: 2025-12-02 11:18:47.327 254904 DEBUG nova.compute.manager [req-82c6acfd-b97d-4799-acbb-8a1e1096ef9b req-c6bbd49a-5116-43de-8015-0b1f8413ceea 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] No waiting events found dispatching network-vif-plugged-95d81470-bd8d-43d3-bb01-caedf88641cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:18:47 np0005542249 nova_compute[254900]: 2025-12-02 11:18:47.328 254904 WARNING nova.compute.manager [req-82c6acfd-b97d-4799-acbb-8a1e1096ef9b req-c6bbd49a-5116-43de-8015-0b1f8413ceea 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Received unexpected event network-vif-plugged-95d81470-bd8d-43d3-bb01-caedf88641cf for instance with vm_state active and task_state None.#033[00m
Dec  2 06:18:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1062: 321 pgs: 321 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 142 op/s
Dec  2 06:18:47 np0005542249 nova_compute[254900]: 2025-12-02 11:18:47.938 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:48 np0005542249 nova_compute[254900]: 2025-12-02 11:18:48.811 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:48 np0005542249 ovn_controller[153849]: 2025-12-02T11:18:48Z|00049|binding|INFO|Releasing lport 25616577-ef53-4f7d-aa59-a58ea4a0e3b2 from this chassis (sb_readonly=0)
Dec  2 06:18:48 np0005542249 nova_compute[254900]: 2025-12-02 11:18:48.955 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:48 np0005542249 NetworkManager[48987]: <info>  [1764674328.9564] manager: (patch-br-int-to-provnet-236defd8-cd9f-40fb-be9d-c3108d5fe981): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Dec  2 06:18:48 np0005542249 NetworkManager[48987]: <info>  [1764674328.9581] manager: (patch-provnet-236defd8-cd9f-40fb-be9d-c3108d5fe981-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Dec  2 06:18:49 np0005542249 nova_compute[254900]: 2025-12-02 11:18:49.025 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:49 np0005542249 ovn_controller[153849]: 2025-12-02T11:18:49Z|00050|binding|INFO|Releasing lport 25616577-ef53-4f7d-aa59-a58ea4a0e3b2 from this chassis (sb_readonly=0)
Dec  2 06:18:49 np0005542249 nova_compute[254900]: 2025-12-02 11:18:49.033 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:49 np0005542249 nova_compute[254900]: 2025-12-02 11:18:49.217 254904 DEBUG nova.compute.manager [req-e3fed05e-c715-4d4e-af9b-1c53dc4f8ed6 req-551b91ab-b88b-4804-a4c4-d92a8c067d06 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Received event network-changed-95d81470-bd8d-43d3-bb01-caedf88641cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:18:49 np0005542249 nova_compute[254900]: 2025-12-02 11:18:49.219 254904 DEBUG nova.compute.manager [req-e3fed05e-c715-4d4e-af9b-1c53dc4f8ed6 req-551b91ab-b88b-4804-a4c4-d92a8c067d06 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Refreshing instance network info cache due to event network-changed-95d81470-bd8d-43d3-bb01-caedf88641cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:18:49 np0005542249 nova_compute[254900]: 2025-12-02 11:18:49.220 254904 DEBUG oslo_concurrency.lockutils [req-e3fed05e-c715-4d4e-af9b-1c53dc4f8ed6 req-551b91ab-b88b-4804-a4c4-d92a8c067d06 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-0be3aa38-c2a1-4a78-b4a6-6419c4c9265b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:18:49 np0005542249 nova_compute[254900]: 2025-12-02 11:18:49.220 254904 DEBUG oslo_concurrency.lockutils [req-e3fed05e-c715-4d4e-af9b-1c53dc4f8ed6 req-551b91ab-b88b-4804-a4c4-d92a8c067d06 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-0be3aa38-c2a1-4a78-b4a6-6419c4c9265b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:18:49 np0005542249 nova_compute[254900]: 2025-12-02 11:18:49.220 254904 DEBUG nova.network.neutron [req-e3fed05e-c715-4d4e-af9b-1c53dc4f8ed6 req-551b91ab-b88b-4804-a4c4-d92a8c067d06 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Refreshing network info cache for port 95d81470-bd8d-43d3-bb01-caedf88641cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:18:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1063: 321 pgs: 321 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.5 MiB/s wr, 125 op/s
Dec  2 06:18:49 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:18:49.917 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:18:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:18:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1350543035' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:18:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:18:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1350543035' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:18:50 np0005542249 nova_compute[254900]: 2025-12-02 11:18:50.413 254904 DEBUG nova.network.neutron [req-e3fed05e-c715-4d4e-af9b-1c53dc4f8ed6 req-551b91ab-b88b-4804-a4c4-d92a8c067d06 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Updated VIF entry in instance network info cache for port 95d81470-bd8d-43d3-bb01-caedf88641cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:18:50 np0005542249 nova_compute[254900]: 2025-12-02 11:18:50.414 254904 DEBUG nova.network.neutron [req-e3fed05e-c715-4d4e-af9b-1c53dc4f8ed6 req-551b91ab-b88b-4804-a4c4-d92a8c067d06 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Updating instance_info_cache with network_info: [{"id": "95d81470-bd8d-43d3-bb01-caedf88641cf", "address": "fa:16:3e:96:52:83", "network": {"id": "1468b032-015a-4fb8-a7c5-2a3b8aab9149", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-555435799-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4beaae6889da4e57bb304963bae13143", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d81470-bd", "ovs_interfaceid": "95d81470-bd8d-43d3-bb01-caedf88641cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:18:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:18:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4143096282' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:18:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:18:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4143096282' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:18:50 np0005542249 nova_compute[254900]: 2025-12-02 11:18:50.432 254904 DEBUG oslo_concurrency.lockutils [req-e3fed05e-c715-4d4e-af9b-1c53dc4f8ed6 req-551b91ab-b88b-4804-a4c4-d92a8c067d06 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-0be3aa38-c2a1-4a78-b4a6-6419c4c9265b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:18:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:18:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2294880310' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:18:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:18:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2294880310' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:18:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:18:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1064: 321 pgs: 321 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 119 op/s
Dec  2 06:18:52 np0005542249 nova_compute[254900]: 2025-12-02 11:18:52.940 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:18:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/468616133' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:18:53 np0005542249 nova_compute[254900]: 2025-12-02 11:18:53.816 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:18:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/468616133' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:18:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1065: 321 pgs: 321 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 126 op/s
Dec  2 06:18:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1066: 321 pgs: 321 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 118 op/s
Dec  2 06:18:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:18:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:18:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:18:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:18:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:18:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:18:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:18:57 np0005542249 ovn_controller[153849]: 2025-12-02T11:18:57Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:96:52:83 10.100.0.3
Dec  2 06:18:57 np0005542249 ovn_controller[153849]: 2025-12-02T11:18:57Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:96:52:83 10.100.0.3
Dec  2 06:18:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1067: 321 pgs: 321 active+clean; 109 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 155 op/s
Dec  2 06:18:57 np0005542249 nova_compute[254900]: 2025-12-02 11:18:57.944 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:58 np0005542249 podman[266214]: 2025-12-02 11:18:58.058801911 +0000 UTC m=+0.126217885 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  2 06:18:58 np0005542249 nova_compute[254900]: 2025-12-02 11:18:58.819 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:18:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1068: 321 pgs: 321 active+clean; 110 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 775 KiB/s rd, 2.0 MiB/s wr, 111 op/s
Dec  2 06:19:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:19:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1069: 321 pgs: 321 active+clean; 110 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 243 KiB/s rd, 2.0 MiB/s wr, 93 op/s
Dec  2 06:19:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:19:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3860982456' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:19:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:19:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3860982456' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:19:02 np0005542249 nova_compute[254900]: 2025-12-02 11:19:02.948 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:03 np0005542249 nova_compute[254900]: 2025-12-02 11:19:03.820 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:03 np0005542249 nova_compute[254900]: 2025-12-02 11:19:03.822 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1070: 321 pgs: 321 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 293 KiB/s rd, 2.1 MiB/s wr, 106 op/s
Dec  2 06:19:05 np0005542249 nova_compute[254900]: 2025-12-02 11:19:05.050 254904 DEBUG oslo_concurrency.lockutils [None req-ddeb054a-16ea-45b3-a7c0-384245b2f5b3 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Acquiring lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:05 np0005542249 nova_compute[254900]: 2025-12-02 11:19:05.050 254904 DEBUG oslo_concurrency.lockutils [None req-ddeb054a-16ea-45b3-a7c0-384245b2f5b3 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:05 np0005542249 nova_compute[254900]: 2025-12-02 11:19:05.069 254904 DEBUG nova.objects.instance [None req-ddeb054a-16ea-45b3-a7c0-384245b2f5b3 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lazy-loading 'flavor' on Instance uuid 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:19:05 np0005542249 nova_compute[254900]: 2025-12-02 11:19:05.091 254904 INFO nova.virt.libvirt.driver [None req-ddeb054a-16ea-45b3-a7c0-384245b2f5b3 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Ignoring supplied device name: /dev/vdb#033[00m
Dec  2 06:19:05 np0005542249 nova_compute[254900]: 2025-12-02 11:19:05.109 254904 DEBUG oslo_concurrency.lockutils [None req-ddeb054a-16ea-45b3-a7c0-384245b2f5b3 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:05 np0005542249 nova_compute[254900]: 2025-12-02 11:19:05.346 254904 DEBUG oslo_concurrency.lockutils [None req-ddeb054a-16ea-45b3-a7c0-384245b2f5b3 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Acquiring lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:05 np0005542249 nova_compute[254900]: 2025-12-02 11:19:05.347 254904 DEBUG oslo_concurrency.lockutils [None req-ddeb054a-16ea-45b3-a7c0-384245b2f5b3 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:05 np0005542249 nova_compute[254900]: 2025-12-02 11:19:05.348 254904 INFO nova.compute.manager [None req-ddeb054a-16ea-45b3-a7c0-384245b2f5b3 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Attaching volume 313d459f-0d14-43c5-8158-edc3bad815d0 to /dev/vdb#033[00m
Dec  2 06:19:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:19:05 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1338693962' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:19:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:19:05 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1338693962' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:19:05 np0005542249 nova_compute[254900]: 2025-12-02 11:19:05.545 254904 DEBUG os_brick.utils [None req-ddeb054a-16ea-45b3-a7c0-384245b2f5b3 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:19:05 np0005542249 nova_compute[254900]: 2025-12-02 11:19:05.548 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:19:05 np0005542249 nova_compute[254900]: 2025-12-02 11:19:05.568 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:19:05 np0005542249 nova_compute[254900]: 2025-12-02 11:19:05.569 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[1c244df4-782a-4111-b15f-01f8b3d8778a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:05 np0005542249 nova_compute[254900]: 2025-12-02 11:19:05.571 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:19:05 np0005542249 nova_compute[254900]: 2025-12-02 11:19:05.584 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:19:05 np0005542249 nova_compute[254900]: 2025-12-02 11:19:05.584 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[03091a8a-d2cd-46f9-8cfb-cdf5c77cbf1c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:05 np0005542249 nova_compute[254900]: 2025-12-02 11:19:05.587 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:19:05 np0005542249 nova_compute[254900]: 2025-12-02 11:19:05.602 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:19:05 np0005542249 nova_compute[254900]: 2025-12-02 11:19:05.602 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[817a6519-7be0-4b9b-8542-198ba46a35ef]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:05 np0005542249 nova_compute[254900]: 2025-12-02 11:19:05.604 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[6c5037aa-e2af-4c45-adcb-e06fb38813fc]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:05 np0005542249 nova_compute[254900]: 2025-12-02 11:19:05.605 254904 DEBUG oslo_concurrency.processutils [None req-ddeb054a-16ea-45b3-a7c0-384245b2f5b3 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:19:05 np0005542249 nova_compute[254900]: 2025-12-02 11:19:05.639 254904 DEBUG oslo_concurrency.processutils [None req-ddeb054a-16ea-45b3-a7c0-384245b2f5b3 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:19:05 np0005542249 nova_compute[254900]: 2025-12-02 11:19:05.644 254904 DEBUG os_brick.initiator.connectors.lightos [None req-ddeb054a-16ea-45b3-a7c0-384245b2f5b3 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:19:05 np0005542249 nova_compute[254900]: 2025-12-02 11:19:05.644 254904 DEBUG os_brick.initiator.connectors.lightos [None req-ddeb054a-16ea-45b3-a7c0-384245b2f5b3 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:19:05 np0005542249 nova_compute[254900]: 2025-12-02 11:19:05.645 254904 DEBUG os_brick.initiator.connectors.lightos [None req-ddeb054a-16ea-45b3-a7c0-384245b2f5b3 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:19:05 np0005542249 nova_compute[254900]: 2025-12-02 11:19:05.646 254904 DEBUG os_brick.utils [None req-ddeb054a-16ea-45b3-a7c0-384245b2f5b3 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] <== get_connector_properties: return (99ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:19:05 np0005542249 nova_compute[254900]: 2025-12-02 11:19:05.646 254904 DEBUG nova.virt.block_device [None req-ddeb054a-16ea-45b3-a7c0-384245b2f5b3 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Updating existing volume attachment record: e2ff9af2-df7d-4388-b97b-feacab519517 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:19:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1071: 321 pgs: 321 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 236 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Dec  2 06:19:06 np0005542249 nova_compute[254900]: 2025-12-02 11:19:06.206 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:19:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/656348672' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:19:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:19:06 np0005542249 nova_compute[254900]: 2025-12-02 11:19:06.373 254904 DEBUG nova.objects.instance [None req-ddeb054a-16ea-45b3-a7c0-384245b2f5b3 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lazy-loading 'flavor' on Instance uuid 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:19:06 np0005542249 nova_compute[254900]: 2025-12-02 11:19:06.403 254904 DEBUG nova.virt.libvirt.driver [None req-ddeb054a-16ea-45b3-a7c0-384245b2f5b3 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Attempting to attach volume 313d459f-0d14-43c5-8158-edc3bad815d0 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Dec  2 06:19:06 np0005542249 nova_compute[254900]: 2025-12-02 11:19:06.409 254904 DEBUG nova.virt.libvirt.guest [None req-ddeb054a-16ea-45b3-a7c0-384245b2f5b3 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] attach device xml: <disk type="network" device="disk">
Dec  2 06:19:06 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:19:06 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-313d459f-0d14-43c5-8158-edc3bad815d0">
Dec  2 06:19:06 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:19:06 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:19:06 np0005542249 nova_compute[254900]:  <auth username="openstack">
Dec  2 06:19:06 np0005542249 nova_compute[254900]:    <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:19:06 np0005542249 nova_compute[254900]:  </auth>
Dec  2 06:19:06 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:19:06 np0005542249 nova_compute[254900]:  <serial>313d459f-0d14-43c5-8158-edc3bad815d0</serial>
Dec  2 06:19:06 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:19:06 np0005542249 nova_compute[254900]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  2 06:19:06 np0005542249 nova_compute[254900]: 2025-12-02 11:19:06.566 254904 DEBUG nova.virt.libvirt.driver [None req-ddeb054a-16ea-45b3-a7c0-384245b2f5b3 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:19:06 np0005542249 nova_compute[254900]: 2025-12-02 11:19:06.567 254904 DEBUG nova.virt.libvirt.driver [None req-ddeb054a-16ea-45b3-a7c0-384245b2f5b3 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:19:06 np0005542249 nova_compute[254900]: 2025-12-02 11:19:06.568 254904 DEBUG nova.virt.libvirt.driver [None req-ddeb054a-16ea-45b3-a7c0-384245b2f5b3 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:19:06 np0005542249 nova_compute[254900]: 2025-12-02 11:19:06.568 254904 DEBUG nova.virt.libvirt.driver [None req-ddeb054a-16ea-45b3-a7c0-384245b2f5b3 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] No VIF found with MAC fa:16:3e:96:52:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:19:07 np0005542249 nova_compute[254900]: 2025-12-02 11:19:07.052 254904 DEBUG oslo_concurrency.lockutils [None req-ddeb054a-16ea-45b3-a7c0-384245b2f5b3 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1072: 321 pgs: 321 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 240 KiB/s rd, 2.1 MiB/s wr, 102 op/s
Dec  2 06:19:07 np0005542249 nova_compute[254900]: 2025-12-02 11:19:07.951 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:08 np0005542249 podman[266263]: 2025-12-02 11:19:08.04061986 +0000 UTC m=+0.122063299 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  2 06:19:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:19:08 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/911776329' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:19:08 np0005542249 nova_compute[254900]: 2025-12-02 11:19:08.824 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Dec  2 06:19:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Dec  2 06:19:09 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Dec  2 06:19:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1074: 321 pgs: 321 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 122 KiB/s wr, 50 op/s
Dec  2 06:19:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Dec  2 06:19:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Dec  2 06:19:10 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Dec  2 06:19:11 np0005542249 podman[266289]: 2025-12-02 11:19:11.011215604 +0000 UTC m=+0.079834429 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:19:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Dec  2 06:19:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Dec  2 06:19:11 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Dec  2 06:19:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:19:11 np0005542249 nova_compute[254900]: 2025-12-02 11:19:11.825 254904 DEBUG oslo_concurrency.lockutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Acquiring lock "b65d7234-010a-4dd0-b06c-58dc26cfb5a6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:11 np0005542249 nova_compute[254900]: 2025-12-02 11:19:11.826 254904 DEBUG oslo_concurrency.lockutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "b65d7234-010a-4dd0-b06c-58dc26cfb5a6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:11 np0005542249 nova_compute[254900]: 2025-12-02 11:19:11.845 254904 DEBUG nova.compute.manager [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 06:19:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1077: 321 pgs: 321 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 27 KiB/s wr, 35 op/s
Dec  2 06:19:11 np0005542249 nova_compute[254900]: 2025-12-02 11:19:11.921 254904 DEBUG oslo_concurrency.lockutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:11 np0005542249 nova_compute[254900]: 2025-12-02 11:19:11.922 254904 DEBUG oslo_concurrency.lockutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:11 np0005542249 nova_compute[254900]: 2025-12-02 11:19:11.933 254904 DEBUG nova.virt.hardware [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 06:19:11 np0005542249 nova_compute[254900]: 2025-12-02 11:19:11.934 254904 INFO nova.compute.claims [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 06:19:12 np0005542249 nova_compute[254900]: 2025-12-02 11:19:12.083 254904 DEBUG oslo_concurrency.processutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:19:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:19:12 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1758149857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:19:12 np0005542249 nova_compute[254900]: 2025-12-02 11:19:12.566 254904 DEBUG oslo_concurrency.processutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:19:12 np0005542249 nova_compute[254900]: 2025-12-02 11:19:12.576 254904 DEBUG nova.compute.provider_tree [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:19:12 np0005542249 nova_compute[254900]: 2025-12-02 11:19:12.599 254904 DEBUG nova.scheduler.client.report [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:19:12 np0005542249 nova_compute[254900]: 2025-12-02 11:19:12.627 254904 DEBUG oslo_concurrency.lockutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:12 np0005542249 nova_compute[254900]: 2025-12-02 11:19:12.628 254904 DEBUG nova.compute.manager [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 06:19:12 np0005542249 nova_compute[254900]: 2025-12-02 11:19:12.710 254904 DEBUG nova.compute.manager [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 06:19:12 np0005542249 nova_compute[254900]: 2025-12-02 11:19:12.710 254904 DEBUG nova.network.neutron [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 06:19:12 np0005542249 nova_compute[254900]: 2025-12-02 11:19:12.717 254904 DEBUG oslo_concurrency.lockutils [None req-05f8eebd-98ce-4045-b233-262d18e5abfc 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Acquiring lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:12 np0005542249 nova_compute[254900]: 2025-12-02 11:19:12.717 254904 DEBUG oslo_concurrency.lockutils [None req-05f8eebd-98ce-4045-b233-262d18e5abfc 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:12 np0005542249 nova_compute[254900]: 2025-12-02 11:19:12.737 254904 INFO nova.compute.manager [None req-05f8eebd-98ce-4045-b233-262d18e5abfc 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Detaching volume 313d459f-0d14-43c5-8158-edc3bad815d0#033[00m
Dec  2 06:19:12 np0005542249 nova_compute[254900]: 2025-12-02 11:19:12.744 254904 INFO nova.virt.libvirt.driver [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 06:19:12 np0005542249 nova_compute[254900]: 2025-12-02 11:19:12.773 254904 DEBUG nova.compute.manager [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 06:19:12 np0005542249 nova_compute[254900]: 2025-12-02 11:19:12.894 254904 DEBUG nova.compute.manager [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 06:19:12 np0005542249 nova_compute[254900]: 2025-12-02 11:19:12.897 254904 DEBUG nova.virt.libvirt.driver [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 06:19:12 np0005542249 nova_compute[254900]: 2025-12-02 11:19:12.898 254904 INFO nova.virt.libvirt.driver [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Creating image(s)#033[00m
Dec  2 06:19:12 np0005542249 nova_compute[254900]: 2025-12-02 11:19:12.934 254904 DEBUG nova.storage.rbd_utils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] rbd image b65d7234-010a-4dd0-b06c-58dc26cfb5a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:19:12 np0005542249 nova_compute[254900]: 2025-12-02 11:19:12.975 254904 DEBUG nova.storage.rbd_utils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] rbd image b65d7234-010a-4dd0-b06c-58dc26cfb5a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.013 254904 DEBUG nova.storage.rbd_utils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] rbd image b65d7234-010a-4dd0-b06c-58dc26cfb5a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.020 254904 DEBUG oslo_concurrency.processutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.047 254904 INFO nova.virt.block_device [None req-05f8eebd-98ce-4045-b233-262d18e5abfc 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Attempting to driver detach volume 313d459f-0d14-43c5-8158-edc3bad815d0 from mountpoint /dev/vdb#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.053 254904 DEBUG nova.policy [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '505334fe5eb749e19ba727c8b2d04594', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0587e0fe146043ba857b2d8002ab0a3b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.057 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.075 254904 DEBUG nova.virt.libvirt.driver [None req-05f8eebd-98ce-4045-b233-262d18e5abfc 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Attempting to detach device vdb from instance 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.077 254904 DEBUG nova.virt.libvirt.guest [None req-05f8eebd-98ce-4045-b233-262d18e5abfc 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] detach device xml: <disk type="network" device="disk">
Dec  2 06:19:13 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:19:13 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-313d459f-0d14-43c5-8158-edc3bad815d0">
Dec  2 06:19:13 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:19:13 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:19:13 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:19:13 np0005542249 nova_compute[254900]:  <serial>313d459f-0d14-43c5-8158-edc3bad815d0</serial>
Dec  2 06:19:13 np0005542249 nova_compute[254900]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  2 06:19:13 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:19:13 np0005542249 nova_compute[254900]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.089 254904 INFO nova.virt.libvirt.driver [None req-05f8eebd-98ce-4045-b233-262d18e5abfc 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Successfully detached device vdb from instance 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b from the persistent domain config.#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.090 254904 DEBUG nova.virt.libvirt.driver [None req-05f8eebd-98ce-4045-b233-262d18e5abfc 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.091 254904 DEBUG nova.virt.libvirt.guest [None req-05f8eebd-98ce-4045-b233-262d18e5abfc 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] detach device xml: <disk type="network" device="disk">
Dec  2 06:19:13 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:19:13 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-313d459f-0d14-43c5-8158-edc3bad815d0">
Dec  2 06:19:13 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:19:13 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:19:13 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:19:13 np0005542249 nova_compute[254900]:  <serial>313d459f-0d14-43c5-8158-edc3bad815d0</serial>
Dec  2 06:19:13 np0005542249 nova_compute[254900]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  2 06:19:13 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:19:13 np0005542249 nova_compute[254900]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.095 254904 DEBUG oslo_concurrency.processutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.096 254904 DEBUG oslo_concurrency.lockutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Acquiring lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.097 254904 DEBUG oslo_concurrency.lockutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.097 254904 DEBUG oslo_concurrency.lockutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.118 254904 DEBUG nova.storage.rbd_utils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] rbd image b65d7234-010a-4dd0-b06c-58dc26cfb5a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.122 254904 DEBUG oslo_concurrency.processutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 b65d7234-010a-4dd0-b06c-58dc26cfb5a6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.216 254904 DEBUG nova.virt.libvirt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Received event <DeviceRemovedEvent: 1764674353.2155733, 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.219 254904 DEBUG nova.virt.libvirt.driver [None req-05f8eebd-98ce-4045-b233-262d18e5abfc 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.225 254904 INFO nova.virt.libvirt.driver [None req-05f8eebd-98ce-4045-b233-262d18e5abfc 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Successfully detached device vdb from instance 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b from the live domain config.#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.402 254904 DEBUG oslo_concurrency.processutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 b65d7234-010a-4dd0-b06c-58dc26cfb5a6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.498 254904 DEBUG nova.objects.instance [None req-05f8eebd-98ce-4045-b233-262d18e5abfc 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lazy-loading 'flavor' on Instance uuid 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.507 254904 DEBUG nova.storage.rbd_utils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] resizing rbd image b65d7234-010a-4dd0-b06c-58dc26cfb5a6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.577 254904 DEBUG oslo_concurrency.lockutils [None req-05f8eebd-98ce-4045-b233-262d18e5abfc 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.624 254904 DEBUG nova.network.neutron [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Successfully created port: d1278df6-eced-4cb3-82f8-ebf07482ac43 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.638 254904 DEBUG nova.objects.instance [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lazy-loading 'migration_context' on Instance uuid b65d7234-010a-4dd0-b06c-58dc26cfb5a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.652 254904 DEBUG nova.virt.libvirt.driver [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.653 254904 DEBUG nova.virt.libvirt.driver [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Ensure instance console log exists: /var/lib/nova/instances/b65d7234-010a-4dd0-b06c-58dc26cfb5a6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.653 254904 DEBUG oslo_concurrency.lockutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.654 254904 DEBUG oslo_concurrency.lockutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.654 254904 DEBUG oslo_concurrency.lockutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:13 np0005542249 nova_compute[254900]: 2025-12-02 11:19:13.827 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1078: 321 pgs: 321 active+clean; 121 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 6.3 KiB/s wr, 67 op/s
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.041 254904 DEBUG oslo_concurrency.lockutils [None req-e729866e-de22-417b-88ce-f400ee9001a1 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Acquiring lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.042 254904 DEBUG oslo_concurrency.lockutils [None req-e729866e-de22-417b-88ce-f400ee9001a1 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.042 254904 DEBUG oslo_concurrency.lockutils [None req-e729866e-de22-417b-88ce-f400ee9001a1 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Acquiring lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.043 254904 DEBUG oslo_concurrency.lockutils [None req-e729866e-de22-417b-88ce-f400ee9001a1 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.043 254904 DEBUG oslo_concurrency.lockutils [None req-e729866e-de22-417b-88ce-f400ee9001a1 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.044 254904 INFO nova.compute.manager [None req-e729866e-de22-417b-88ce-f400ee9001a1 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Terminating instance#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.046 254904 DEBUG nova.compute.manager [None req-e729866e-de22-417b-88ce-f400ee9001a1 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:19:14 np0005542249 kernel: tap95d81470-bd (unregistering): left promiscuous mode
Dec  2 06:19:14 np0005542249 NetworkManager[48987]: <info>  [1764674354.1057] device (tap95d81470-bd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:19:14 np0005542249 ovn_controller[153849]: 2025-12-02T11:19:14Z|00051|binding|INFO|Releasing lport 95d81470-bd8d-43d3-bb01-caedf88641cf from this chassis (sb_readonly=0)
Dec  2 06:19:14 np0005542249 ovn_controller[153849]: 2025-12-02T11:19:14Z|00052|binding|INFO|Setting lport 95d81470-bd8d-43d3-bb01-caedf88641cf down in Southbound
Dec  2 06:19:14 np0005542249 ovn_controller[153849]: 2025-12-02T11:19:14Z|00053|binding|INFO|Removing iface tap95d81470-bd ovn-installed in OVS
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.117 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.120 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:14 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:14.134 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:52:83 10.100.0.3'], port_security=['fa:16:3e:96:52:83 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0be3aa38-c2a1-4a78-b4a6-6419c4c9265b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1468b032-015a-4fb8-a7c5-2a3b8aab9149', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4beaae6889da4e57bb304963bae13143', 'neutron:revision_number': '4', 'neutron:security_group_ids': '07845268-044d-4ee5-b3b9-deaeea1124eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.238'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b54a1343-251a-464a-be0b-78e322a858d0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=95d81470-bd8d-43d3-bb01-caedf88641cf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:19:14 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:14.137 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 95d81470-bd8d-43d3-bb01-caedf88641cf in datapath 1468b032-015a-4fb8-a7c5-2a3b8aab9149 unbound from our chassis#033[00m
Dec  2 06:19:14 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:14.139 163757 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1468b032-015a-4fb8-a7c5-2a3b8aab9149, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 06:19:14 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:14.140 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[86cf3f75-0069-4114-b992-91258ae32543]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:14 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:14.141 163757 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149 namespace which is not needed anymore#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.150 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:14 np0005542249 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Dec  2 06:19:14 np0005542249 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 14.666s CPU time.
Dec  2 06:19:14 np0005542249 systemd-machined[216222]: Machine qemu-3-instance-00000003 terminated.
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.299 254904 INFO nova.virt.libvirt.driver [-] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Instance destroyed successfully.#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.300 254904 DEBUG nova.objects.instance [None req-e729866e-de22-417b-88ce-f400ee9001a1 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lazy-loading 'resources' on Instance uuid 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:19:14 np0005542249 neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149[266198]: [NOTICE]   (266202) : haproxy version is 2.8.14-c23fe91
Dec  2 06:19:14 np0005542249 neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149[266198]: [NOTICE]   (266202) : path to executable is /usr/sbin/haproxy
Dec  2 06:19:14 np0005542249 neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149[266198]: [WARNING]  (266202) : Exiting Master process...
Dec  2 06:19:14 np0005542249 neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149[266198]: [ALERT]    (266202) : Current worker (266204) exited with code 143 (Terminated)
Dec  2 06:19:14 np0005542249 neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149[266198]: [WARNING]  (266202) : All workers exited. Exiting... (0)
Dec  2 06:19:14 np0005542249 systemd[1]: libpod-3d89b7e21c6d77eb6710b0823de160a2e7e5270ae0b71d4b64c92d5bc554dcd4.scope: Deactivated successfully.
Dec  2 06:19:14 np0005542249 podman[266524]: 2025-12-02 11:19:14.350557227 +0000 UTC m=+0.076105771 container died 3d89b7e21c6d77eb6710b0823de160a2e7e5270ae0b71d4b64c92d5bc554dcd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:19:14 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3d89b7e21c6d77eb6710b0823de160a2e7e5270ae0b71d4b64c92d5bc554dcd4-userdata-shm.mount: Deactivated successfully.
Dec  2 06:19:14 np0005542249 systemd[1]: var-lib-containers-storage-overlay-8d9a3f6c85b83586a4076be285f5ef3a41047f61b6bc01a263e2040f638172c3-merged.mount: Deactivated successfully.
Dec  2 06:19:14 np0005542249 podman[266524]: 2025-12-02 11:19:14.408844629 +0000 UTC m=+0.134393173 container cleanup 3d89b7e21c6d77eb6710b0823de160a2e7e5270ae0b71d4b64c92d5bc554dcd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  2 06:19:14 np0005542249 systemd[1]: libpod-conmon-3d89b7e21c6d77eb6710b0823de160a2e7e5270ae0b71d4b64c92d5bc554dcd4.scope: Deactivated successfully.
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.439 254904 DEBUG nova.virt.libvirt.vif [None req-e729866e-de22-417b-88ce-f400ee9001a1 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:18:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-255975398',display_name='tempest-VolumesBackupsTest-instance-255975398',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-255975398',id=3,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7ALaS8a6YL91a/itIYR4cly6HNACMjRHEWzvpVEhpc0YBZz1dD0JHDPFPnsmqWmV9KRSmZUnLAesBYY9To+P6cyry60GCNg2HErU9iMkHgrLFWbj9JiXso/bTphdKgaQ==',key_name='tempest-keypair-818472794',keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:18:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4beaae6889da4e57bb304963bae13143',ramdisk_id='',reservation_id='r-bk43r8uy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-458528599',owner_user_name='tempest-VolumesBackupsTest-458528599-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:18:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='382055aacd254f8bb9b170628992619d',uuid=0be3aa38-c2a1-4a78-b4a6-6419c4c9265b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "95d81470-bd8d-43d3-bb01-caedf88641cf", "address": "fa:16:3e:96:52:83", "network": {"id": "1468b032-015a-4fb8-a7c5-2a3b8aab9149", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-555435799-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4beaae6889da4e57bb304963bae13143", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d81470-bd", "ovs_interfaceid": "95d81470-bd8d-43d3-bb01-caedf88641cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.440 254904 DEBUG nova.network.os_vif_util [None req-e729866e-de22-417b-88ce-f400ee9001a1 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Converting VIF {"id": "95d81470-bd8d-43d3-bb01-caedf88641cf", "address": "fa:16:3e:96:52:83", "network": {"id": "1468b032-015a-4fb8-a7c5-2a3b8aab9149", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-555435799-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4beaae6889da4e57bb304963bae13143", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d81470-bd", "ovs_interfaceid": "95d81470-bd8d-43d3-bb01-caedf88641cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.441 254904 DEBUG nova.network.os_vif_util [None req-e729866e-de22-417b-88ce-f400ee9001a1 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:96:52:83,bridge_name='br-int',has_traffic_filtering=True,id=95d81470-bd8d-43d3-bb01-caedf88641cf,network=Network(1468b032-015a-4fb8-a7c5-2a3b8aab9149),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d81470-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.441 254904 DEBUG os_vif [None req-e729866e-de22-417b-88ce-f400ee9001a1 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:52:83,bridge_name='br-int',has_traffic_filtering=True,id=95d81470-bd8d-43d3-bb01-caedf88641cf,network=Network(1468b032-015a-4fb8-a7c5-2a3b8aab9149),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d81470-bd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.443 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.443 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95d81470-bd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.445 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.447 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.449 254904 INFO os_vif [None req-e729866e-de22-417b-88ce-f400ee9001a1 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:52:83,bridge_name='br-int',has_traffic_filtering=True,id=95d81470-bd8d-43d3-bb01-caedf88641cf,network=Network(1468b032-015a-4fb8-a7c5-2a3b8aab9149),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d81470-bd')#033[00m
Dec  2 06:19:14 np0005542249 podman[266567]: 2025-12-02 11:19:14.499075689 +0000 UTC m=+0.055437477 container remove 3d89b7e21c6d77eb6710b0823de160a2e7e5270ae0b71d4b64c92d5bc554dcd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 06:19:14 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:14.507 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[88c53b04-d5c6-4ab5-9cc8-ed6521db8400]: (4, ('Tue Dec  2 11:19:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149 (3d89b7e21c6d77eb6710b0823de160a2e7e5270ae0b71d4b64c92d5bc554dcd4)\n3d89b7e21c6d77eb6710b0823de160a2e7e5270ae0b71d4b64c92d5bc554dcd4\nTue Dec  2 11:19:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149 (3d89b7e21c6d77eb6710b0823de160a2e7e5270ae0b71d4b64c92d5bc554dcd4)\n3d89b7e21c6d77eb6710b0823de160a2e7e5270ae0b71d4b64c92d5bc554dcd4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:14 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:14.511 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[2f41b7aa-14f7-4502-a13b-b4242477b7e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:14 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:14.513 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1468b032-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:19:14 np0005542249 kernel: tap1468b032-00: left promiscuous mode
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.516 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.519 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:14 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:14.522 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[7a6b4d4d-93fa-420b-949e-249442c08809]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.535 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:14 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:14.536 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[eba20a18-0884-4ec6-a95e-c223c05de7b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:14 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:14.541 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[0b028186-ea99-4442-85c6-b1f0b0b4aaf1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:14 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:14.570 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[4110ac1b-59b5-4527-8166-d9f982652d41]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451775, 'reachable_time': 36232, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266600, 'error': None, 'target': 'ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:14 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:14.574 164036 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 06:19:14 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:14.574 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[79ec8c16-bac6-4c48-a272-2f012936307f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:14 np0005542249 systemd[1]: run-netns-ovnmeta\x2d1468b032\x2d015a\x2d4fb8\x2da7c5\x2d2a3b8aab9149.mount: Deactivated successfully.
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.670 254904 DEBUG nova.compute.manager [req-1b740f28-3ebf-4b6b-b49d-39e1749bad07 req-61285529-fccf-422e-9e60-e2ba61b24248 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Received event network-vif-unplugged-95d81470-bd8d-43d3-bb01-caedf88641cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.671 254904 DEBUG oslo_concurrency.lockutils [req-1b740f28-3ebf-4b6b-b49d-39e1749bad07 req-61285529-fccf-422e-9e60-e2ba61b24248 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.672 254904 DEBUG oslo_concurrency.lockutils [req-1b740f28-3ebf-4b6b-b49d-39e1749bad07 req-61285529-fccf-422e-9e60-e2ba61b24248 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.672 254904 DEBUG oslo_concurrency.lockutils [req-1b740f28-3ebf-4b6b-b49d-39e1749bad07 req-61285529-fccf-422e-9e60-e2ba61b24248 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.673 254904 DEBUG nova.compute.manager [req-1b740f28-3ebf-4b6b-b49d-39e1749bad07 req-61285529-fccf-422e-9e60-e2ba61b24248 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] No waiting events found dispatching network-vif-unplugged-95d81470-bd8d-43d3-bb01-caedf88641cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.673 254904 DEBUG nova.compute.manager [req-1b740f28-3ebf-4b6b-b49d-39e1749bad07 req-61285529-fccf-422e-9e60-e2ba61b24248 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Received event network-vif-unplugged-95d81470-bd8d-43d3-bb01-caedf88641cf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.847 254904 INFO nova.virt.libvirt.driver [None req-e729866e-de22-417b-88ce-f400ee9001a1 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Deleting instance files /var/lib/nova/instances/0be3aa38-c2a1-4a78-b4a6-6419c4c9265b_del#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.848 254904 INFO nova.virt.libvirt.driver [None req-e729866e-de22-417b-88ce-f400ee9001a1 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Deletion of /var/lib/nova/instances/0be3aa38-c2a1-4a78-b4a6-6419c4c9265b_del complete#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.902 254904 INFO nova.compute.manager [None req-e729866e-de22-417b-88ce-f400ee9001a1 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.903 254904 DEBUG oslo.service.loopingcall [None req-e729866e-de22-417b-88ce-f400ee9001a1 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.903 254904 DEBUG nova.compute.manager [-] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:19:14 np0005542249 nova_compute[254900]: 2025-12-02 11:19:14.904 254904 DEBUG nova.network.neutron [-] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:19:15 np0005542249 nova_compute[254900]: 2025-12-02 11:19:15.437 254904 DEBUG nova.network.neutron [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Successfully updated port: d1278df6-eced-4cb3-82f8-ebf07482ac43 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 06:19:15 np0005542249 nova_compute[254900]: 2025-12-02 11:19:15.461 254904 DEBUG oslo_concurrency.lockutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Acquiring lock "refresh_cache-b65d7234-010a-4dd0-b06c-58dc26cfb5a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:19:15 np0005542249 nova_compute[254900]: 2025-12-02 11:19:15.462 254904 DEBUG oslo_concurrency.lockutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Acquired lock "refresh_cache-b65d7234-010a-4dd0-b06c-58dc26cfb5a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:19:15 np0005542249 nova_compute[254900]: 2025-12-02 11:19:15.463 254904 DEBUG nova.network.neutron [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 06:19:15 np0005542249 nova_compute[254900]: 2025-12-02 11:19:15.553 254904 DEBUG nova.compute.manager [req-adb478b7-d8f2-4fdd-952e-426afcdc15c6 req-201cd1e2-c646-4b03-a867-0e12577483f9 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Received event network-changed-d1278df6-eced-4cb3-82f8-ebf07482ac43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:19:15 np0005542249 nova_compute[254900]: 2025-12-02 11:19:15.553 254904 DEBUG nova.compute.manager [req-adb478b7-d8f2-4fdd-952e-426afcdc15c6 req-201cd1e2-c646-4b03-a867-0e12577483f9 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Refreshing instance network info cache due to event network-changed-d1278df6-eced-4cb3-82f8-ebf07482ac43. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:19:15 np0005542249 nova_compute[254900]: 2025-12-02 11:19:15.554 254904 DEBUG oslo_concurrency.lockutils [req-adb478b7-d8f2-4fdd-952e-426afcdc15c6 req-201cd1e2-c646-4b03-a867-0e12577483f9 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-b65d7234-010a-4dd0-b06c-58dc26cfb5a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:19:15 np0005542249 nova_compute[254900]: 2025-12-02 11:19:15.702 254904 DEBUG nova.network.neutron [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 06:19:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1079: 321 pgs: 321 active+clean; 129 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 1.0 MiB/s wr, 64 op/s
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.175 254904 DEBUG nova.network.neutron [-] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.201 254904 INFO nova.compute.manager [-] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Took 1.30 seconds to deallocate network for instance.#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.269 254904 DEBUG oslo_concurrency.lockutils [None req-e729866e-de22-417b-88ce-f400ee9001a1 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.270 254904 DEBUG oslo_concurrency.lockutils [None req-e729866e-de22-417b-88ce-f400ee9001a1 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:19:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Dec  2 06:19:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Dec  2 06:19:16 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.352 254904 DEBUG oslo_concurrency.processutils [None req-e729866e-de22-417b-88ce-f400ee9001a1 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.758 254904 DEBUG nova.compute.manager [req-534eed57-68ef-48ff-aee4-4fe7e8a4e499 req-210767ae-dcd7-434d-a0b4-5e787df53b87 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Received event network-vif-plugged-95d81470-bd8d-43d3-bb01-caedf88641cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.759 254904 DEBUG oslo_concurrency.lockutils [req-534eed57-68ef-48ff-aee4-4fe7e8a4e499 req-210767ae-dcd7-434d-a0b4-5e787df53b87 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.759 254904 DEBUG oslo_concurrency.lockutils [req-534eed57-68ef-48ff-aee4-4fe7e8a4e499 req-210767ae-dcd7-434d-a0b4-5e787df53b87 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.759 254904 DEBUG oslo_concurrency.lockutils [req-534eed57-68ef-48ff-aee4-4fe7e8a4e499 req-210767ae-dcd7-434d-a0b4-5e787df53b87 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.760 254904 DEBUG nova.compute.manager [req-534eed57-68ef-48ff-aee4-4fe7e8a4e499 req-210767ae-dcd7-434d-a0b4-5e787df53b87 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] No waiting events found dispatching network-vif-plugged-95d81470-bd8d-43d3-bb01-caedf88641cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.760 254904 WARNING nova.compute.manager [req-534eed57-68ef-48ff-aee4-4fe7e8a4e499 req-210767ae-dcd7-434d-a0b4-5e787df53b87 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Received unexpected event network-vif-plugged-95d81470-bd8d-43d3-bb01-caedf88641cf for instance with vm_state deleted and task_state None.#033[00m
Dec  2 06:19:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:19:16 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2495143815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.805 254904 DEBUG oslo_concurrency.processutils [None req-e729866e-de22-417b-88ce-f400ee9001a1 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.813 254904 DEBUG nova.compute.provider_tree [None req-e729866e-de22-417b-88ce-f400ee9001a1 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.827 254904 DEBUG nova.scheduler.client.report [None req-e729866e-de22-417b-88ce-f400ee9001a1 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.851 254904 DEBUG oslo_concurrency.lockutils [None req-e729866e-de22-417b-88ce-f400ee9001a1 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.878 254904 INFO nova.scheduler.client.report [None req-e729866e-de22-417b-88ce-f400ee9001a1 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Deleted allocations for instance 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.938 254904 DEBUG nova.network.neutron [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Updating instance_info_cache with network_info: [{"id": "d1278df6-eced-4cb3-82f8-ebf07482ac43", "address": "fa:16:3e:38:9c:77", "network": {"id": "72c6dc0c-99bf-460e-aaba-86258fa10534", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1502229343-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0587e0fe146043ba857b2d8002ab0a3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1278df6-ec", "ovs_interfaceid": "d1278df6-eced-4cb3-82f8-ebf07482ac43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.946 254904 DEBUG oslo_concurrency.lockutils [None req-e729866e-de22-417b-88ce-f400ee9001a1 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "0be3aa38-c2a1-4a78-b4a6-6419c4c9265b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.961 254904 DEBUG oslo_concurrency.lockutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Releasing lock "refresh_cache-b65d7234-010a-4dd0-b06c-58dc26cfb5a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.962 254904 DEBUG nova.compute.manager [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Instance network_info: |[{"id": "d1278df6-eced-4cb3-82f8-ebf07482ac43", "address": "fa:16:3e:38:9c:77", "network": {"id": "72c6dc0c-99bf-460e-aaba-86258fa10534", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1502229343-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0587e0fe146043ba857b2d8002ab0a3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1278df6-ec", "ovs_interfaceid": "d1278df6-eced-4cb3-82f8-ebf07482ac43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.962 254904 DEBUG oslo_concurrency.lockutils [req-adb478b7-d8f2-4fdd-952e-426afcdc15c6 req-201cd1e2-c646-4b03-a867-0e12577483f9 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-b65d7234-010a-4dd0-b06c-58dc26cfb5a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.963 254904 DEBUG nova.network.neutron [req-adb478b7-d8f2-4fdd-952e-426afcdc15c6 req-201cd1e2-c646-4b03-a867-0e12577483f9 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Refreshing network info cache for port d1278df6-eced-4cb3-82f8-ebf07482ac43 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.965 254904 DEBUG nova.virt.libvirt.driver [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Start _get_guest_xml network_info=[{"id": "d1278df6-eced-4cb3-82f8-ebf07482ac43", "address": "fa:16:3e:38:9c:77", "network": {"id": "72c6dc0c-99bf-460e-aaba-86258fa10534", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1502229343-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0587e0fe146043ba857b2d8002ab0a3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1278df6-ec", "ovs_interfaceid": "d1278df6-eced-4cb3-82f8-ebf07482ac43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'image_id': '5a40f66c-ab43-47dd-9880-e59f9fa2c60e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.971 254904 WARNING nova.virt.libvirt.driver [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.979 254904 DEBUG nova.virt.libvirt.host [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.980 254904 DEBUG nova.virt.libvirt.host [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.983 254904 DEBUG nova.virt.libvirt.host [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.984 254904 DEBUG nova.virt.libvirt.host [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.984 254904 DEBUG nova.virt.libvirt.driver [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.985 254904 DEBUG nova.virt.hardware [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.985 254904 DEBUG nova.virt.hardware [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.985 254904 DEBUG nova.virt.hardware [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.986 254904 DEBUG nova.virt.hardware [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.986 254904 DEBUG nova.virt.hardware [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.986 254904 DEBUG nova.virt.hardware [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.986 254904 DEBUG nova.virt.hardware [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.986 254904 DEBUG nova.virt.hardware [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.987 254904 DEBUG nova.virt.hardware [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.987 254904 DEBUG nova.virt.hardware [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.987 254904 DEBUG nova.virt.hardware [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:19:16 np0005542249 nova_compute[254900]: 2025-12-02 11:19:16.990 254904 DEBUG oslo_concurrency.processutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:19:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:19:17 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2469012797' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.467 254904 DEBUG oslo_concurrency.processutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.500 254904 DEBUG nova.storage.rbd_utils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] rbd image b65d7234-010a-4dd0-b06c-58dc26cfb5a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.505 254904 DEBUG oslo_concurrency.processutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.663 254904 DEBUG nova.compute.manager [req-b471a65b-4008-481a-9b23-4ba76c5245c2 req-12bf6f21-a116-4438-adf0-7d9a8bdf20c7 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Received event network-vif-deleted-95d81470-bd8d-43d3-bb01-caedf88641cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:19:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1081: 321 pgs: 321 active+clean; 107 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 670 KiB/s rd, 2.7 MiB/s wr, 132 op/s
Dec  2 06:19:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:19:17 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1613173568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.934 254904 DEBUG oslo_concurrency.processutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.938 254904 DEBUG nova.virt.libvirt.vif [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:19:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-2046096500',display_name='tempest-VolumesActionsTest-instance-2046096500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-2046096500',id=4,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0587e0fe146043ba857b2d8002ab0a3b',ramdisk_id='',reservation_id='r-mz5zmhmf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1586928656',owner_user_name='tempest-VolumesActionsTest-1586928
656-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:19:12Z,user_data=None,user_id='505334fe5eb749e19ba727c8b2d04594',uuid=b65d7234-010a-4dd0-b06c-58dc26cfb5a6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d1278df6-eced-4cb3-82f8-ebf07482ac43", "address": "fa:16:3e:38:9c:77", "network": {"id": "72c6dc0c-99bf-460e-aaba-86258fa10534", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1502229343-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0587e0fe146043ba857b2d8002ab0a3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1278df6-ec", "ovs_interfaceid": "d1278df6-eced-4cb3-82f8-ebf07482ac43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.939 254904 DEBUG nova.network.os_vif_util [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Converting VIF {"id": "d1278df6-eced-4cb3-82f8-ebf07482ac43", "address": "fa:16:3e:38:9c:77", "network": {"id": "72c6dc0c-99bf-460e-aaba-86258fa10534", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1502229343-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0587e0fe146043ba857b2d8002ab0a3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1278df6-ec", "ovs_interfaceid": "d1278df6-eced-4cb3-82f8-ebf07482ac43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.940 254904 DEBUG nova.network.os_vif_util [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:9c:77,bridge_name='br-int',has_traffic_filtering=True,id=d1278df6-eced-4cb3-82f8-ebf07482ac43,network=Network(72c6dc0c-99bf-460e-aaba-86258fa10534),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1278df6-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.943 254904 DEBUG nova.objects.instance [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lazy-loading 'pci_devices' on Instance uuid b65d7234-010a-4dd0-b06c-58dc26cfb5a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.957 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.970 254904 DEBUG nova.virt.libvirt.driver [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:19:17 np0005542249 nova_compute[254900]:  <uuid>b65d7234-010a-4dd0-b06c-58dc26cfb5a6</uuid>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:  <name>instance-00000004</name>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <nova:name>tempest-VolumesActionsTest-instance-2046096500</nova:name>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:19:16</nova:creationTime>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:19:17 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:        <nova:user uuid="505334fe5eb749e19ba727c8b2d04594">tempest-VolumesActionsTest-1586928656-project-member</nova:user>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:        <nova:project uuid="0587e0fe146043ba857b2d8002ab0a3b">tempest-VolumesActionsTest-1586928656</nova:project>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <nova:root type="image" uuid="5a40f66c-ab43-47dd-9880-e59f9fa2c60e"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:        <nova:port uuid="d1278df6-eced-4cb3-82f8-ebf07482ac43">
Dec  2 06:19:17 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <entry name="serial">b65d7234-010a-4dd0-b06c-58dc26cfb5a6</entry>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <entry name="uuid">b65d7234-010a-4dd0-b06c-58dc26cfb5a6</entry>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/b65d7234-010a-4dd0-b06c-58dc26cfb5a6_disk">
Dec  2 06:19:17 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:19:17 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/b65d7234-010a-4dd0-b06c-58dc26cfb5a6_disk.config">
Dec  2 06:19:17 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:19:17 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:38:9c:77"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <target dev="tapd1278df6-ec"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/b65d7234-010a-4dd0-b06c-58dc26cfb5a6/console.log" append="off"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:19:17 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:19:17 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:19:17 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:19:17 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.972 254904 DEBUG nova.compute.manager [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Preparing to wait for external event network-vif-plugged-d1278df6-eced-4cb3-82f8-ebf07482ac43 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.973 254904 DEBUG oslo_concurrency.lockutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Acquiring lock "b65d7234-010a-4dd0-b06c-58dc26cfb5a6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.973 254904 DEBUG oslo_concurrency.lockutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "b65d7234-010a-4dd0-b06c-58dc26cfb5a6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.974 254904 DEBUG oslo_concurrency.lockutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "b65d7234-010a-4dd0-b06c-58dc26cfb5a6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.974 254904 DEBUG nova.virt.libvirt.vif [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:19:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-2046096500',display_name='tempest-VolumesActionsTest-instance-2046096500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-2046096500',id=4,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0587e0fe146043ba857b2d8002ab0a3b',ramdisk_id='',reservation_id='r-mz5zmhmf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1586928656',owner_user_name='tempest-VolumesActionsTest-1586928656-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:19:12Z,user_data=None,user_id='505334fe5eb749e19ba727c8b2d04594',uuid=b65d7234-010a-4dd0-b06c-58dc26cfb5a6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d1278df6-eced-4cb3-82f8-ebf07482ac43", "address": "fa:16:3e:38:9c:77", "network": {"id": "72c6dc0c-99bf-460e-aaba-86258fa10534", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1502229343-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0587e0fe146043ba857b2d8002ab0a3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1278df6-ec", "ovs_interfaceid": "d1278df6-eced-4cb3-82f8-ebf07482ac43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.975 254904 DEBUG nova.network.os_vif_util [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Converting VIF {"id": "d1278df6-eced-4cb3-82f8-ebf07482ac43", "address": "fa:16:3e:38:9c:77", "network": {"id": "72c6dc0c-99bf-460e-aaba-86258fa10534", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1502229343-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0587e0fe146043ba857b2d8002ab0a3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1278df6-ec", "ovs_interfaceid": "d1278df6-eced-4cb3-82f8-ebf07482ac43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.975 254904 DEBUG nova.network.os_vif_util [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:9c:77,bridge_name='br-int',has_traffic_filtering=True,id=d1278df6-eced-4cb3-82f8-ebf07482ac43,network=Network(72c6dc0c-99bf-460e-aaba-86258fa10534),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1278df6-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.976 254904 DEBUG os_vif [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:9c:77,bridge_name='br-int',has_traffic_filtering=True,id=d1278df6-eced-4cb3-82f8-ebf07482ac43,network=Network(72c6dc0c-99bf-460e-aaba-86258fa10534),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1278df6-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.979 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.979 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.980 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.984 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.985 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd1278df6-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.985 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd1278df6-ec, col_values=(('external_ids', {'iface-id': 'd1278df6-eced-4cb3-82f8-ebf07482ac43', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:9c:77', 'vm-uuid': 'b65d7234-010a-4dd0-b06c-58dc26cfb5a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.987 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:17 np0005542249 NetworkManager[48987]: <info>  [1764674357.9886] manager: (tapd1278df6-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.989 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.994 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:17 np0005542249 nova_compute[254900]: 2025-12-02 11:19:17.995 254904 INFO os_vif [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:9c:77,bridge_name='br-int',has_traffic_filtering=True,id=d1278df6-eced-4cb3-82f8-ebf07482ac43,network=Network(72c6dc0c-99bf-460e-aaba-86258fa10534),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1278df6-ec')#033[00m
Dec  2 06:19:18 np0005542249 nova_compute[254900]: 2025-12-02 11:19:18.061 254904 DEBUG nova.virt.libvirt.driver [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:19:18 np0005542249 nova_compute[254900]: 2025-12-02 11:19:18.062 254904 DEBUG nova.virt.libvirt.driver [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:19:18 np0005542249 nova_compute[254900]: 2025-12-02 11:19:18.062 254904 DEBUG nova.virt.libvirt.driver [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] No VIF found with MAC fa:16:3e:38:9c:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:19:18 np0005542249 nova_compute[254900]: 2025-12-02 11:19:18.063 254904 INFO nova.virt.libvirt.driver [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Using config drive#033[00m
Dec  2 06:19:18 np0005542249 nova_compute[254900]: 2025-12-02 11:19:18.098 254904 DEBUG nova.storage.rbd_utils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] rbd image b65d7234-010a-4dd0-b06c-58dc26cfb5a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:19:18 np0005542249 nova_compute[254900]: 2025-12-02 11:19:18.165 254904 DEBUG nova.network.neutron [req-adb478b7-d8f2-4fdd-952e-426afcdc15c6 req-201cd1e2-c646-4b03-a867-0e12577483f9 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Updated VIF entry in instance network info cache for port d1278df6-eced-4cb3-82f8-ebf07482ac43. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:19:18 np0005542249 nova_compute[254900]: 2025-12-02 11:19:18.165 254904 DEBUG nova.network.neutron [req-adb478b7-d8f2-4fdd-952e-426afcdc15c6 req-201cd1e2-c646-4b03-a867-0e12577483f9 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Updating instance_info_cache with network_info: [{"id": "d1278df6-eced-4cb3-82f8-ebf07482ac43", "address": "fa:16:3e:38:9c:77", "network": {"id": "72c6dc0c-99bf-460e-aaba-86258fa10534", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1502229343-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0587e0fe146043ba857b2d8002ab0a3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1278df6-ec", "ovs_interfaceid": "d1278df6-eced-4cb3-82f8-ebf07482ac43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:19:18 np0005542249 nova_compute[254900]: 2025-12-02 11:19:18.189 254904 DEBUG oslo_concurrency.lockutils [req-adb478b7-d8f2-4fdd-952e-426afcdc15c6 req-201cd1e2-c646-4b03-a867-0e12577483f9 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-b65d7234-010a-4dd0-b06c-58dc26cfb5a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:19:18 np0005542249 nova_compute[254900]: 2025-12-02 11:19:18.517 254904 INFO nova.virt.libvirt.driver [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Creating config drive at /var/lib/nova/instances/b65d7234-010a-4dd0-b06c-58dc26cfb5a6/disk.config#033[00m
Dec  2 06:19:18 np0005542249 nova_compute[254900]: 2025-12-02 11:19:18.522 254904 DEBUG oslo_concurrency.processutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b65d7234-010a-4dd0-b06c-58dc26cfb5a6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3njjsz5h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:19:18 np0005542249 nova_compute[254900]: 2025-12-02 11:19:18.663 254904 DEBUG oslo_concurrency.processutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b65d7234-010a-4dd0-b06c-58dc26cfb5a6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3njjsz5h" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:19:18 np0005542249 nova_compute[254900]: 2025-12-02 11:19:18.700 254904 DEBUG nova.storage.rbd_utils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] rbd image b65d7234-010a-4dd0-b06c-58dc26cfb5a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:19:18 np0005542249 nova_compute[254900]: 2025-12-02 11:19:18.705 254904 DEBUG oslo_concurrency.processutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b65d7234-010a-4dd0-b06c-58dc26cfb5a6/disk.config b65d7234-010a-4dd0-b06c-58dc26cfb5a6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:19:18 np0005542249 nova_compute[254900]: 2025-12-02 11:19:18.891 254904 DEBUG oslo_concurrency.processutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b65d7234-010a-4dd0-b06c-58dc26cfb5a6/disk.config b65d7234-010a-4dd0-b06c-58dc26cfb5a6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:19:18 np0005542249 nova_compute[254900]: 2025-12-02 11:19:18.892 254904 INFO nova.virt.libvirt.driver [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Deleting local config drive /var/lib/nova/instances/b65d7234-010a-4dd0-b06c-58dc26cfb5a6/disk.config because it was imported into RBD.#033[00m
Dec  2 06:19:18 np0005542249 kernel: tapd1278df6-ec: entered promiscuous mode
Dec  2 06:19:18 np0005542249 NetworkManager[48987]: <info>  [1764674358.9613] manager: (tapd1278df6-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Dec  2 06:19:18 np0005542249 ovn_controller[153849]: 2025-12-02T11:19:18Z|00054|binding|INFO|Claiming lport d1278df6-eced-4cb3-82f8-ebf07482ac43 for this chassis.
Dec  2 06:19:18 np0005542249 ovn_controller[153849]: 2025-12-02T11:19:18Z|00055|binding|INFO|d1278df6-eced-4cb3-82f8-ebf07482ac43: Claiming fa:16:3e:38:9c:77 10.100.0.7
Dec  2 06:19:18 np0005542249 nova_compute[254900]: 2025-12-02 11:19:18.961 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:18 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:18.970 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:9c:77 10.100.0.7'], port_security=['fa:16:3e:38:9c:77 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b65d7234-010a-4dd0-b06c-58dc26cfb5a6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72c6dc0c-99bf-460e-aaba-86258fa10534', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0587e0fe146043ba857b2d8002ab0a3b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '277ca5fd-9d7a-41af-9803-178b99c967e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2ed1272d-4b2e-4a1e-a64d-f275687e74f1, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=d1278df6-eced-4cb3-82f8-ebf07482ac43) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:19:18 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:18.972 163757 INFO neutron.agent.ovn.metadata.agent [-] Port d1278df6-eced-4cb3-82f8-ebf07482ac43 in datapath 72c6dc0c-99bf-460e-aaba-86258fa10534 bound to our chassis#033[00m
Dec  2 06:19:18 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:18.974 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 72c6dc0c-99bf-460e-aaba-86258fa10534#033[00m
Dec  2 06:19:18 np0005542249 ovn_controller[153849]: 2025-12-02T11:19:18Z|00056|binding|INFO|Setting lport d1278df6-eced-4cb3-82f8-ebf07482ac43 ovn-installed in OVS
Dec  2 06:19:18 np0005542249 ovn_controller[153849]: 2025-12-02T11:19:18Z|00057|binding|INFO|Setting lport d1278df6-eced-4cb3-82f8-ebf07482ac43 up in Southbound
Dec  2 06:19:18 np0005542249 nova_compute[254900]: 2025-12-02 11:19:18.988 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:18 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:18.990 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[1c8e7610-86de-4265-90d6-b02f53f7450a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:18 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:18.992 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap72c6dc0c-91 in ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 06:19:18 np0005542249 nova_compute[254900]: 2025-12-02 11:19:18.993 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:18 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:18.996 262398 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap72c6dc0c-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 06:19:18 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:18.996 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[8340f319-c121-419c-bbb0-4b7176859d35]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:18 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:18.998 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[bb5ceba8-a5ba-4158-beef-852f20a42d91]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:19 np0005542249 systemd-udevd[266759]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:19:19 np0005542249 systemd-machined[216222]: New machine qemu-4-instance-00000004.
Dec  2 06:19:19 np0005542249 NetworkManager[48987]: <info>  [1764674359.0222] device (tapd1278df6-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:19.021 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[93bb004a-bf34-4606-927a-656cccde2852]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:19 np0005542249 NetworkManager[48987]: <info>  [1764674359.0238] device (tapd1278df6-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:19:19 np0005542249 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:19.038 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[86fa38f6-3359-4b38-ba09-8bea0979f9a6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:19.068 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[b4e9fa98-96b7-4c64-b3a6-c25b2a0324cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:19 np0005542249 NetworkManager[48987]: <info>  [1764674359.0769] manager: (tap72c6dc0c-90): new Veth device (/org/freedesktop/NetworkManager/Devices/41)
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:19.075 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[b3080743-bc86-4022-b608-442b78cefb75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:19.125 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[ad66595d-8502-4502-94f2-f484e43af4f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:19.129 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[37607988-8ab2-4a08-9a90-b86d660faa77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:19 np0005542249 NetworkManager[48987]: <info>  [1764674359.1564] device (tap72c6dc0c-90): carrier: link connected
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:19.170 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb953f8-8b44-4c7a-818e-d0f72b1862b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:19.194 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[2aaaa006-acbf-42e1-b01d-e29ee7389b65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72c6dc0c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:a4:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455197, 'reachable_time': 37401, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266792, 'error': None, 'target': 'ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:19.212 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[b1c3eadb-96f7-46c6-aeae-721ce064c4c3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:a4e7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455197, 'tstamp': 455197}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266793, 'error': None, 'target': 'ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.226 254904 DEBUG nova.compute.manager [req-af1588b0-b9c2-4658-9ee9-b40b932f9e53 req-72a78822-81ec-47ce-b9d1-151f86142f99 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Received event network-vif-plugged-d1278df6-eced-4cb3-82f8-ebf07482ac43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.227 254904 DEBUG oslo_concurrency.lockutils [req-af1588b0-b9c2-4658-9ee9-b40b932f9e53 req-72a78822-81ec-47ce-b9d1-151f86142f99 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "b65d7234-010a-4dd0-b06c-58dc26cfb5a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.227 254904 DEBUG oslo_concurrency.lockutils [req-af1588b0-b9c2-4658-9ee9-b40b932f9e53 req-72a78822-81ec-47ce-b9d1-151f86142f99 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "b65d7234-010a-4dd0-b06c-58dc26cfb5a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.227 254904 DEBUG oslo_concurrency.lockutils [req-af1588b0-b9c2-4658-9ee9-b40b932f9e53 req-72a78822-81ec-47ce-b9d1-151f86142f99 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "b65d7234-010a-4dd0-b06c-58dc26cfb5a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.227 254904 DEBUG nova.compute.manager [req-af1588b0-b9c2-4658-9ee9-b40b932f9e53 req-72a78822-81ec-47ce-b9d1-151f86142f99 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Processing event network-vif-plugged-d1278df6-eced-4cb3-82f8-ebf07482ac43 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:19.236 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[00856b51-d1e0-4809-b1ff-297cd8cf5386]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72c6dc0c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:a4:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455197, 'reachable_time': 37401, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266794, 'error': None, 'target': 'ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:19.273 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[c0e2a3b7-5b97-4b5d-8587-b09f8e4b18eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:19.335 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[965b28b5-5605-446f-bf62-ca8dc512d5b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:19.337 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72c6dc0c-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:19.338 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:19.338 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72c6dc0c-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:19:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Dec  2 06:19:19 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Dec  2 06:19:19 np0005542249 NetworkManager[48987]: <info>  [1764674359.3887] manager: (tap72c6dc0c-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.388 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:19 np0005542249 kernel: tap72c6dc0c-90: entered promiscuous mode
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:19.395 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap72c6dc0c-90, col_values=(('external_ids', {'iface-id': '534ec691-ef20-495e-9e85-e96ce635d23b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:19:19 np0005542249 ovn_controller[153849]: 2025-12-02T11:19:19Z|00058|binding|INFO|Releasing lport 534ec691-ef20-495e-9e85-e96ce635d23b from this chassis (sb_readonly=0)
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.397 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.418 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:19.420 163757 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/72c6dc0c-99bf-460e-aaba-86258fa10534.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/72c6dc0c-99bf-460e-aaba-86258fa10534.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:19.421 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[abc790c2-ea9b-473a-87cb-560097a84c3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:19.422 163757 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: global
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]:    log         /dev/log local0 debug
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]:    log-tag     haproxy-metadata-proxy-72c6dc0c-99bf-460e-aaba-86258fa10534
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]:    user        root
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]:    group       root
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]:    maxconn     1024
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]:    pidfile     /var/lib/neutron/external/pids/72c6dc0c-99bf-460e-aaba-86258fa10534.pid.haproxy
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]:    daemon
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: defaults
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]:    log global
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]:    mode http
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]:    option httplog
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]:    option dontlognull
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]:    option http-server-close
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]:    option forwardfor
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]:    retries                 3
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]:    timeout http-request    30s
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]:    timeout connect         30s
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]:    timeout client          32s
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]:    timeout server          32s
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]:    timeout http-keep-alive 30s
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: listen listener
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]:    bind 169.254.169.254:80
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]:    http-request add-header X-OVN-Network-ID 72c6dc0c-99bf-460e-aaba-86258fa10534
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:19.425 163757 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534', 'env', 'PROCESS_TAG=haproxy-72c6dc0c-99bf-460e-aaba-86258fa10534', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/72c6dc0c-99bf-460e-aaba-86258fa10534.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.547 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674359.547203, b65d7234-010a-4dd0-b06c-58dc26cfb5a6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.548 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] VM Started (Lifecycle Event)#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.551 254904 DEBUG nova.compute.manager [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.556 254904 DEBUG nova.virt.libvirt.driver [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.559 254904 INFO nova.virt.libvirt.driver [-] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Instance spawned successfully.#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.560 254904 DEBUG nova.virt.libvirt.driver [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.571 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.579 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.585 254904 DEBUG nova.virt.libvirt.driver [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.585 254904 DEBUG nova.virt.libvirt.driver [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.586 254904 DEBUG nova.virt.libvirt.driver [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.586 254904 DEBUG nova.virt.libvirt.driver [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.587 254904 DEBUG nova.virt.libvirt.driver [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.588 254904 DEBUG nova.virt.libvirt.driver [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.626 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.627 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674359.5473416, b65d7234-010a-4dd0-b06c-58dc26cfb5a6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.628 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] VM Paused (Lifecycle Event)#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.660 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.663 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674359.5548685, b65d7234-010a-4dd0-b06c-58dc26cfb5a6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.664 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] VM Resumed (Lifecycle Event)#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.671 254904 INFO nova.compute.manager [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Took 6.78 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.671 254904 DEBUG nova.compute.manager [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.683 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.687 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.709 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.727 254904 INFO nova.compute.manager [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Took 7.84 seconds to build instance.#033[00m
Dec  2 06:19:19 np0005542249 nova_compute[254900]: 2025-12-02 11:19:19.745 254904 DEBUG oslo_concurrency.lockutils [None req-51279cad-f0a2-4649-82f6-96c790e8d4a2 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "b65d7234-010a-4dd0-b06c-58dc26cfb5a6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:19.833 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:19.834 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:19.834 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1083: 321 pgs: 321 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 144 op/s
Dec  2 06:19:19 np0005542249 podman[266865]: 2025-12-02 11:19:19.904877619 +0000 UTC m=+0.081288437 container create 0ef0726aa9e104c1034725592c61579184b79485de559e1f6486a9173f674910 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec  2 06:19:19 np0005542249 systemd[1]: Started libpod-conmon-0ef0726aa9e104c1034725592c61579184b79485de559e1f6486a9173f674910.scope.
Dec  2 06:19:19 np0005542249 podman[266865]: 2025-12-02 11:19:19.864510018 +0000 UTC m=+0.040920886 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:19:19 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:19:19 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e0a19f711f03069ce77994e8050383c4b9942f1a9ee7bbf271d26ea19bbd8e4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 06:19:20 np0005542249 podman[266865]: 2025-12-02 11:19:20.0167985 +0000 UTC m=+0.193209358 container init 0ef0726aa9e104c1034725592c61579184b79485de559e1f6486a9173f674910 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:19:20 np0005542249 podman[266865]: 2025-12-02 11:19:20.026910615 +0000 UTC m=+0.203321503 container start 0ef0726aa9e104c1034725592c61579184b79485de559e1f6486a9173f674910 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  2 06:19:20 np0005542249 neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534[266881]: [NOTICE]   (266885) : New worker (266887) forked
Dec  2 06:19:20 np0005542249 neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534[266881]: [NOTICE]   (266885) : Loading success.
Dec  2 06:19:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:19:20 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1802648729' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:19:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:19:20 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1802648729' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:19:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:19:21 np0005542249 nova_compute[254900]: 2025-12-02 11:19:21.353 254904 DEBUG nova.compute.manager [req-441d42c4-052d-4a89-9cc7-ad01c7879bb2 req-3fedf781-43c2-4ca1-81c7-1a8cc47a31b4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Received event network-vif-plugged-d1278df6-eced-4cb3-82f8-ebf07482ac43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:19:21 np0005542249 nova_compute[254900]: 2025-12-02 11:19:21.354 254904 DEBUG oslo_concurrency.lockutils [req-441d42c4-052d-4a89-9cc7-ad01c7879bb2 req-3fedf781-43c2-4ca1-81c7-1a8cc47a31b4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "b65d7234-010a-4dd0-b06c-58dc26cfb5a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:21 np0005542249 nova_compute[254900]: 2025-12-02 11:19:21.354 254904 DEBUG oslo_concurrency.lockutils [req-441d42c4-052d-4a89-9cc7-ad01c7879bb2 req-3fedf781-43c2-4ca1-81c7-1a8cc47a31b4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "b65d7234-010a-4dd0-b06c-58dc26cfb5a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:21 np0005542249 nova_compute[254900]: 2025-12-02 11:19:21.354 254904 DEBUG oslo_concurrency.lockutils [req-441d42c4-052d-4a89-9cc7-ad01c7879bb2 req-3fedf781-43c2-4ca1-81c7-1a8cc47a31b4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "b65d7234-010a-4dd0-b06c-58dc26cfb5a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:21 np0005542249 nova_compute[254900]: 2025-12-02 11:19:21.355 254904 DEBUG nova.compute.manager [req-441d42c4-052d-4a89-9cc7-ad01c7879bb2 req-3fedf781-43c2-4ca1-81c7-1a8cc47a31b4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] No waiting events found dispatching network-vif-plugged-d1278df6-eced-4cb3-82f8-ebf07482ac43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:19:21 np0005542249 nova_compute[254900]: 2025-12-02 11:19:21.355 254904 WARNING nova.compute.manager [req-441d42c4-052d-4a89-9cc7-ad01c7879bb2 req-3fedf781-43c2-4ca1-81c7-1a8cc47a31b4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Received unexpected event network-vif-plugged-d1278df6-eced-4cb3-82f8-ebf07482ac43 for instance with vm_state active and task_state None.#033[00m
Dec  2 06:19:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1084: 321 pgs: 321 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 97 op/s
Dec  2 06:19:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Dec  2 06:19:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Dec  2 06:19:22 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Dec  2 06:19:22 np0005542249 nova_compute[254900]: 2025-12-02 11:19:22.865 254904 DEBUG oslo_concurrency.lockutils [None req-fc891aa1-02cc-4171-ac41-b7ed74df9b4d 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Acquiring lock "b65d7234-010a-4dd0-b06c-58dc26cfb5a6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:22 np0005542249 nova_compute[254900]: 2025-12-02 11:19:22.865 254904 DEBUG oslo_concurrency.lockutils [None req-fc891aa1-02cc-4171-ac41-b7ed74df9b4d 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "b65d7234-010a-4dd0-b06c-58dc26cfb5a6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:22 np0005542249 nova_compute[254900]: 2025-12-02 11:19:22.866 254904 DEBUG oslo_concurrency.lockutils [None req-fc891aa1-02cc-4171-ac41-b7ed74df9b4d 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Acquiring lock "b65d7234-010a-4dd0-b06c-58dc26cfb5a6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:22 np0005542249 nova_compute[254900]: 2025-12-02 11:19:22.866 254904 DEBUG oslo_concurrency.lockutils [None req-fc891aa1-02cc-4171-ac41-b7ed74df9b4d 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "b65d7234-010a-4dd0-b06c-58dc26cfb5a6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:22 np0005542249 nova_compute[254900]: 2025-12-02 11:19:22.867 254904 DEBUG oslo_concurrency.lockutils [None req-fc891aa1-02cc-4171-ac41-b7ed74df9b4d 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "b65d7234-010a-4dd0-b06c-58dc26cfb5a6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:22 np0005542249 nova_compute[254900]: 2025-12-02 11:19:22.868 254904 INFO nova.compute.manager [None req-fc891aa1-02cc-4171-ac41-b7ed74df9b4d 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Terminating instance#033[00m
Dec  2 06:19:22 np0005542249 nova_compute[254900]: 2025-12-02 11:19:22.869 254904 DEBUG nova.compute.manager [None req-fc891aa1-02cc-4171-ac41-b7ed74df9b4d 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:19:22 np0005542249 kernel: tapd1278df6-ec (unregistering): left promiscuous mode
Dec  2 06:19:22 np0005542249 NetworkManager[48987]: <info>  [1764674362.9247] device (tapd1278df6-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:19:22 np0005542249 ovn_controller[153849]: 2025-12-02T11:19:22Z|00059|binding|INFO|Releasing lport d1278df6-eced-4cb3-82f8-ebf07482ac43 from this chassis (sb_readonly=0)
Dec  2 06:19:22 np0005542249 ovn_controller[153849]: 2025-12-02T11:19:22Z|00060|binding|INFO|Setting lport d1278df6-eced-4cb3-82f8-ebf07482ac43 down in Southbound
Dec  2 06:19:22 np0005542249 ovn_controller[153849]: 2025-12-02T11:19:22Z|00061|binding|INFO|Removing iface tapd1278df6-ec ovn-installed in OVS
Dec  2 06:19:22 np0005542249 nova_compute[254900]: 2025-12-02 11:19:22.944 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:22 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:22.952 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:9c:77 10.100.0.7'], port_security=['fa:16:3e:38:9c:77 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b65d7234-010a-4dd0-b06c-58dc26cfb5a6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72c6dc0c-99bf-460e-aaba-86258fa10534', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0587e0fe146043ba857b2d8002ab0a3b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '277ca5fd-9d7a-41af-9803-178b99c967e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2ed1272d-4b2e-4a1e-a64d-f275687e74f1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=d1278df6-eced-4cb3-82f8-ebf07482ac43) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:19:22 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:22.954 163757 INFO neutron.agent.ovn.metadata.agent [-] Port d1278df6-eced-4cb3-82f8-ebf07482ac43 in datapath 72c6dc0c-99bf-460e-aaba-86258fa10534 unbound from our chassis#033[00m
Dec  2 06:19:22 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:22.956 163757 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 72c6dc0c-99bf-460e-aaba-86258fa10534, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 06:19:22 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:22.957 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[8aaeda31-7b41-4cf3-8ffb-be141df12465]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:22 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:22.958 163757 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534 namespace which is not needed anymore#033[00m
Dec  2 06:19:22 np0005542249 nova_compute[254900]: 2025-12-02 11:19:22.978 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:22 np0005542249 nova_compute[254900]: 2025-12-02 11:19:22.986 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:22 np0005542249 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Dec  2 06:19:22 np0005542249 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 3.908s CPU time.
Dec  2 06:19:23 np0005542249 systemd-machined[216222]: Machine qemu-4-instance-00000004 terminated.
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.095 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.102 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.116 254904 INFO nova.virt.libvirt.driver [-] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Instance destroyed successfully.#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.119 254904 DEBUG nova.objects.instance [None req-fc891aa1-02cc-4171-ac41-b7ed74df9b4d 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lazy-loading 'resources' on Instance uuid b65d7234-010a-4dd0-b06c-58dc26cfb5a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.133 254904 DEBUG nova.virt.libvirt.vif [None req-fc891aa1-02cc-4171-ac41-b7ed74df9b4d 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:19:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-2046096500',display_name='tempest-VolumesActionsTest-instance-2046096500',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-2046096500',id=4,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:19:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0587e0fe146043ba857b2d8002ab0a3b',ramdisk_id='',reservation_id='r-mz5zmhmf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-1586928656',owner_user_name='tempest-VolumesActionsTest-1586928656-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:19:19Z,user_data=None,user_id='505334fe5eb749e19ba727c8b2d04594',uuid=b65d7234-010a-4dd0-b06c-58dc26cfb5a6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1278df6-eced-4cb3-82f8-ebf07482ac43", "address": "fa:16:3e:38:9c:77", "network": {"id": "72c6dc0c-99bf-460e-aaba-86258fa10534", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1502229343-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0587e0fe146043ba857b2d8002ab0a3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1278df6-ec", "ovs_interfaceid": "d1278df6-eced-4cb3-82f8-ebf07482ac43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.134 254904 DEBUG nova.network.os_vif_util [None req-fc891aa1-02cc-4171-ac41-b7ed74df9b4d 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Converting VIF {"id": "d1278df6-eced-4cb3-82f8-ebf07482ac43", "address": "fa:16:3e:38:9c:77", "network": {"id": "72c6dc0c-99bf-460e-aaba-86258fa10534", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1502229343-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0587e0fe146043ba857b2d8002ab0a3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1278df6-ec", "ovs_interfaceid": "d1278df6-eced-4cb3-82f8-ebf07482ac43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.136 254904 DEBUG nova.network.os_vif_util [None req-fc891aa1-02cc-4171-ac41-b7ed74df9b4d 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:9c:77,bridge_name='br-int',has_traffic_filtering=True,id=d1278df6-eced-4cb3-82f8-ebf07482ac43,network=Network(72c6dc0c-99bf-460e-aaba-86258fa10534),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1278df6-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.137 254904 DEBUG os_vif [None req-fc891aa1-02cc-4171-ac41-b7ed74df9b4d 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:9c:77,bridge_name='br-int',has_traffic_filtering=True,id=d1278df6-eced-4cb3-82f8-ebf07482ac43,network=Network(72c6dc0c-99bf-460e-aaba-86258fa10534),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1278df6-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.140 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.141 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd1278df6-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.144 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.146 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.151 254904 INFO os_vif [None req-fc891aa1-02cc-4171-ac41-b7ed74df9b4d 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:9c:77,bridge_name='br-int',has_traffic_filtering=True,id=d1278df6-eced-4cb3-82f8-ebf07482ac43,network=Network(72c6dc0c-99bf-460e-aaba-86258fa10534),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1278df6-ec')#033[00m
Dec  2 06:19:23 np0005542249 neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534[266881]: [NOTICE]   (266885) : haproxy version is 2.8.14-c23fe91
Dec  2 06:19:23 np0005542249 neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534[266881]: [NOTICE]   (266885) : path to executable is /usr/sbin/haproxy
Dec  2 06:19:23 np0005542249 neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534[266881]: [WARNING]  (266885) : Exiting Master process...
Dec  2 06:19:23 np0005542249 neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534[266881]: [WARNING]  (266885) : Exiting Master process...
Dec  2 06:19:23 np0005542249 neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534[266881]: [ALERT]    (266885) : Current worker (266887) exited with code 143 (Terminated)
Dec  2 06:19:23 np0005542249 neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534[266881]: [WARNING]  (266885) : All workers exited. Exiting... (0)
Dec  2 06:19:23 np0005542249 systemd[1]: libpod-0ef0726aa9e104c1034725592c61579184b79485de559e1f6486a9173f674910.scope: Deactivated successfully.
Dec  2 06:19:23 np0005542249 podman[266921]: 2025-12-02 11:19:23.179675316 +0000 UTC m=+0.073682847 container died 0ef0726aa9e104c1034725592c61579184b79485de559e1f6486a9173f674910 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  2 06:19:23 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0ef0726aa9e104c1034725592c61579184b79485de559e1f6486a9173f674910-userdata-shm.mount: Deactivated successfully.
Dec  2 06:19:23 np0005542249 systemd[1]: var-lib-containers-storage-overlay-2e0a19f711f03069ce77994e8050383c4b9942f1a9ee7bbf271d26ea19bbd8e4-merged.mount: Deactivated successfully.
Dec  2 06:19:23 np0005542249 podman[266921]: 2025-12-02 11:19:23.236692894 +0000 UTC m=+0.130700445 container cleanup 0ef0726aa9e104c1034725592c61579184b79485de559e1f6486a9173f674910 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec  2 06:19:23 np0005542249 systemd[1]: libpod-conmon-0ef0726aa9e104c1034725592c61579184b79485de559e1f6486a9173f674910.scope: Deactivated successfully.
Dec  2 06:19:23 np0005542249 podman[266977]: 2025-12-02 11:19:23.333105957 +0000 UTC m=+0.057237905 container remove 0ef0726aa9e104c1034725592c61579184b79485de559e1f6486a9173f674910 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  2 06:19:23 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:23.345 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[fc3da631-f1e4-4fbc-8f7b-689edd2a1ef0]: (4, ('Tue Dec  2 11:19:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534 (0ef0726aa9e104c1034725592c61579184b79485de559e1f6486a9173f674910)\n0ef0726aa9e104c1034725592c61579184b79485de559e1f6486a9173f674910\nTue Dec  2 11:19:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534 (0ef0726aa9e104c1034725592c61579184b79485de559e1f6486a9173f674910)\n0ef0726aa9e104c1034725592c61579184b79485de559e1f6486a9173f674910\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:23 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:23.349 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[fb76cfb6-4664-44b0-870c-90a6e0d6d6bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:23 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:23.350 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72c6dc0c-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.353 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:23 np0005542249 kernel: tap72c6dc0c-90: left promiscuous mode
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.385 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:23 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:23.390 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[4b417c5e-935e-4b81-bec5-e524ec596593]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:23 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:23.407 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[70359678-070e-4724-b1a6-ca4d97be5086]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:23 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:23.410 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[3696ec58-2e7a-41da-ae91-fe590fe51dd8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.425 254904 DEBUG nova.compute.manager [req-2780b4c2-39e6-43c3-b2ec-07d7716ac1f6 req-2f7bf6e1-7fa6-4718-bc2e-b618d544f909 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Received event network-vif-unplugged-d1278df6-eced-4cb3-82f8-ebf07482ac43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.426 254904 DEBUG oslo_concurrency.lockutils [req-2780b4c2-39e6-43c3-b2ec-07d7716ac1f6 req-2f7bf6e1-7fa6-4718-bc2e-b618d544f909 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "b65d7234-010a-4dd0-b06c-58dc26cfb5a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.427 254904 DEBUG oslo_concurrency.lockutils [req-2780b4c2-39e6-43c3-b2ec-07d7716ac1f6 req-2f7bf6e1-7fa6-4718-bc2e-b618d544f909 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "b65d7234-010a-4dd0-b06c-58dc26cfb5a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.428 254904 DEBUG oslo_concurrency.lockutils [req-2780b4c2-39e6-43c3-b2ec-07d7716ac1f6 req-2f7bf6e1-7fa6-4718-bc2e-b618d544f909 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "b65d7234-010a-4dd0-b06c-58dc26cfb5a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.428 254904 DEBUG nova.compute.manager [req-2780b4c2-39e6-43c3-b2ec-07d7716ac1f6 req-2f7bf6e1-7fa6-4718-bc2e-b618d544f909 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] No waiting events found dispatching network-vif-unplugged-d1278df6-eced-4cb3-82f8-ebf07482ac43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.429 254904 DEBUG nova.compute.manager [req-2780b4c2-39e6-43c3-b2ec-07d7716ac1f6 req-2f7bf6e1-7fa6-4718-bc2e-b618d544f909 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Received event network-vif-unplugged-d1278df6-eced-4cb3-82f8-ebf07482ac43 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.429 254904 DEBUG nova.compute.manager [req-2780b4c2-39e6-43c3-b2ec-07d7716ac1f6 req-2f7bf6e1-7fa6-4718-bc2e-b618d544f909 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Received event network-vif-plugged-d1278df6-eced-4cb3-82f8-ebf07482ac43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.430 254904 DEBUG oslo_concurrency.lockutils [req-2780b4c2-39e6-43c3-b2ec-07d7716ac1f6 req-2f7bf6e1-7fa6-4718-bc2e-b618d544f909 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "b65d7234-010a-4dd0-b06c-58dc26cfb5a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.431 254904 DEBUG oslo_concurrency.lockutils [req-2780b4c2-39e6-43c3-b2ec-07d7716ac1f6 req-2f7bf6e1-7fa6-4718-bc2e-b618d544f909 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "b65d7234-010a-4dd0-b06c-58dc26cfb5a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.432 254904 DEBUG oslo_concurrency.lockutils [req-2780b4c2-39e6-43c3-b2ec-07d7716ac1f6 req-2f7bf6e1-7fa6-4718-bc2e-b618d544f909 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "b65d7234-010a-4dd0-b06c-58dc26cfb5a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.432 254904 DEBUG nova.compute.manager [req-2780b4c2-39e6-43c3-b2ec-07d7716ac1f6 req-2f7bf6e1-7fa6-4718-bc2e-b618d544f909 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] No waiting events found dispatching network-vif-plugged-d1278df6-eced-4cb3-82f8-ebf07482ac43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.433 254904 WARNING nova.compute.manager [req-2780b4c2-39e6-43c3-b2ec-07d7716ac1f6 req-2f7bf6e1-7fa6-4718-bc2e-b618d544f909 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Received unexpected event network-vif-plugged-d1278df6-eced-4cb3-82f8-ebf07482ac43 for instance with vm_state active and task_state deleting.#033[00m
Dec  2 06:19:23 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:23.440 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[c41619d6-eba5-4e62-b386-518133df60af]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455188, 'reachable_time': 21229, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266992, 'error': None, 'target': 'ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:23 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:23.445 164036 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 06:19:23 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:23.445 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[f5261f30-e768-477c-bb71-c6f26309aeea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:23 np0005542249 systemd[1]: run-netns-ovnmeta\x2d72c6dc0c\x2d99bf\x2d460e\x2daaba\x2d86258fa10534.mount: Deactivated successfully.
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.573 254904 INFO nova.virt.libvirt.driver [None req-fc891aa1-02cc-4171-ac41-b7ed74df9b4d 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Deleting instance files /var/lib/nova/instances/b65d7234-010a-4dd0-b06c-58dc26cfb5a6_del#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.575 254904 INFO nova.virt.libvirt.driver [None req-fc891aa1-02cc-4171-ac41-b7ed74df9b4d 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Deletion of /var/lib/nova/instances/b65d7234-010a-4dd0-b06c-58dc26cfb5a6_del complete#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.635 254904 INFO nova.compute.manager [None req-fc891aa1-02cc-4171-ac41-b7ed74df9b4d 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.636 254904 DEBUG oslo.service.loopingcall [None req-fc891aa1-02cc-4171-ac41-b7ed74df9b4d 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.636 254904 DEBUG nova.compute.manager [-] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:19:23 np0005542249 nova_compute[254900]: 2025-12-02 11:19:23.637 254904 DEBUG nova.network.neutron [-] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:19:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1086: 321 pgs: 321 active+clean; 126 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 4.6 MiB/s wr, 283 op/s
Dec  2 06:19:24 np0005542249 nova_compute[254900]: 2025-12-02 11:19:24.311 254904 DEBUG nova.network.neutron [-] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:19:24 np0005542249 nova_compute[254900]: 2025-12-02 11:19:24.340 254904 INFO nova.compute.manager [-] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Took 0.70 seconds to deallocate network for instance.#033[00m
Dec  2 06:19:24 np0005542249 nova_compute[254900]: 2025-12-02 11:19:24.395 254904 DEBUG nova.compute.manager [req-a6d0cb8c-8353-4995-b657-cd40d8dead93 req-fb3658a0-52e4-49ca-879b-095d746def02 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Received event network-vif-deleted-d1278df6-eced-4cb3-82f8-ebf07482ac43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:19:24 np0005542249 nova_compute[254900]: 2025-12-02 11:19:24.398 254904 DEBUG oslo_concurrency.lockutils [None req-fc891aa1-02cc-4171-ac41-b7ed74df9b4d 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:24 np0005542249 nova_compute[254900]: 2025-12-02 11:19:24.398 254904 DEBUG oslo_concurrency.lockutils [None req-fc891aa1-02cc-4171-ac41-b7ed74df9b4d 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:24 np0005542249 nova_compute[254900]: 2025-12-02 11:19:24.493 254904 DEBUG oslo_concurrency.processutils [None req-fc891aa1-02cc-4171-ac41-b7ed74df9b4d 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:19:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:19:25 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/821963383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:19:25 np0005542249 nova_compute[254900]: 2025-12-02 11:19:25.087 254904 DEBUG oslo_concurrency.processutils [None req-fc891aa1-02cc-4171-ac41-b7ed74df9b4d 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:19:25 np0005542249 nova_compute[254900]: 2025-12-02 11:19:25.095 254904 DEBUG nova.compute.provider_tree [None req-fc891aa1-02cc-4171-ac41-b7ed74df9b4d 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:19:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1087: 321 pgs: 321 active+clean; 126 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 2.7 MiB/s wr, 237 op/s
Dec  2 06:19:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:19:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3472256961' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:19:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:19:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3472256961' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:19:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:19:26
Dec  2 06:19:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:19:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:19:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'default.rgw.log', '.mgr', 'images', 'backups', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta']
Dec  2 06:19:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:19:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Dec  2 06:19:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:19:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Dec  2 06:19:26 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Dec  2 06:19:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:19:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:19:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:19:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:19:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:19:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:19:26 np0005542249 nova_compute[254900]: 2025-12-02 11:19:26.560 254904 DEBUG nova.scheduler.client.report [None req-fc891aa1-02cc-4171-ac41-b7ed74df9b4d 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:19:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:19:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:19:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:19:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:19:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:19:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:19:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:19:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:19:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:19:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:19:27 np0005542249 nova_compute[254900]: 2025-12-02 11:19:27.027 254904 DEBUG oslo_concurrency.lockutils [None req-fc891aa1-02cc-4171-ac41-b7ed74df9b4d 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:27 np0005542249 nova_compute[254900]: 2025-12-02 11:19:27.076 254904 INFO nova.scheduler.client.report [None req-fc891aa1-02cc-4171-ac41-b7ed74df9b4d 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Deleted allocations for instance b65d7234-010a-4dd0-b06c-58dc26cfb5a6#033[00m
Dec  2 06:19:27 np0005542249 nova_compute[254900]: 2025-12-02 11:19:27.188 254904 DEBUG oslo_concurrency.lockutils [None req-fc891aa1-02cc-4171-ac41-b7ed74df9b4d 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "b65d7234-010a-4dd0-b06c-58dc26cfb5a6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.322s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:19:27 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1151275585' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:19:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:19:27 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1151275585' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:19:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1089: 321 pgs: 321 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 2.7 MiB/s wr, 284 op/s
Dec  2 06:19:27 np0005542249 nova_compute[254900]: 2025-12-02 11:19:27.985 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:28 np0005542249 nova_compute[254900]: 2025-12-02 11:19:28.144 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:29 np0005542249 podman[267015]: 2025-12-02 11:19:29.005550553 +0000 UTC m=+0.085052306 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  2 06:19:29 np0005542249 nova_compute[254900]: 2025-12-02 11:19:29.298 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764674354.297085, 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:19:29 np0005542249 nova_compute[254900]: 2025-12-02 11:19:29.299 254904 INFO nova.compute.manager [-] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] VM Stopped (Lifecycle Event)#033[00m
Dec  2 06:19:29 np0005542249 nova_compute[254900]: 2025-12-02 11:19:29.323 254904 DEBUG nova.compute.manager [None req-3ce66121-91ea-457c-8cd1-0c11dcbd3a1a - - - - - -] [instance: 0be3aa38-c2a1-4a78-b4a6-6419c4c9265b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:19:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1090: 321 pgs: 321 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.7 MiB/s wr, 302 op/s
Dec  2 06:19:30 np0005542249 nova_compute[254900]: 2025-12-02 11:19:30.276 254904 DEBUG oslo_concurrency.lockutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Acquiring lock "4981d357-863a-45cb-a3e6-6d8e85010252" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:30 np0005542249 nova_compute[254900]: 2025-12-02 11:19:30.277 254904 DEBUG oslo_concurrency.lockutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "4981d357-863a-45cb-a3e6-6d8e85010252" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:30 np0005542249 nova_compute[254900]: 2025-12-02 11:19:30.298 254904 DEBUG nova.compute.manager [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  2 06:19:30 np0005542249 nova_compute[254900]: 2025-12-02 11:19:30.366 254904 DEBUG oslo_concurrency.lockutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:19:30 np0005542249 nova_compute[254900]: 2025-12-02 11:19:30.367 254904 DEBUG oslo_concurrency.lockutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:19:30 np0005542249 nova_compute[254900]: 2025-12-02 11:19:30.377 254904 DEBUG nova.virt.hardware [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  2 06:19:30 np0005542249 nova_compute[254900]: 2025-12-02 11:19:30.377 254904 INFO nova.compute.claims [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Claim successful on node compute-0.ctlplane.example.com
Dec  2 06:19:30 np0005542249 nova_compute[254900]: 2025-12-02 11:19:30.523 254904 DEBUG oslo_concurrency.processutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 06:19:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:19:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1825277133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:19:30 np0005542249 nova_compute[254900]: 2025-12-02 11:19:30.981 254904 DEBUG oslo_concurrency.processutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:19:30 np0005542249 nova_compute[254900]: 2025-12-02 11:19:30.991 254904 DEBUG nova.compute.provider_tree [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  2 06:19:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Dec  2 06:19:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Dec  2 06:19:31 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Dec  2 06:19:31 np0005542249 nova_compute[254900]: 2025-12-02 11:19:31.017 254904 DEBUG nova.scheduler.client.report [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  2 06:19:31 np0005542249 nova_compute[254900]: 2025-12-02 11:19:31.056 254904 DEBUG oslo_concurrency.lockutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:19:31 np0005542249 nova_compute[254900]: 2025-12-02 11:19:31.058 254904 DEBUG nova.compute.manager [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  2 06:19:31 np0005542249 nova_compute[254900]: 2025-12-02 11:19:31.107 254904 DEBUG nova.compute.manager [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  2 06:19:31 np0005542249 nova_compute[254900]: 2025-12-02 11:19:31.108 254904 DEBUG nova.network.neutron [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  2 06:19:31 np0005542249 nova_compute[254900]: 2025-12-02 11:19:31.129 254904 INFO nova.virt.libvirt.driver [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  2 06:19:31 np0005542249 nova_compute[254900]: 2025-12-02 11:19:31.154 254904 DEBUG nova.compute.manager [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  2 06:19:31 np0005542249 nova_compute[254900]: 2025-12-02 11:19:31.240 254904 DEBUG nova.compute.manager [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  2 06:19:31 np0005542249 nova_compute[254900]: 2025-12-02 11:19:31.241 254904 DEBUG nova.virt.libvirt.driver [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  2 06:19:31 np0005542249 nova_compute[254900]: 2025-12-02 11:19:31.241 254904 INFO nova.virt.libvirt.driver [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Creating image(s)
Dec  2 06:19:31 np0005542249 nova_compute[254900]: 2025-12-02 11:19:31.264 254904 DEBUG nova.storage.rbd_utils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] rbd image 4981d357-863a-45cb-a3e6-6d8e85010252_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  2 06:19:31 np0005542249 nova_compute[254900]: 2025-12-02 11:19:31.289 254904 DEBUG nova.storage.rbd_utils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] rbd image 4981d357-863a-45cb-a3e6-6d8e85010252_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  2 06:19:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:19:31 np0005542249 nova_compute[254900]: 2025-12-02 11:19:31.326 254904 DEBUG nova.storage.rbd_utils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] rbd image 4981d357-863a-45cb-a3e6-6d8e85010252_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  2 06:19:31 np0005542249 nova_compute[254900]: 2025-12-02 11:19:31.333 254904 DEBUG oslo_concurrency.processutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 06:19:31 np0005542249 nova_compute[254900]: 2025-12-02 11:19:31.363 254904 DEBUG nova.policy [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '505334fe5eb749e19ba727c8b2d04594', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0587e0fe146043ba857b2d8002ab0a3b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  2 06:19:31 np0005542249 nova_compute[254900]: 2025-12-02 11:19:31.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 06:19:31 np0005542249 nova_compute[254900]: 2025-12-02 11:19:31.383 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec  2 06:19:31 np0005542249 nova_compute[254900]: 2025-12-02 11:19:31.401 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec  2 06:19:31 np0005542249 nova_compute[254900]: 2025-12-02 11:19:31.404 254904 DEBUG oslo_concurrency.processutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:19:31 np0005542249 nova_compute[254900]: 2025-12-02 11:19:31.405 254904 DEBUG oslo_concurrency.lockutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Acquiring lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:19:31 np0005542249 nova_compute[254900]: 2025-12-02 11:19:31.405 254904 DEBUG oslo_concurrency.lockutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:19:31 np0005542249 nova_compute[254900]: 2025-12-02 11:19:31.406 254904 DEBUG oslo_concurrency.lockutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:19:31 np0005542249 nova_compute[254900]: 2025-12-02 11:19:31.433 254904 DEBUG nova.storage.rbd_utils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] rbd image 4981d357-863a-45cb-a3e6-6d8e85010252_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  2 06:19:31 np0005542249 nova_compute[254900]: 2025-12-02 11:19:31.438 254904 DEBUG oslo_concurrency.processutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 4981d357-863a-45cb-a3e6-6d8e85010252_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 06:19:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:19:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1906527178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:19:31 np0005542249 nova_compute[254900]: 2025-12-02 11:19:31.881 254904 DEBUG oslo_concurrency.processutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 4981d357-863a-45cb-a3e6-6d8e85010252_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:19:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1092: 321 pgs: 321 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 134 KiB/s wr, 127 op/s
Dec  2 06:19:31 np0005542249 nova_compute[254900]: 2025-12-02 11:19:31.967 254904 DEBUG nova.storage.rbd_utils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] resizing rbd image 4981d357-863a-45cb-a3e6-6d8e85010252_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec  2 06:19:32 np0005542249 nova_compute[254900]: 2025-12-02 11:19:32.080 254904 DEBUG nova.objects.instance [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lazy-loading 'migration_context' on Instance uuid 4981d357-863a-45cb-a3e6-6d8e85010252 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  2 06:19:32 np0005542249 nova_compute[254900]: 2025-12-02 11:19:32.095 254904 DEBUG nova.virt.libvirt.driver [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  2 06:19:32 np0005542249 nova_compute[254900]: 2025-12-02 11:19:32.095 254904 DEBUG nova.virt.libvirt.driver [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Ensure instance console log exists: /var/lib/nova/instances/4981d357-863a-45cb-a3e6-6d8e85010252/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  2 06:19:32 np0005542249 nova_compute[254900]: 2025-12-02 11:19:32.096 254904 DEBUG oslo_concurrency.lockutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:19:32 np0005542249 nova_compute[254900]: 2025-12-02 11:19:32.096 254904 DEBUG oslo_concurrency.lockutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:19:32 np0005542249 nova_compute[254900]: 2025-12-02 11:19:32.097 254904 DEBUG oslo_concurrency.lockutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:19:32 np0005542249 nova_compute[254900]: 2025-12-02 11:19:32.224 254904 DEBUG nova.network.neutron [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Successfully created port: 90a1736a-6da6-48ad-8cfb-3ba7e7538576 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  2 06:19:32 np0005542249 nova_compute[254900]: 2025-12-02 11:19:32.977 254904 DEBUG nova.network.neutron [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Successfully updated port: 90a1736a-6da6-48ad-8cfb-3ba7e7538576 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  2 06:19:32 np0005542249 nova_compute[254900]: 2025-12-02 11:19:32.987 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:19:32 np0005542249 nova_compute[254900]: 2025-12-02 11:19:32.992 254904 DEBUG oslo_concurrency.lockutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Acquiring lock "refresh_cache-4981d357-863a-45cb-a3e6-6d8e85010252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  2 06:19:32 np0005542249 nova_compute[254900]: 2025-12-02 11:19:32.993 254904 DEBUG oslo_concurrency.lockutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Acquired lock "refresh_cache-4981d357-863a-45cb-a3e6-6d8e85010252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  2 06:19:32 np0005542249 nova_compute[254900]: 2025-12-02 11:19:32.993 254904 DEBUG nova.network.neutron [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  2 06:19:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Dec  2 06:19:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Dec  2 06:19:33 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Dec  2 06:19:33 np0005542249 nova_compute[254900]: 2025-12-02 11:19:33.075 254904 DEBUG nova.compute.manager [req-a4313dfc-1c23-4aae-a908-9175b76fd7a2 req-ef0ed8e4-c932-4947-a760-4a0d8fb2fd63 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Received event network-changed-90a1736a-6da6-48ad-8cfb-3ba7e7538576 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  2 06:19:33 np0005542249 nova_compute[254900]: 2025-12-02 11:19:33.076 254904 DEBUG nova.compute.manager [req-a4313dfc-1c23-4aae-a908-9175b76fd7a2 req-ef0ed8e4-c932-4947-a760-4a0d8fb2fd63 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Refreshing instance network info cache due to event network-changed-90a1736a-6da6-48ad-8cfb-3ba7e7538576. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  2 06:19:33 np0005542249 nova_compute[254900]: 2025-12-02 11:19:33.076 254904 DEBUG oslo_concurrency.lockutils [req-a4313dfc-1c23-4aae-a908-9175b76fd7a2 req-ef0ed8e4-c932-4947-a760-4a0d8fb2fd63 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-4981d357-863a-45cb-a3e6-6d8e85010252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  2 06:19:33 np0005542249 nova_compute[254900]: 2025-12-02 11:19:33.148 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:19:33 np0005542249 nova_compute[254900]: 2025-12-02 11:19:33.221 254904 DEBUG nova.network.neutron [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  2 06:19:33 np0005542249 nova_compute[254900]: 2025-12-02 11:19:33.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 06:19:33 np0005542249 nova_compute[254900]: 2025-12-02 11:19:33.382 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec  2 06:19:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:19:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/579309754' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:19:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:19:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/579309754' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:19:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1094: 321 pgs: 321 active+clean; 159 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 748 KiB/s rd, 4.0 MiB/s wr, 211 op/s
Dec  2 06:19:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:19:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:19:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:19:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:19:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:19:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:19:33 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev f2f01164-d8c6-4ef2-984d-f55c02a7fe8c does not exist
Dec  2 06:19:33 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 37e31e37-030b-4681-868f-940cd8676fd9 does not exist
Dec  2 06:19:33 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 4a27b959-ab8c-411e-8d8b-74e8ad5717a7 does not exist
Dec  2 06:19:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:19:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:19:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:19:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:19:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:19:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:19:33 np0005542249 nova_compute[254900]: 2025-12-02 11:19:33.968 254904 DEBUG nova.network.neutron [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Updating instance_info_cache with network_info: [{"id": "90a1736a-6da6-48ad-8cfb-3ba7e7538576", "address": "fa:16:3e:08:e5:40", "network": {"id": "72c6dc0c-99bf-460e-aaba-86258fa10534", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1502229343-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0587e0fe146043ba857b2d8002ab0a3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90a1736a-6d", "ovs_interfaceid": "90a1736a-6da6-48ad-8cfb-3ba7e7538576", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  2 06:19:33 np0005542249 nova_compute[254900]: 2025-12-02 11:19:33.985 254904 DEBUG oslo_concurrency.lockutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Releasing lock "refresh_cache-4981d357-863a-45cb-a3e6-6d8e85010252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  2 06:19:33 np0005542249 nova_compute[254900]: 2025-12-02 11:19:33.986 254904 DEBUG nova.compute.manager [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Instance network_info: |[{"id": "90a1736a-6da6-48ad-8cfb-3ba7e7538576", "address": "fa:16:3e:08:e5:40", "network": {"id": "72c6dc0c-99bf-460e-aaba-86258fa10534", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1502229343-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0587e0fe146043ba857b2d8002ab0a3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90a1736a-6d", "ovs_interfaceid": "90a1736a-6da6-48ad-8cfb-3ba7e7538576", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec  2 06:19:33 np0005542249 nova_compute[254900]: 2025-12-02 11:19:33.986 254904 DEBUG oslo_concurrency.lockutils [req-a4313dfc-1c23-4aae-a908-9175b76fd7a2 req-ef0ed8e4-c932-4947-a760-4a0d8fb2fd63 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-4981d357-863a-45cb-a3e6-6d8e85010252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  2 06:19:33 np0005542249 nova_compute[254900]: 2025-12-02 11:19:33.986 254904 DEBUG nova.network.neutron [req-a4313dfc-1c23-4aae-a908-9175b76fd7a2 req-ef0ed8e4-c932-4947-a760-4a0d8fb2fd63 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Refreshing network info cache for port 90a1736a-6da6-48ad-8cfb-3ba7e7538576 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  2 06:19:33 np0005542249 nova_compute[254900]: 2025-12-02 11:19:33.990 254904 DEBUG nova.virt.libvirt.driver [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Start _get_guest_xml network_info=[{"id": "90a1736a-6da6-48ad-8cfb-3ba7e7538576", "address": "fa:16:3e:08:e5:40", "network": {"id": "72c6dc0c-99bf-460e-aaba-86258fa10534", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1502229343-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0587e0fe146043ba857b2d8002ab0a3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90a1736a-6d", "ovs_interfaceid": "90a1736a-6da6-48ad-8cfb-3ba7e7538576", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'image_id': '5a40f66c-ab43-47dd-9880-e59f9fa2c60e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec  2 06:19:33 np0005542249 nova_compute[254900]: 2025-12-02 11:19:33.995 254904 WARNING nova.virt.libvirt.driver [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  2 06:19:34 np0005542249 nova_compute[254900]: 2025-12-02 11:19:34.000 254904 DEBUG nova.virt.libvirt.host [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec  2 06:19:34 np0005542249 nova_compute[254900]: 2025-12-02 11:19:34.001 254904 DEBUG nova.virt.libvirt.host [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec  2 06:19:34 np0005542249 nova_compute[254900]: 2025-12-02 11:19:34.005 254904 DEBUG nova.virt.libvirt.host [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:19:34 np0005542249 nova_compute[254900]: 2025-12-02 11:19:34.005 254904 DEBUG nova.virt.libvirt.host [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:19:34 np0005542249 nova_compute[254900]: 2025-12-02 11:19:34.006 254904 DEBUG nova.virt.libvirt.driver [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:19:34 np0005542249 nova_compute[254900]: 2025-12-02 11:19:34.006 254904 DEBUG nova.virt.hardware [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:19:34 np0005542249 nova_compute[254900]: 2025-12-02 11:19:34.006 254904 DEBUG nova.virt.hardware [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:19:34 np0005542249 nova_compute[254900]: 2025-12-02 11:19:34.007 254904 DEBUG nova.virt.hardware [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:19:34 np0005542249 nova_compute[254900]: 2025-12-02 11:19:34.007 254904 DEBUG nova.virt.hardware [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:19:34 np0005542249 nova_compute[254900]: 2025-12-02 11:19:34.007 254904 DEBUG nova.virt.hardware [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:19:34 np0005542249 nova_compute[254900]: 2025-12-02 11:19:34.007 254904 DEBUG nova.virt.hardware [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:19:34 np0005542249 nova_compute[254900]: 2025-12-02 11:19:34.008 254904 DEBUG nova.virt.hardware [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:19:34 np0005542249 nova_compute[254900]: 2025-12-02 11:19:34.008 254904 DEBUG nova.virt.hardware [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:19:34 np0005542249 nova_compute[254900]: 2025-12-02 11:19:34.008 254904 DEBUG nova.virt.hardware [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:19:34 np0005542249 nova_compute[254900]: 2025-12-02 11:19:34.008 254904 DEBUG nova.virt.hardware [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:19:34 np0005542249 nova_compute[254900]: 2025-12-02 11:19:34.009 254904 DEBUG nova.virt.hardware [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:19:34 np0005542249 nova_compute[254900]: 2025-12-02 11:19:34.012 254904 DEBUG oslo_concurrency.processutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:19:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Dec  2 06:19:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Dec  2 06:19:34 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Dec  2 06:19:34 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:19:34 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:19:34 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:19:34 np0005542249 nova_compute[254900]: 2025-12-02 11:19:34.399 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:19:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:19:34 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2161716935' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:19:34 np0005542249 nova_compute[254900]: 2025-12-02 11:19:34.513 254904 DEBUG oslo_concurrency.processutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:19:34 np0005542249 nova_compute[254900]: 2025-12-02 11:19:34.549 254904 DEBUG nova.storage.rbd_utils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] rbd image 4981d357-863a-45cb-a3e6-6d8e85010252_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:19:34 np0005542249 nova_compute[254900]: 2025-12-02 11:19:34.556 254904 DEBUG oslo_concurrency.processutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:19:34 np0005542249 podman[267538]: 2025-12-02 11:19:34.63197248 +0000 UTC m=+0.034628431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:19:34 np0005542249 podman[267538]: 2025-12-02 11:19:34.82111967 +0000 UTC m=+0.223775601 container create db5749787e200823d0c40cfd0be32f9b75800fcf9020f9abc3d22a656751047e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ritchie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  2 06:19:34 np0005542249 systemd[1]: Started libpod-conmon-db5749787e200823d0c40cfd0be32f9b75800fcf9020f9abc3d22a656751047e.scope.
Dec  2 06:19:34 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:19:34 np0005542249 podman[267538]: 2025-12-02 11:19:34.908227568 +0000 UTC m=+0.310883509 container init db5749787e200823d0c40cfd0be32f9b75800fcf9020f9abc3d22a656751047e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ritchie, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:19:34 np0005542249 podman[267538]: 2025-12-02 11:19:34.919026512 +0000 UTC m=+0.321682423 container start db5749787e200823d0c40cfd0be32f9b75800fcf9020f9abc3d22a656751047e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ritchie, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  2 06:19:34 np0005542249 podman[267538]: 2025-12-02 11:19:34.923382407 +0000 UTC m=+0.326038468 container attach db5749787e200823d0c40cfd0be32f9b75800fcf9020f9abc3d22a656751047e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ritchie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  2 06:19:34 np0005542249 sad_ritchie[267573]: 167 167
Dec  2 06:19:34 np0005542249 systemd[1]: libpod-db5749787e200823d0c40cfd0be32f9b75800fcf9020f9abc3d22a656751047e.scope: Deactivated successfully.
Dec  2 06:19:34 np0005542249 conmon[267573]: conmon db5749787e200823d0c4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-db5749787e200823d0c40cfd0be32f9b75800fcf9020f9abc3d22a656751047e.scope/container/memory.events
Dec  2 06:19:34 np0005542249 podman[267538]: 2025-12-02 11:19:34.928437949 +0000 UTC m=+0.331093860 container died db5749787e200823d0c40cfd0be32f9b75800fcf9020f9abc3d22a656751047e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  2 06:19:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:19:34 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2683660981' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:19:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:19:34 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2683660981' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:19:34 np0005542249 systemd[1]: var-lib-containers-storage-overlay-ace43a0aed56b46e3dd3ca8c8ed3aac094ef39e9285b1b410798ef6c9599be6e-merged.mount: Deactivated successfully.
Dec  2 06:19:34 np0005542249 podman[267538]: 2025-12-02 11:19:34.975169758 +0000 UTC m=+0.377825709 container remove db5749787e200823d0c40cfd0be32f9b75800fcf9020f9abc3d22a656751047e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ritchie, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:19:34 np0005542249 systemd[1]: libpod-conmon-db5749787e200823d0c40cfd0be32f9b75800fcf9020f9abc3d22a656751047e.scope: Deactivated successfully.
Dec  2 06:19:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:19:35 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/145588751' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.030 254904 DEBUG oslo_concurrency.processutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.033 254904 DEBUG nova.virt.libvirt.vif [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:19:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-409056215',display_name='tempest-VolumesActionsTest-instance-409056215',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-409056215',id=5,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0587e0fe146043ba857b2d8002ab0a3b',ramdisk_id='',reservation_id='r-jjv9t5ua',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1586928656',owner_user_name='tempest-VolumesActionsTest-1586928656
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:19:31Z,user_data=None,user_id='505334fe5eb749e19ba727c8b2d04594',uuid=4981d357-863a-45cb-a3e6-6d8e85010252,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "90a1736a-6da6-48ad-8cfb-3ba7e7538576", "address": "fa:16:3e:08:e5:40", "network": {"id": "72c6dc0c-99bf-460e-aaba-86258fa10534", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1502229343-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0587e0fe146043ba857b2d8002ab0a3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90a1736a-6d", "ovs_interfaceid": "90a1736a-6da6-48ad-8cfb-3ba7e7538576", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.034 254904 DEBUG nova.network.os_vif_util [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Converting VIF {"id": "90a1736a-6da6-48ad-8cfb-3ba7e7538576", "address": "fa:16:3e:08:e5:40", "network": {"id": "72c6dc0c-99bf-460e-aaba-86258fa10534", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1502229343-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0587e0fe146043ba857b2d8002ab0a3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90a1736a-6d", "ovs_interfaceid": "90a1736a-6da6-48ad-8cfb-3ba7e7538576", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.035 254904 DEBUG nova.network.os_vif_util [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:e5:40,bridge_name='br-int',has_traffic_filtering=True,id=90a1736a-6da6-48ad-8cfb-3ba7e7538576,network=Network(72c6dc0c-99bf-460e-aaba-86258fa10534),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90a1736a-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.038 254904 DEBUG nova.objects.instance [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 4981d357-863a-45cb-a3e6-6d8e85010252 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.056 254904 DEBUG nova.virt.libvirt.driver [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:19:35 np0005542249 nova_compute[254900]:  <uuid>4981d357-863a-45cb-a3e6-6d8e85010252</uuid>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:  <name>instance-00000005</name>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <nova:name>tempest-VolumesActionsTest-instance-409056215</nova:name>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:19:33</nova:creationTime>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:19:35 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:        <nova:user uuid="505334fe5eb749e19ba727c8b2d04594">tempest-VolumesActionsTest-1586928656-project-member</nova:user>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:        <nova:project uuid="0587e0fe146043ba857b2d8002ab0a3b">tempest-VolumesActionsTest-1586928656</nova:project>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <nova:root type="image" uuid="5a40f66c-ab43-47dd-9880-e59f9fa2c60e"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:        <nova:port uuid="90a1736a-6da6-48ad-8cfb-3ba7e7538576">
Dec  2 06:19:35 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <entry name="serial">4981d357-863a-45cb-a3e6-6d8e85010252</entry>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <entry name="uuid">4981d357-863a-45cb-a3e6-6d8e85010252</entry>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/4981d357-863a-45cb-a3e6-6d8e85010252_disk">
Dec  2 06:19:35 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:19:35 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/4981d357-863a-45cb-a3e6-6d8e85010252_disk.config">
Dec  2 06:19:35 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:19:35 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:08:e5:40"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <target dev="tap90a1736a-6d"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/4981d357-863a-45cb-a3e6-6d8e85010252/console.log" append="off"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:19:35 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:19:35 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:19:35 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:19:35 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.057 254904 DEBUG nova.compute.manager [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Preparing to wait for external event network-vif-plugged-90a1736a-6da6-48ad-8cfb-3ba7e7538576 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.058 254904 DEBUG oslo_concurrency.lockutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Acquiring lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.058 254904 DEBUG oslo_concurrency.lockutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.058 254904 DEBUG oslo_concurrency.lockutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.059 254904 DEBUG nova.virt.libvirt.vif [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:19:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-409056215',display_name='tempest-VolumesActionsTest-instance-409056215',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-409056215',id=5,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0587e0fe146043ba857b2d8002ab0a3b',ramdisk_id='',reservation_id='r-jjv9t5ua',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1586928656',owner_user_name='tempest-VolumesActionsTest-
1586928656-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:19:31Z,user_data=None,user_id='505334fe5eb749e19ba727c8b2d04594',uuid=4981d357-863a-45cb-a3e6-6d8e85010252,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "90a1736a-6da6-48ad-8cfb-3ba7e7538576", "address": "fa:16:3e:08:e5:40", "network": {"id": "72c6dc0c-99bf-460e-aaba-86258fa10534", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1502229343-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0587e0fe146043ba857b2d8002ab0a3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90a1736a-6d", "ovs_interfaceid": "90a1736a-6da6-48ad-8cfb-3ba7e7538576", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.060 254904 DEBUG nova.network.os_vif_util [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Converting VIF {"id": "90a1736a-6da6-48ad-8cfb-3ba7e7538576", "address": "fa:16:3e:08:e5:40", "network": {"id": "72c6dc0c-99bf-460e-aaba-86258fa10534", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1502229343-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0587e0fe146043ba857b2d8002ab0a3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90a1736a-6d", "ovs_interfaceid": "90a1736a-6da6-48ad-8cfb-3ba7e7538576", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.060 254904 DEBUG nova.network.os_vif_util [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:e5:40,bridge_name='br-int',has_traffic_filtering=True,id=90a1736a-6da6-48ad-8cfb-3ba7e7538576,network=Network(72c6dc0c-99bf-460e-aaba-86258fa10534),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90a1736a-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.060 254904 DEBUG os_vif [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:e5:40,bridge_name='br-int',has_traffic_filtering=True,id=90a1736a-6da6-48ad-8cfb-3ba7e7538576,network=Network(72c6dc0c-99bf-460e-aaba-86258fa10534),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90a1736a-6d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.061 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.062 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.062 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.065 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.065 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap90a1736a-6d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.066 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap90a1736a-6d, col_values=(('external_ids', {'iface-id': '90a1736a-6da6-48ad-8cfb-3ba7e7538576', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:08:e5:40', 'vm-uuid': '4981d357-863a-45cb-a3e6-6d8e85010252'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.068 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:35 np0005542249 NetworkManager[48987]: <info>  [1764674375.0699] manager: (tap90a1736a-6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.070 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.076 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.077 254904 INFO os_vif [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:e5:40,bridge_name='br-int',has_traffic_filtering=True,id=90a1736a-6da6-48ad-8cfb-3ba7e7538576,network=Network(72c6dc0c-99bf-460e-aaba-86258fa10534),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90a1736a-6d')#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.090 254904 DEBUG nova.network.neutron [req-a4313dfc-1c23-4aae-a908-9175b76fd7a2 req-ef0ed8e4-c932-4947-a760-4a0d8fb2fd63 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Updated VIF entry in instance network info cache for port 90a1736a-6da6-48ad-8cfb-3ba7e7538576. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.091 254904 DEBUG nova.network.neutron [req-a4313dfc-1c23-4aae-a908-9175b76fd7a2 req-ef0ed8e4-c932-4947-a760-4a0d8fb2fd63 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Updating instance_info_cache with network_info: [{"id": "90a1736a-6da6-48ad-8cfb-3ba7e7538576", "address": "fa:16:3e:08:e5:40", "network": {"id": "72c6dc0c-99bf-460e-aaba-86258fa10534", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1502229343-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0587e0fe146043ba857b2d8002ab0a3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90a1736a-6d", "ovs_interfaceid": "90a1736a-6da6-48ad-8cfb-3ba7e7538576", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.105 254904 DEBUG oslo_concurrency.lockutils [req-a4313dfc-1c23-4aae-a908-9175b76fd7a2 req-ef0ed8e4-c932-4947-a760-4a0d8fb2fd63 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-4981d357-863a-45cb-a3e6-6d8e85010252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.144 254904 DEBUG nova.virt.libvirt.driver [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.144 254904 DEBUG nova.virt.libvirt.driver [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.144 254904 DEBUG nova.virt.libvirt.driver [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] No VIF found with MAC fa:16:3e:08:e5:40, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.145 254904 INFO nova.virt.libvirt.driver [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Using config drive#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.168 254904 DEBUG nova.storage.rbd_utils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] rbd image 4981d357-863a-45cb-a3e6-6d8e85010252_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:19:35 np0005542249 podman[267602]: 2025-12-02 11:19:35.184089647 +0000 UTC m=+0.057787389 container create 03c8bd062347b6cbe8c19c48028cf2aa5b0739569cd158e285cc00feb862b2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_fermat, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:19:35 np0005542249 systemd[1]: Started libpod-conmon-03c8bd062347b6cbe8c19c48028cf2aa5b0739569cd158e285cc00feb862b2b1.scope.
Dec  2 06:19:35 np0005542249 podman[267602]: 2025-12-02 11:19:35.156292076 +0000 UTC m=+0.029989868 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:19:35 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:19:35 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ec9413f03e768dc964f66721899de9b45e19be7106ce967e5fd956162796f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:19:35 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ec9413f03e768dc964f66721899de9b45e19be7106ce967e5fd956162796f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:19:35 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ec9413f03e768dc964f66721899de9b45e19be7106ce967e5fd956162796f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:19:35 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ec9413f03e768dc964f66721899de9b45e19be7106ce967e5fd956162796f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:19:35 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ec9413f03e768dc964f66721899de9b45e19be7106ce967e5fd956162796f5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:19:35 np0005542249 podman[267602]: 2025-12-02 11:19:35.277411919 +0000 UTC m=+0.151109651 container init 03c8bd062347b6cbe8c19c48028cf2aa5b0739569cd158e285cc00feb862b2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  2 06:19:35 np0005542249 podman[267602]: 2025-12-02 11:19:35.294804426 +0000 UTC m=+0.168502208 container start 03c8bd062347b6cbe8c19c48028cf2aa5b0739569cd158e285cc00feb862b2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  2 06:19:35 np0005542249 podman[267602]: 2025-12-02 11:19:35.298262806 +0000 UTC m=+0.171960588 container attach 03c8bd062347b6cbe8c19c48028cf2aa5b0739569cd158e285cc00feb862b2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.377 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.401 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.401 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.401 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.418 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.419 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 06:19:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1096: 321 pgs: 321 active+clean; 197 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 903 KiB/s rd, 9.1 MiB/s wr, 182 op/s
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.909 254904 INFO nova.virt.libvirt.driver [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Creating config drive at /var/lib/nova/instances/4981d357-863a-45cb-a3e6-6d8e85010252/disk.config#033[00m
Dec  2 06:19:35 np0005542249 nova_compute[254900]: 2025-12-02 11:19:35.916 254904 DEBUG oslo_concurrency.processutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4981d357-863a-45cb-a3e6-6d8e85010252/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpizub_4ly execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:19:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:19:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:19:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:19:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:19:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Dec  2 06:19:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:19:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0006925663988734779 of space, bias 1.0, pg target 0.20776991966204336 quantized to 32 (current 32)
Dec  2 06:19:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:19:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0001926162539884092 of space, bias 1.0, pg target 0.057784876196522755 quantized to 32 (current 32)
Dec  2 06:19:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:19:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  2 06:19:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:19:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:19:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:19:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:19:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:19:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:19:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:19:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:19:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:19:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:19:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:19:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:19:36 np0005542249 nova_compute[254900]: 2025-12-02 11:19:36.056 254904 DEBUG oslo_concurrency.processutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4981d357-863a-45cb-a3e6-6d8e85010252/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpizub_4ly" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:19:36 np0005542249 nova_compute[254900]: 2025-12-02 11:19:36.090 254904 DEBUG nova.storage.rbd_utils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] rbd image 4981d357-863a-45cb-a3e6-6d8e85010252_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:19:36 np0005542249 nova_compute[254900]: 2025-12-02 11:19:36.095 254904 DEBUG oslo_concurrency.processutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4981d357-863a-45cb-a3e6-6d8e85010252/disk.config 4981d357-863a-45cb-a3e6-6d8e85010252_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:19:36 np0005542249 nova_compute[254900]: 2025-12-02 11:19:36.248 254904 DEBUG oslo_concurrency.processutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4981d357-863a-45cb-a3e6-6d8e85010252/disk.config 4981d357-863a-45cb-a3e6-6d8e85010252_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:19:36 np0005542249 nova_compute[254900]: 2025-12-02 11:19:36.249 254904 INFO nova.virt.libvirt.driver [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Deleting local config drive /var/lib/nova/instances/4981d357-863a-45cb-a3e6-6d8e85010252/disk.config because it was imported into RBD.#033[00m
Dec  2 06:19:36 np0005542249 kernel: tap90a1736a-6d: entered promiscuous mode
Dec  2 06:19:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:19:36 np0005542249 ovn_controller[153849]: 2025-12-02T11:19:36Z|00062|binding|INFO|Claiming lport 90a1736a-6da6-48ad-8cfb-3ba7e7538576 for this chassis.
Dec  2 06:19:36 np0005542249 ovn_controller[153849]: 2025-12-02T11:19:36Z|00063|binding|INFO|90a1736a-6da6-48ad-8cfb-3ba7e7538576: Claiming fa:16:3e:08:e5:40 10.100.0.4
Dec  2 06:19:36 np0005542249 NetworkManager[48987]: <info>  [1764674376.3277] manager: (tap90a1736a-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Dec  2 06:19:36 np0005542249 nova_compute[254900]: 2025-12-02 11:19:36.326 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.334 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:e5:40 10.100.0.4'], port_security=['fa:16:3e:08:e5:40 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4981d357-863a-45cb-a3e6-6d8e85010252', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72c6dc0c-99bf-460e-aaba-86258fa10534', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0587e0fe146043ba857b2d8002ab0a3b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '277ca5fd-9d7a-41af-9803-178b99c967e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2ed1272d-4b2e-4a1e-a64d-f275687e74f1, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=90a1736a-6da6-48ad-8cfb-3ba7e7538576) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.336 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 90a1736a-6da6-48ad-8cfb-3ba7e7538576 in datapath 72c6dc0c-99bf-460e-aaba-86258fa10534 bound to our chassis#033[00m
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.339 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 72c6dc0c-99bf-460e-aaba-86258fa10534#033[00m
Dec  2 06:19:36 np0005542249 systemd-udevd[267715]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.358 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[4d163e6c-30d6-4ff9-a7f8-7e4e4f955256]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.360 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap72c6dc0c-91 in ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.363 262398 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap72c6dc0c-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.363 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[e92b3a97-94b1-4e38-bea1-613898e61d41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:36 np0005542249 ovn_controller[153849]: 2025-12-02T11:19:36Z|00064|binding|INFO|Setting lport 90a1736a-6da6-48ad-8cfb-3ba7e7538576 ovn-installed in OVS
Dec  2 06:19:36 np0005542249 ovn_controller[153849]: 2025-12-02T11:19:36Z|00065|binding|INFO|Setting lport 90a1736a-6da6-48ad-8cfb-3ba7e7538576 up in Southbound
Dec  2 06:19:36 np0005542249 NetworkManager[48987]: <info>  [1764674376.3725] device (tap90a1736a-6d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:19:36 np0005542249 nova_compute[254900]: 2025-12-02 11:19:36.367 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:36 np0005542249 nova_compute[254900]: 2025-12-02 11:19:36.370 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:36 np0005542249 NetworkManager[48987]: <info>  [1764674376.3737] device (tap90a1736a-6d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.364 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[6cd39502-5c26-49d5-b6e5-664c79d70947]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:36 np0005542249 systemd-machined[216222]: New machine qemu-5-instance-00000005.
Dec  2 06:19:36 np0005542249 nova_compute[254900]: 2025-12-02 11:19:36.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:19:36 np0005542249 nova_compute[254900]: 2025-12-02 11:19:36.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:19:36 np0005542249 nova_compute[254900]: 2025-12-02 11:19:36.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:19:36 np0005542249 nova_compute[254900]: 2025-12-02 11:19:36.383 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:19:36 np0005542249 nova_compute[254900]: 2025-12-02 11:19:36.383 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.391 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf6a5a3-8fb7-4c16-9d0e-35c0dbc488e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:36 np0005542249 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.408 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[c0b2f3f1-8940-4c5f-9f33-e5a498a9c8d6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:19:36 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/736570673' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.450 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[178e6eeb-7351-41bf-943b-19d1f985de57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:36 np0005542249 NetworkManager[48987]: <info>  [1764674376.4587] manager: (tap72c6dc0c-90): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.457 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[9b14578d-979f-49fa-b82e-13a9de262e46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:36 np0005542249 cool_fermat[267637]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:19:36 np0005542249 cool_fermat[267637]: --> relative data size: 1.0
Dec  2 06:19:36 np0005542249 cool_fermat[267637]: --> All data devices are unavailable
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.500 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[148a8106-d229-4134-adb9-a1d0c02853cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.504 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[5497161e-7d3a-4d54-aba4-e2382a552926]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:36 np0005542249 systemd[1]: libpod-03c8bd062347b6cbe8c19c48028cf2aa5b0739569cd158e285cc00feb862b2b1.scope: Deactivated successfully.
Dec  2 06:19:36 np0005542249 systemd[1]: libpod-03c8bd062347b6cbe8c19c48028cf2aa5b0739569cd158e285cc00feb862b2b1.scope: Consumed 1.127s CPU time.
Dec  2 06:19:36 np0005542249 NetworkManager[48987]: <info>  [1764674376.5358] device (tap72c6dc0c-90): carrier: link connected
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.542 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[78a11bb5-bde4-4141-94de-34a3586b73fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.562 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[11f9fbf0-1080-439a-9191-5630706cac0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72c6dc0c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:a4:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456935, 'reachable_time': 43089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267759, 'error': None, 'target': 'ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:36 np0005542249 podman[267749]: 2025-12-02 11:19:36.575598469 +0000 UTC m=+0.041210144 container died 03c8bd062347b6cbe8c19c48028cf2aa5b0739569cd158e285cc00feb862b2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.579 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[b043a9d4-b92d-4441-b011-363bcb4f4687]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:a4e7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456935, 'tstamp': 456935}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267763, 'error': None, 'target': 'ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:36 np0005542249 systemd[1]: var-lib-containers-storage-overlay-b0ec9413f03e768dc964f66721899de9b45e19be7106ce967e5fd956162796f5-merged.mount: Deactivated successfully.
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.605 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[1218d90a-0572-4997-a475-a0dccc6fdf28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72c6dc0c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:a4:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456935, 'reachable_time': 43089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267768, 'error': None, 'target': 'ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:36 np0005542249 podman[267749]: 2025-12-02 11:19:36.629172967 +0000 UTC m=+0.094784642 container remove 03c8bd062347b6cbe8c19c48028cf2aa5b0739569cd158e285cc00feb862b2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_fermat, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  2 06:19:36 np0005542249 systemd[1]: libpod-conmon-03c8bd062347b6cbe8c19c48028cf2aa5b0739569cd158e285cc00feb862b2b1.scope: Deactivated successfully.
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.662 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[e3fc9130-9c10-46a1-99a2-f5917edb1db9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.774 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[0b6a8d90-69cd-4ab3-b999-ff840c835fe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.781 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72c6dc0c-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.782 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.784 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72c6dc0c-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:19:36 np0005542249 NetworkManager[48987]: <info>  [1764674376.7898] manager: (tap72c6dc0c-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Dec  2 06:19:36 np0005542249 nova_compute[254900]: 2025-12-02 11:19:36.788 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:36 np0005542249 kernel: tap72c6dc0c-90: entered promiscuous mode
Dec  2 06:19:36 np0005542249 nova_compute[254900]: 2025-12-02 11:19:36.794 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.806 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap72c6dc0c-90, col_values=(('external_ids', {'iface-id': '534ec691-ef20-495e-9e85-e96ce635d23b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:19:36 np0005542249 nova_compute[254900]: 2025-12-02 11:19:36.809 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:36 np0005542249 ovn_controller[153849]: 2025-12-02T11:19:36Z|00066|binding|INFO|Releasing lport 534ec691-ef20-495e-9e85-e96ce635d23b from this chassis (sb_readonly=0)
Dec  2 06:19:36 np0005542249 nova_compute[254900]: 2025-12-02 11:19:36.811 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.812 163757 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/72c6dc0c-99bf-460e-aaba-86258fa10534.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/72c6dc0c-99bf-460e-aaba-86258fa10534.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.813 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[5de018de-3db4-4036-9910-ba0c86d588e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.815 163757 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: global
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]:    log         /dev/log local0 debug
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]:    log-tag     haproxy-metadata-proxy-72c6dc0c-99bf-460e-aaba-86258fa10534
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]:    user        root
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]:    group       root
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]:    maxconn     1024
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]:    pidfile     /var/lib/neutron/external/pids/72c6dc0c-99bf-460e-aaba-86258fa10534.pid.haproxy
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]:    daemon
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: defaults
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]:    log global
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]:    mode http
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]:    option httplog
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]:    option dontlognull
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]:    option http-server-close
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]:    option forwardfor
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]:    retries                 3
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]:    timeout http-request    30s
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]:    timeout connect         30s
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]:    timeout client          32s
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]:    timeout server          32s
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]:    timeout http-keep-alive 30s
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: listen listener
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]:    bind 169.254.169.254:80
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]:    http-request add-header X-OVN-Network-ID 72c6dc0c-99bf-460e-aaba-86258fa10534
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 06:19:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:36.816 163757 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534', 'env', 'PROCESS_TAG=haproxy-72c6dc0c-99bf-460e-aaba-86258fa10534', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/72c6dc0c-99bf-460e-aaba-86258fa10534.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 06:19:36 np0005542249 nova_compute[254900]: 2025-12-02 11:19:36.835 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.237 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674377.2359893, 4981d357-863a-45cb-a3e6-6d8e85010252 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.237 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] VM Started (Lifecycle Event)#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.255 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.259 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674377.236741, 4981d357-863a-45cb-a3e6-6d8e85010252 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.260 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] VM Paused (Lifecycle Event)#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.273 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.276 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.288 254904 DEBUG nova.compute.manager [req-9845beca-1bfd-41b7-bbba-a59b1f11bfdf req-523db7da-00a9-4066-99f7-b7886e345745 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Received event network-vif-plugged-90a1736a-6da6-48ad-8cfb-3ba7e7538576 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.289 254904 DEBUG oslo_concurrency.lockutils [req-9845beca-1bfd-41b7-bbba-a59b1f11bfdf req-523db7da-00a9-4066-99f7-b7886e345745 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.289 254904 DEBUG oslo_concurrency.lockutils [req-9845beca-1bfd-41b7-bbba-a59b1f11bfdf req-523db7da-00a9-4066-99f7-b7886e345745 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.289 254904 DEBUG oslo_concurrency.lockutils [req-9845beca-1bfd-41b7-bbba-a59b1f11bfdf req-523db7da-00a9-4066-99f7-b7886e345745 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.289 254904 DEBUG nova.compute.manager [req-9845beca-1bfd-41b7-bbba-a59b1f11bfdf req-523db7da-00a9-4066-99f7-b7886e345745 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Processing event network-vif-plugged-90a1736a-6da6-48ad-8cfb-3ba7e7538576 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.290 254904 DEBUG nova.compute.manager [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.291 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.294 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674377.2940512, 4981d357-863a-45cb-a3e6-6d8e85010252 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.294 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] VM Resumed (Lifecycle Event)#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.296 254904 DEBUG nova.virt.libvirt.driver [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.299 254904 INFO nova.virt.libvirt.driver [-] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Instance spawned successfully.#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.300 254904 DEBUG nova.virt.libvirt.driver [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.317 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:19:37 np0005542249 podman[267953]: 2025-12-02 11:19:37.318827788 +0000 UTC m=+0.076705897 container create 22a92606f29f56e23e686683fde78d44b3cefd8c9494b65e69ca005865063e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.330 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.339 254904 DEBUG nova.virt.libvirt.driver [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.340 254904 DEBUG nova.virt.libvirt.driver [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.341 254904 DEBUG nova.virt.libvirt.driver [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.342 254904 DEBUG nova.virt.libvirt.driver [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.344 254904 DEBUG nova.virt.libvirt.driver [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.345 254904 DEBUG nova.virt.libvirt.driver [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:19:37 np0005542249 systemd[1]: Started libpod-conmon-22a92606f29f56e23e686683fde78d44b3cefd8c9494b65e69ca005865063e69.scope.
Dec  2 06:19:37 np0005542249 podman[267953]: 2025-12-02 11:19:37.269900262 +0000 UTC m=+0.027778351 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:19:37 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:19:37 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c2270f6af86de65570f5c79e4b7cded91a515d42abe618c175a590432132457/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 06:19:37 np0005542249 podman[267953]: 2025-12-02 11:19:37.425292965 +0000 UTC m=+0.183171054 container init 22a92606f29f56e23e686683fde78d44b3cefd8c9494b65e69ca005865063e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  2 06:19:37 np0005542249 podman[267953]: 2025-12-02 11:19:37.436829518 +0000 UTC m=+0.194707587 container start 22a92606f29f56e23e686683fde78d44b3cefd8c9494b65e69ca005865063e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  2 06:19:37 np0005542249 neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534[267993]: [NOTICE]   (267997) : New worker (268000) forked
Dec  2 06:19:37 np0005542249 neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534[267993]: [NOTICE]   (267997) : Loading success.
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.487 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.508 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.509 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.509 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.510 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.511 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.540 254904 INFO nova.compute.manager [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Took 6.30 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 06:19:37 np0005542249 podman[267999]: 2025-12-02 11:19:37.54081485 +0000 UTC m=+0.050935199 container create e25c2992dc0e2feb76c504885df88d2c05f6f95480e2d8611370c9c804641aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.541 254904 DEBUG nova.compute.manager [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:19:37 np0005542249 systemd[1]: Started libpod-conmon-e25c2992dc0e2feb76c504885df88d2c05f6f95480e2d8611370c9c804641aa9.scope.
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.611 254904 INFO nova.compute.manager [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Took 7.27 seconds to build instance.#033[00m
Dec  2 06:19:37 np0005542249 podman[267999]: 2025-12-02 11:19:37.518505994 +0000 UTC m=+0.028626343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.627 254904 DEBUG oslo_concurrency.lockutils [None req-0191d833-11b2-47c2-ba83-7863fa774f41 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "4981d357-863a-45cb-a3e6-6d8e85010252" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.351s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:37 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:19:37 np0005542249 podman[267999]: 2025-12-02 11:19:37.660731451 +0000 UTC m=+0.170851880 container init e25c2992dc0e2feb76c504885df88d2c05f6f95480e2d8611370c9c804641aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  2 06:19:37 np0005542249 podman[267999]: 2025-12-02 11:19:37.671319699 +0000 UTC m=+0.181440048 container start e25c2992dc0e2feb76c504885df88d2c05f6f95480e2d8611370c9c804641aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  2 06:19:37 np0005542249 podman[267999]: 2025-12-02 11:19:37.675103639 +0000 UTC m=+0.185224068 container attach e25c2992dc0e2feb76c504885df88d2c05f6f95480e2d8611370c9c804641aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_archimedes, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  2 06:19:37 np0005542249 vibrant_archimedes[268025]: 167 167
Dec  2 06:19:37 np0005542249 systemd[1]: libpod-e25c2992dc0e2feb76c504885df88d2c05f6f95480e2d8611370c9c804641aa9.scope: Deactivated successfully.
Dec  2 06:19:37 np0005542249 podman[267999]: 2025-12-02 11:19:37.68352344 +0000 UTC m=+0.193643779 container died e25c2992dc0e2feb76c504885df88d2c05f6f95480e2d8611370c9c804641aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_archimedes, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  2 06:19:37 np0005542249 systemd[1]: var-lib-containers-storage-overlay-40ad98b3c4274310265a74d838d763e2118d374b359529b9a75525f4811e71f7-merged.mount: Deactivated successfully.
Dec  2 06:19:37 np0005542249 podman[267999]: 2025-12-02 11:19:37.727597828 +0000 UTC m=+0.237718167 container remove e25c2992dc0e2feb76c504885df88d2c05f6f95480e2d8611370c9c804641aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_archimedes, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:19:37 np0005542249 systemd[1]: libpod-conmon-e25c2992dc0e2feb76c504885df88d2c05f6f95480e2d8611370c9c804641aa9.scope: Deactivated successfully.
Dec  2 06:19:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1097: 321 pgs: 321 active+clean; 253 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 11 MiB/s wr, 272 op/s
Dec  2 06:19:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:19:37 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4262447409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.968 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:19:37 np0005542249 podman[268066]: 2025-12-02 11:19:37.972231166 +0000 UTC m=+0.061200810 container create 221c62cd7499320f80d941958575802de21c22dfa641d84ded9b8e1888bb0d00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lichterman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:19:37 np0005542249 nova_compute[254900]: 2025-12-02 11:19:37.992 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:38 np0005542249 systemd[1]: Started libpod-conmon-221c62cd7499320f80d941958575802de21c22dfa641d84ded9b8e1888bb0d00.scope.
Dec  2 06:19:38 np0005542249 podman[268066]: 2025-12-02 11:19:37.946791797 +0000 UTC m=+0.035761471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:19:38 np0005542249 nova_compute[254900]: 2025-12-02 11:19:38.045 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:19:38 np0005542249 nova_compute[254900]: 2025-12-02 11:19:38.045 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:19:38 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:19:38 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/101fda2f279f7387b128d345469a596ea886492a1dd2539397b0483817957559/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:19:38 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/101fda2f279f7387b128d345469a596ea886492a1dd2539397b0483817957559/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:19:38 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/101fda2f279f7387b128d345469a596ea886492a1dd2539397b0483817957559/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:19:38 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/101fda2f279f7387b128d345469a596ea886492a1dd2539397b0483817957559/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:19:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Dec  2 06:19:38 np0005542249 podman[268066]: 2025-12-02 11:19:38.083870129 +0000 UTC m=+0.172839803 container init 221c62cd7499320f80d941958575802de21c22dfa641d84ded9b8e1888bb0d00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lichterman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  2 06:19:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Dec  2 06:19:38 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Dec  2 06:19:38 np0005542249 podman[268066]: 2025-12-02 11:19:38.093447791 +0000 UTC m=+0.182417475 container start 221c62cd7499320f80d941958575802de21c22dfa641d84ded9b8e1888bb0d00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  2 06:19:38 np0005542249 podman[268066]: 2025-12-02 11:19:38.099069909 +0000 UTC m=+0.188039603 container attach 221c62cd7499320f80d941958575802de21c22dfa641d84ded9b8e1888bb0d00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lichterman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  2 06:19:38 np0005542249 nova_compute[254900]: 2025-12-02 11:19:38.114 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764674363.1131566, b65d7234-010a-4dd0-b06c-58dc26cfb5a6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:19:38 np0005542249 nova_compute[254900]: 2025-12-02 11:19:38.115 254904 INFO nova.compute.manager [-] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] VM Stopped (Lifecycle Event)#033[00m
Dec  2 06:19:38 np0005542249 nova_compute[254900]: 2025-12-02 11:19:38.135 254904 DEBUG nova.compute.manager [None req-abf839a7-e550-4c63-8aeb-5f3883004c4d - - - - - -] [instance: b65d7234-010a-4dd0-b06c-58dc26cfb5a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:19:38 np0005542249 podman[268088]: 2025-12-02 11:19:38.232597217 +0000 UTC m=+0.147310041 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:19:38 np0005542249 nova_compute[254900]: 2025-12-02 11:19:38.289 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:19:38 np0005542249 nova_compute[254900]: 2025-12-02 11:19:38.290 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4567MB free_disk=59.967525482177734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:19:38 np0005542249 nova_compute[254900]: 2025-12-02 11:19:38.291 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:38 np0005542249 nova_compute[254900]: 2025-12-02 11:19:38.291 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:38 np0005542249 nova_compute[254900]: 2025-12-02 11:19:38.518 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Instance 4981d357-863a-45cb-a3e6-6d8e85010252 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 06:19:38 np0005542249 nova_compute[254900]: 2025-12-02 11:19:38.519 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:19:38 np0005542249 nova_compute[254900]: 2025-12-02 11:19:38.519 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:19:38 np0005542249 nova_compute[254900]: 2025-12-02 11:19:38.712 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]: {
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:    "0": [
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:        {
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "devices": [
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "/dev/loop3"
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            ],
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "lv_name": "ceph_lv0",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "lv_size": "21470642176",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "name": "ceph_lv0",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "tags": {
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.cluster_name": "ceph",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.crush_device_class": "",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.encrypted": "0",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.osd_id": "0",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.type": "block",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.vdo": "0"
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            },
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "type": "block",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "vg_name": "ceph_vg0"
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:        }
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:    ],
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:    "1": [
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:        {
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "devices": [
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "/dev/loop4"
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            ],
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "lv_name": "ceph_lv1",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "lv_size": "21470642176",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "name": "ceph_lv1",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "tags": {
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.cluster_name": "ceph",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.crush_device_class": "",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.encrypted": "0",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.osd_id": "1",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.type": "block",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.vdo": "0"
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            },
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "type": "block",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "vg_name": "ceph_vg1"
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:        }
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:    ],
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:    "2": [
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:        {
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "devices": [
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "/dev/loop5"
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            ],
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "lv_name": "ceph_lv2",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "lv_size": "21470642176",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "name": "ceph_lv2",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "tags": {
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.cluster_name": "ceph",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.crush_device_class": "",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.encrypted": "0",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.osd_id": "2",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.type": "block",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:                "ceph.vdo": "0"
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            },
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "type": "block",
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:            "vg_name": "ceph_vg2"
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:        }
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]:    ]
Dec  2 06:19:38 np0005542249 dreamy_lichterman[268085]: }
Dec  2 06:19:38 np0005542249 systemd[1]: libpod-221c62cd7499320f80d941958575802de21c22dfa641d84ded9b8e1888bb0d00.scope: Deactivated successfully.
Dec  2 06:19:38 np0005542249 podman[268066]: 2025-12-02 11:19:38.981969947 +0000 UTC m=+1.070939621 container died 221c62cd7499320f80d941958575802de21c22dfa641d84ded9b8e1888bb0d00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  2 06:19:39 np0005542249 systemd[1]: var-lib-containers-storage-overlay-101fda2f279f7387b128d345469a596ea886492a1dd2539397b0483817957559-merged.mount: Deactivated successfully.
Dec  2 06:19:39 np0005542249 podman[268066]: 2025-12-02 11:19:39.045329682 +0000 UTC m=+1.134299336 container remove 221c62cd7499320f80d941958575802de21c22dfa641d84ded9b8e1888bb0d00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lichterman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:19:39 np0005542249 systemd[1]: libpod-conmon-221c62cd7499320f80d941958575802de21c22dfa641d84ded9b8e1888bb0d00.scope: Deactivated successfully.
Dec  2 06:19:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:19:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1458440827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:19:39 np0005542249 nova_compute[254900]: 2025-12-02 11:19:39.264 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:19:39 np0005542249 nova_compute[254900]: 2025-12-02 11:19:39.271 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:19:39 np0005542249 nova_compute[254900]: 2025-12-02 11:19:39.293 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:19:39 np0005542249 nova_compute[254900]: 2025-12-02 11:19:39.317 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 06:19:39 np0005542249 nova_compute[254900]: 2025-12-02 11:19:39.317 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:39 np0005542249 nova_compute[254900]: 2025-12-02 11:19:39.570 254904 DEBUG nova.compute.manager [req-e06b63b9-7949-455a-88f0-00e70e31fa3a req-c56fc596-7b43-4468-a61b-e3e9feb7efe3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Received event network-vif-plugged-90a1736a-6da6-48ad-8cfb-3ba7e7538576 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:19:39 np0005542249 nova_compute[254900]: 2025-12-02 11:19:39.571 254904 DEBUG oslo_concurrency.lockutils [req-e06b63b9-7949-455a-88f0-00e70e31fa3a req-c56fc596-7b43-4468-a61b-e3e9feb7efe3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:39 np0005542249 nova_compute[254900]: 2025-12-02 11:19:39.571 254904 DEBUG oslo_concurrency.lockutils [req-e06b63b9-7949-455a-88f0-00e70e31fa3a req-c56fc596-7b43-4468-a61b-e3e9feb7efe3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:39 np0005542249 nova_compute[254900]: 2025-12-02 11:19:39.571 254904 DEBUG oslo_concurrency.lockutils [req-e06b63b9-7949-455a-88f0-00e70e31fa3a req-c56fc596-7b43-4468-a61b-e3e9feb7efe3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:39 np0005542249 nova_compute[254900]: 2025-12-02 11:19:39.571 254904 DEBUG nova.compute.manager [req-e06b63b9-7949-455a-88f0-00e70e31fa3a req-c56fc596-7b43-4468-a61b-e3e9feb7efe3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] No waiting events found dispatching network-vif-plugged-90a1736a-6da6-48ad-8cfb-3ba7e7538576 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:19:39 np0005542249 nova_compute[254900]: 2025-12-02 11:19:39.572 254904 WARNING nova.compute.manager [req-e06b63b9-7949-455a-88f0-00e70e31fa3a req-c56fc596-7b43-4468-a61b-e3e9feb7efe3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Received unexpected event network-vif-plugged-90a1736a-6da6-48ad-8cfb-3ba7e7538576 for instance with vm_state active and task_state None.#033[00m
Dec  2 06:19:39 np0005542249 nova_compute[254900]: 2025-12-02 11:19:39.796 254904 DEBUG oslo_concurrency.lockutils [None req-c4c7d6b8-ca7f-4b8a-982f-7f3264717192 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Acquiring lock "4981d357-863a-45cb-a3e6-6d8e85010252" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:39 np0005542249 nova_compute[254900]: 2025-12-02 11:19:39.796 254904 DEBUG oslo_concurrency.lockutils [None req-c4c7d6b8-ca7f-4b8a-982f-7f3264717192 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "4981d357-863a-45cb-a3e6-6d8e85010252" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:39 np0005542249 nova_compute[254900]: 2025-12-02 11:19:39.797 254904 DEBUG oslo_concurrency.lockutils [None req-c4c7d6b8-ca7f-4b8a-982f-7f3264717192 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Acquiring lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:39 np0005542249 nova_compute[254900]: 2025-12-02 11:19:39.797 254904 DEBUG oslo_concurrency.lockutils [None req-c4c7d6b8-ca7f-4b8a-982f-7f3264717192 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:39 np0005542249 nova_compute[254900]: 2025-12-02 11:19:39.798 254904 DEBUG oslo_concurrency.lockutils [None req-c4c7d6b8-ca7f-4b8a-982f-7f3264717192 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:39 np0005542249 nova_compute[254900]: 2025-12-02 11:19:39.800 254904 INFO nova.compute.manager [None req-c4c7d6b8-ca7f-4b8a-982f-7f3264717192 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Terminating instance#033[00m
Dec  2 06:19:39 np0005542249 nova_compute[254900]: 2025-12-02 11:19:39.802 254904 DEBUG nova.compute.manager [None req-c4c7d6b8-ca7f-4b8a-982f-7f3264717192 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:19:39 np0005542249 podman[268292]: 2025-12-02 11:19:39.805593349 +0000 UTC m=+0.110156596 container create 8ff28f6b1d62ebfe19e2f1bc244e3c8f88b8848662a36032b194c389b5a16541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  2 06:19:39 np0005542249 podman[268292]: 2025-12-02 11:19:39.727566348 +0000 UTC m=+0.032129675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:19:39 np0005542249 kernel: tap90a1736a-6d (unregistering): left promiscuous mode
Dec  2 06:19:39 np0005542249 systemd[1]: Started libpod-conmon-8ff28f6b1d62ebfe19e2f1bc244e3c8f88b8848662a36032b194c389b5a16541.scope.
Dec  2 06:19:39 np0005542249 NetworkManager[48987]: <info>  [1764674379.8626] device (tap90a1736a-6d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:19:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1099: 321 pgs: 321 active+clean; 273 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 7.3 MiB/s rd, 8.0 MiB/s wr, 206 op/s
Dec  2 06:19:39 np0005542249 ovn_controller[153849]: 2025-12-02T11:19:39Z|00067|binding|INFO|Releasing lport 90a1736a-6da6-48ad-8cfb-3ba7e7538576 from this chassis (sb_readonly=0)
Dec  2 06:19:39 np0005542249 ovn_controller[153849]: 2025-12-02T11:19:39Z|00068|binding|INFO|Setting lport 90a1736a-6da6-48ad-8cfb-3ba7e7538576 down in Southbound
Dec  2 06:19:39 np0005542249 nova_compute[254900]: 2025-12-02 11:19:39.934 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:39 np0005542249 ovn_controller[153849]: 2025-12-02T11:19:39Z|00069|binding|INFO|Removing iface tap90a1736a-6d ovn-installed in OVS
Dec  2 06:19:39 np0005542249 nova_compute[254900]: 2025-12-02 11:19:39.937 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:39 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:39.944 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:e5:40 10.100.0.4'], port_security=['fa:16:3e:08:e5:40 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4981d357-863a-45cb-a3e6-6d8e85010252', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72c6dc0c-99bf-460e-aaba-86258fa10534', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0587e0fe146043ba857b2d8002ab0a3b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '277ca5fd-9d7a-41af-9803-178b99c967e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2ed1272d-4b2e-4a1e-a64d-f275687e74f1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=90a1736a-6da6-48ad-8cfb-3ba7e7538576) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:19:39 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:39.946 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 90a1736a-6da6-48ad-8cfb-3ba7e7538576 in datapath 72c6dc0c-99bf-460e-aaba-86258fa10534 unbound from our chassis#033[00m
Dec  2 06:19:39 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:39.947 163757 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 72c6dc0c-99bf-460e-aaba-86258fa10534, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 06:19:39 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:39.948 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[11ce21c9-2121-4d90-8ae8-0dececfd16b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:39 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:39.948 163757 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534 namespace which is not needed anymore#033[00m
Dec  2 06:19:39 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:19:39 np0005542249 nova_compute[254900]: 2025-12-02 11:19:39.958 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:39 np0005542249 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Dec  2 06:19:39 np0005542249 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 3.373s CPU time.
Dec  2 06:19:39 np0005542249 systemd-machined[216222]: Machine qemu-5-instance-00000005 terminated.
Dec  2 06:19:39 np0005542249 podman[268292]: 2025-12-02 11:19:39.997735057 +0000 UTC m=+0.302298344 container init 8ff28f6b1d62ebfe19e2f1bc244e3c8f88b8848662a36032b194c389b5a16541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  2 06:19:40 np0005542249 podman[268292]: 2025-12-02 11:19:40.010166574 +0000 UTC m=+0.314729811 container start 8ff28f6b1d62ebfe19e2f1bc244e3c8f88b8848662a36032b194c389b5a16541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  2 06:19:40 np0005542249 nervous_ishizaka[268308]: 167 167
Dec  2 06:19:40 np0005542249 systemd[1]: libpod-8ff28f6b1d62ebfe19e2f1bc244e3c8f88b8848662a36032b194c389b5a16541.scope: Deactivated successfully.
Dec  2 06:19:40 np0005542249 conmon[268308]: conmon 8ff28f6b1d62ebfe19e2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8ff28f6b1d62ebfe19e2f1bc244e3c8f88b8848662a36032b194c389b5a16541.scope/container/memory.events
Dec  2 06:19:40 np0005542249 podman[268292]: 2025-12-02 11:19:40.022157228 +0000 UTC m=+0.326720465 container attach 8ff28f6b1d62ebfe19e2f1bc244e3c8f88b8848662a36032b194c389b5a16541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec  2 06:19:40 np0005542249 podman[268292]: 2025-12-02 11:19:40.024799188 +0000 UTC m=+0.329362455 container died 8ff28f6b1d62ebfe19e2f1bc244e3c8f88b8848662a36032b194c389b5a16541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ishizaka, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  2 06:19:40 np0005542249 kernel: tap90a1736a-6d: entered promiscuous mode
Dec  2 06:19:40 np0005542249 kernel: tap90a1736a-6d (unregistering): left promiscuous mode
Dec  2 06:19:40 np0005542249 NetworkManager[48987]: <info>  [1764674380.0370] manager: (tap90a1736a-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/47)
Dec  2 06:19:40 np0005542249 ovn_controller[153849]: 2025-12-02T11:19:40Z|00070|binding|INFO|Claiming lport 90a1736a-6da6-48ad-8cfb-3ba7e7538576 for this chassis.
Dec  2 06:19:40 np0005542249 ovn_controller[153849]: 2025-12-02T11:19:40Z|00071|binding|INFO|90a1736a-6da6-48ad-8cfb-3ba7e7538576: Claiming fa:16:3e:08:e5:40 10.100.0.4
Dec  2 06:19:40 np0005542249 nova_compute[254900]: 2025-12-02 11:19:40.041 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:40.052 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:e5:40 10.100.0.4'], port_security=['fa:16:3e:08:e5:40 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4981d357-863a-45cb-a3e6-6d8e85010252', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72c6dc0c-99bf-460e-aaba-86258fa10534', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0587e0fe146043ba857b2d8002ab0a3b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '277ca5fd-9d7a-41af-9803-178b99c967e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2ed1272d-4b2e-4a1e-a64d-f275687e74f1, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=90a1736a-6da6-48ad-8cfb-3ba7e7538576) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:19:40 np0005542249 nova_compute[254900]: 2025-12-02 11:19:40.059 254904 INFO nova.virt.libvirt.driver [-] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Instance destroyed successfully.#033[00m
Dec  2 06:19:40 np0005542249 nova_compute[254900]: 2025-12-02 11:19:40.060 254904 DEBUG nova.objects.instance [None req-c4c7d6b8-ca7f-4b8a-982f-7f3264717192 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lazy-loading 'resources' on Instance uuid 4981d357-863a-45cb-a3e6-6d8e85010252 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:19:40 np0005542249 systemd[1]: var-lib-containers-storage-overlay-fb07c9927d65367fae90b03b42ce1cf14559301307a3fda385ceaef1b79238f0-merged.mount: Deactivated successfully.
Dec  2 06:19:40 np0005542249 nova_compute[254900]: 2025-12-02 11:19:40.067 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:40 np0005542249 ovn_controller[153849]: 2025-12-02T11:19:40Z|00072|binding|INFO|Setting lport 90a1736a-6da6-48ad-8cfb-3ba7e7538576 ovn-installed in OVS
Dec  2 06:19:40 np0005542249 ovn_controller[153849]: 2025-12-02T11:19:40Z|00073|binding|INFO|Setting lport 90a1736a-6da6-48ad-8cfb-3ba7e7538576 up in Southbound
Dec  2 06:19:40 np0005542249 nova_compute[254900]: 2025-12-02 11:19:40.072 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:40 np0005542249 nova_compute[254900]: 2025-12-02 11:19:40.079 254904 DEBUG nova.virt.libvirt.vif [None req-c4c7d6b8-ca7f-4b8a-982f-7f3264717192 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:19:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-409056215',display_name='tempest-VolumesActionsTest-instance-409056215',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-409056215',id=5,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:19:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0587e0fe146043ba857b2d8002ab0a3b',ramdisk_id='',reservation_id='r-jjv9t5ua',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-1586928656',owner_user_name='tempest-VolumesActionsTest-1586928656-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:19:37Z,user_data=None,user_id='505334fe5eb749e19ba727c8b2d04594',uuid=4981d357-863a-45cb-a3e6-6d8e85010252,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "90a1736a-6da6-48ad-8cfb-3ba7e7538576", "address": "fa:16:3e:08:e5:40", "network": {"id": "72c6dc0c-99bf-460e-aaba-86258fa10534", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1502229343-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0587e0fe146043ba857b2d8002ab0a3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90a1736a-6d", "ovs_interfaceid": "90a1736a-6da6-48ad-8cfb-3ba7e7538576", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:19:40 np0005542249 nova_compute[254900]: 2025-12-02 11:19:40.080 254904 DEBUG nova.network.os_vif_util [None req-c4c7d6b8-ca7f-4b8a-982f-7f3264717192 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Converting VIF {"id": "90a1736a-6da6-48ad-8cfb-3ba7e7538576", "address": "fa:16:3e:08:e5:40", "network": {"id": "72c6dc0c-99bf-460e-aaba-86258fa10534", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1502229343-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0587e0fe146043ba857b2d8002ab0a3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90a1736a-6d", "ovs_interfaceid": "90a1736a-6da6-48ad-8cfb-3ba7e7538576", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:19:40 np0005542249 nova_compute[254900]: 2025-12-02 11:19:40.081 254904 DEBUG nova.network.os_vif_util [None req-c4c7d6b8-ca7f-4b8a-982f-7f3264717192 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:e5:40,bridge_name='br-int',has_traffic_filtering=True,id=90a1736a-6da6-48ad-8cfb-3ba7e7538576,network=Network(72c6dc0c-99bf-460e-aaba-86258fa10534),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90a1736a-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:19:40 np0005542249 nova_compute[254900]: 2025-12-02 11:19:40.082 254904 DEBUG os_vif [None req-c4c7d6b8-ca7f-4b8a-982f-7f3264717192 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:e5:40,bridge_name='br-int',has_traffic_filtering=True,id=90a1736a-6da6-48ad-8cfb-3ba7e7538576,network=Network(72c6dc0c-99bf-460e-aaba-86258fa10534),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90a1736a-6d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:19:40 np0005542249 nova_compute[254900]: 2025-12-02 11:19:40.085 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:40 np0005542249 nova_compute[254900]: 2025-12-02 11:19:40.085 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap90a1736a-6d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:19:40 np0005542249 podman[268292]: 2025-12-02 11:19:40.086317644 +0000 UTC m=+0.390880921 container remove 8ff28f6b1d62ebfe19e2f1bc244e3c8f88b8848662a36032b194c389b5a16541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ishizaka, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:19:40 np0005542249 nova_compute[254900]: 2025-12-02 11:19:40.090 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:19:40 np0005542249 ovn_controller[153849]: 2025-12-02T11:19:40Z|00074|binding|INFO|Releasing lport 90a1736a-6da6-48ad-8cfb-3ba7e7538576 from this chassis (sb_readonly=0)
Dec  2 06:19:40 np0005542249 ovn_controller[153849]: 2025-12-02T11:19:40Z|00075|binding|INFO|Setting lport 90a1736a-6da6-48ad-8cfb-3ba7e7538576 down in Southbound
Dec  2 06:19:40 np0005542249 nova_compute[254900]: 2025-12-02 11:19:40.094 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:40 np0005542249 systemd[1]: libpod-conmon-8ff28f6b1d62ebfe19e2f1bc244e3c8f88b8848662a36032b194c389b5a16541.scope: Deactivated successfully.
Dec  2 06:19:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:40.103 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:e5:40 10.100.0.4'], port_security=['fa:16:3e:08:e5:40 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4981d357-863a-45cb-a3e6-6d8e85010252', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72c6dc0c-99bf-460e-aaba-86258fa10534', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0587e0fe146043ba857b2d8002ab0a3b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '277ca5fd-9d7a-41af-9803-178b99c967e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2ed1272d-4b2e-4a1e-a64d-f275687e74f1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=90a1736a-6da6-48ad-8cfb-3ba7e7538576) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:19:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Dec  2 06:19:40 np0005542249 nova_compute[254900]: 2025-12-02 11:19:40.105 254904 INFO os_vif [None req-c4c7d6b8-ca7f-4b8a-982f-7f3264717192 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:e5:40,bridge_name='br-int',has_traffic_filtering=True,id=90a1736a-6da6-48ad-8cfb-3ba7e7538576,network=Network(72c6dc0c-99bf-460e-aaba-86258fa10534),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90a1736a-6d')#033[00m
Dec  2 06:19:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Dec  2 06:19:40 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Dec  2 06:19:40 np0005542249 neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534[267993]: [NOTICE]   (267997) : haproxy version is 2.8.14-c23fe91
Dec  2 06:19:40 np0005542249 neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534[267993]: [NOTICE]   (267997) : path to executable is /usr/sbin/haproxy
Dec  2 06:19:40 np0005542249 neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534[267993]: [WARNING]  (267997) : Exiting Master process...
Dec  2 06:19:40 np0005542249 neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534[267993]: [WARNING]  (267997) : Exiting Master process...
Dec  2 06:19:40 np0005542249 nova_compute[254900]: 2025-12-02 11:19:40.130 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:40 np0005542249 neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534[267993]: [ALERT]    (267997) : Current worker (268000) exited with code 143 (Terminated)
Dec  2 06:19:40 np0005542249 neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534[267993]: [WARNING]  (267997) : All workers exited. Exiting... (0)
Dec  2 06:19:40 np0005542249 systemd[1]: libpod-22a92606f29f56e23e686683fde78d44b3cefd8c9494b65e69ca005865063e69.scope: Deactivated successfully.
Dec  2 06:19:40 np0005542249 podman[268349]: 2025-12-02 11:19:40.144144373 +0000 UTC m=+0.061811265 container died 22a92606f29f56e23e686683fde78d44b3cefd8c9494b65e69ca005865063e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 06:19:40 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-22a92606f29f56e23e686683fde78d44b3cefd8c9494b65e69ca005865063e69-userdata-shm.mount: Deactivated successfully.
Dec  2 06:19:40 np0005542249 systemd[1]: var-lib-containers-storage-overlay-1c2270f6af86de65570f5c79e4b7cded91a515d42abe618c175a590432132457-merged.mount: Deactivated successfully.
Dec  2 06:19:40 np0005542249 podman[268349]: 2025-12-02 11:19:40.198569374 +0000 UTC m=+0.116236246 container cleanup 22a92606f29f56e23e686683fde78d44b3cefd8c9494b65e69ca005865063e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 06:19:40 np0005542249 systemd[1]: libpod-conmon-22a92606f29f56e23e686683fde78d44b3cefd8c9494b65e69ca005865063e69.scope: Deactivated successfully.
Dec  2 06:19:40 np0005542249 podman[268405]: 2025-12-02 11:19:40.283973698 +0000 UTC m=+0.060455240 container remove 22a92606f29f56e23e686683fde78d44b3cefd8c9494b65e69ca005865063e69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:19:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:40.293 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[61ebe6c7-9ef1-44b3-8804-f23852886f25]: (4, ('Tue Dec  2 11:19:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534 (22a92606f29f56e23e686683fde78d44b3cefd8c9494b65e69ca005865063e69)\n22a92606f29f56e23e686683fde78d44b3cefd8c9494b65e69ca005865063e69\nTue Dec  2 11:19:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534 (22a92606f29f56e23e686683fde78d44b3cefd8c9494b65e69ca005865063e69)\n22a92606f29f56e23e686683fde78d44b3cefd8c9494b65e69ca005865063e69\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:40.299 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[d2138a2c-b3ff-466c-8748-ec873e5861e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:40.300 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72c6dc0c-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:19:40 np0005542249 nova_compute[254900]: 2025-12-02 11:19:40.303 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:40 np0005542249 kernel: tap72c6dc0c-90: left promiscuous mode
Dec  2 06:19:40 np0005542249 nova_compute[254900]: 2025-12-02 11:19:40.313 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:19:40 np0005542249 podman[268406]: 2025-12-02 11:19:40.313062642 +0000 UTC m=+0.082876698 container create b753cfd42cb42dfa1cc8819bce25d564cd6d2c490fb12b3c0f70d51af394609b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Dec  2 06:19:40 np0005542249 nova_compute[254900]: 2025-12-02 11:19:40.335 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:40.339 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[40c6db13-6a80-4678-9135-73591b7667fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:40.355 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[0613564d-b38e-4865-88c4-f1a62b71facf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:40.357 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[fd16f1ec-3b7e-4555-a256-120969f026ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:40 np0005542249 podman[268406]: 2025-12-02 11:19:40.281578765 +0000 UTC m=+0.051392841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:19:40 np0005542249 systemd[1]: Started libpod-conmon-b753cfd42cb42dfa1cc8819bce25d564cd6d2c490fb12b3c0f70d51af394609b.scope.
Dec  2 06:19:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:40.379 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[3b92f2ec-5b51-46fe-8ad1-636f317807f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456926, 'reachable_time': 41665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268436, 'error': None, 'target': 'ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:40 np0005542249 nova_compute[254900]: 2025-12-02 11:19:40.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:19:40 np0005542249 systemd[1]: run-netns-ovnmeta\x2d72c6dc0c\x2d99bf\x2d460e\x2daaba\x2d86258fa10534.mount: Deactivated successfully.
Dec  2 06:19:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:40.382 164036 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-72c6dc0c-99bf-460e-aaba-86258fa10534 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 06:19:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:40.383 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[b7fb9d7a-6e6f-40ca-a733-5b0e040d65c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:40.385 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 90a1736a-6da6-48ad-8cfb-3ba7e7538576 in datapath 72c6dc0c-99bf-460e-aaba-86258fa10534 unbound from our chassis#033[00m
Dec  2 06:19:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:40.387 163757 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 72c6dc0c-99bf-460e-aaba-86258fa10534, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 06:19:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:40.388 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[cfdf3366-d192-40a4-a25d-2b33c63111f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:40.388 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 90a1736a-6da6-48ad-8cfb-3ba7e7538576 in datapath 72c6dc0c-99bf-460e-aaba-86258fa10534 unbound from our chassis#033[00m
Dec  2 06:19:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:40.389 163757 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 72c6dc0c-99bf-460e-aaba-86258fa10534, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 06:19:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:40.390 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[4c8be81d-6200-4e8b-807f-16a7d15c1078]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:19:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:40.396 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:23:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:25:d0:a1:75:ed'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:19:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:40.397 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 06:19:40 np0005542249 nova_compute[254900]: 2025-12-02 11:19:40.396 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:40 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:19:40 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4b4a92a90953bd4dab64baf5cca6d008d001946e2ffd1c3856f170a6e264ecc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:19:40 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4b4a92a90953bd4dab64baf5cca6d008d001946e2ffd1c3856f170a6e264ecc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:19:40 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4b4a92a90953bd4dab64baf5cca6d008d001946e2ffd1c3856f170a6e264ecc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:19:40 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4b4a92a90953bd4dab64baf5cca6d008d001946e2ffd1c3856f170a6e264ecc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:19:40 np0005542249 podman[268406]: 2025-12-02 11:19:40.424056038 +0000 UTC m=+0.193870174 container init b753cfd42cb42dfa1cc8819bce25d564cd6d2c490fb12b3c0f70d51af394609b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_darwin, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:19:40 np0005542249 podman[268406]: 2025-12-02 11:19:40.438590821 +0000 UTC m=+0.208404907 container start b753cfd42cb42dfa1cc8819bce25d564cd6d2c490fb12b3c0f70d51af394609b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:19:40 np0005542249 podman[268406]: 2025-12-02 11:19:40.443221572 +0000 UTC m=+0.213035648 container attach b753cfd42cb42dfa1cc8819bce25d564cd6d2c490fb12b3c0f70d51af394609b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_darwin, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  2 06:19:40 np0005542249 nova_compute[254900]: 2025-12-02 11:19:40.600 254904 INFO nova.virt.libvirt.driver [None req-c4c7d6b8-ca7f-4b8a-982f-7f3264717192 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Deleting instance files /var/lib/nova/instances/4981d357-863a-45cb-a3e6-6d8e85010252_del#033[00m
Dec  2 06:19:40 np0005542249 nova_compute[254900]: 2025-12-02 11:19:40.601 254904 INFO nova.virt.libvirt.driver [None req-c4c7d6b8-ca7f-4b8a-982f-7f3264717192 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Deletion of /var/lib/nova/instances/4981d357-863a-45cb-a3e6-6d8e85010252_del complete#033[00m
Dec  2 06:19:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:19:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1389099346' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:19:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:19:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1389099346' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:19:40 np0005542249 nova_compute[254900]: 2025-12-02 11:19:40.671 254904 INFO nova.compute.manager [None req-c4c7d6b8-ca7f-4b8a-982f-7f3264717192 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:19:40 np0005542249 nova_compute[254900]: 2025-12-02 11:19:40.672 254904 DEBUG oslo.service.loopingcall [None req-c4c7d6b8-ca7f-4b8a-982f-7f3264717192 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:19:40 np0005542249 nova_compute[254900]: 2025-12-02 11:19:40.672 254904 DEBUG nova.compute.manager [-] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:19:40 np0005542249 nova_compute[254900]: 2025-12-02 11:19:40.673 254904 DEBUG nova.network.neutron [-] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:19:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Dec  2 06:19:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Dec  2 06:19:41 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Dec  2 06:19:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.338 254904 DEBUG nova.network.neutron [-] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.363 254904 INFO nova.compute.manager [-] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Took 0.69 seconds to deallocate network for instance.#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.412 254904 DEBUG oslo_concurrency.lockutils [None req-c4c7d6b8-ca7f-4b8a-982f-7f3264717192 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.413 254904 DEBUG oslo_concurrency.lockutils [None req-c4c7d6b8-ca7f-4b8a-982f-7f3264717192 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.459 254904 DEBUG oslo_concurrency.processutils [None req-c4c7d6b8-ca7f-4b8a-982f-7f3264717192 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:19:41 np0005542249 objective_darwin[268437]: {
Dec  2 06:19:41 np0005542249 objective_darwin[268437]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:19:41 np0005542249 objective_darwin[268437]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:19:41 np0005542249 objective_darwin[268437]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:19:41 np0005542249 objective_darwin[268437]:        "osd_id": 0,
Dec  2 06:19:41 np0005542249 objective_darwin[268437]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:19:41 np0005542249 objective_darwin[268437]:        "type": "bluestore"
Dec  2 06:19:41 np0005542249 objective_darwin[268437]:    },
Dec  2 06:19:41 np0005542249 objective_darwin[268437]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:19:41 np0005542249 objective_darwin[268437]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:19:41 np0005542249 objective_darwin[268437]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:19:41 np0005542249 objective_darwin[268437]:        "osd_id": 2,
Dec  2 06:19:41 np0005542249 objective_darwin[268437]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:19:41 np0005542249 objective_darwin[268437]:        "type": "bluestore"
Dec  2 06:19:41 np0005542249 objective_darwin[268437]:    },
Dec  2 06:19:41 np0005542249 objective_darwin[268437]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:19:41 np0005542249 objective_darwin[268437]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:19:41 np0005542249 objective_darwin[268437]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:19:41 np0005542249 objective_darwin[268437]:        "osd_id": 1,
Dec  2 06:19:41 np0005542249 objective_darwin[268437]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:19:41 np0005542249 objective_darwin[268437]:        "type": "bluestore"
Dec  2 06:19:41 np0005542249 objective_darwin[268437]:    }
Dec  2 06:19:41 np0005542249 objective_darwin[268437]: }
Dec  2 06:19:41 np0005542249 systemd[1]: libpod-b753cfd42cb42dfa1cc8819bce25d564cd6d2c490fb12b3c0f70d51af394609b.scope: Deactivated successfully.
Dec  2 06:19:41 np0005542249 podman[268406]: 2025-12-02 11:19:41.611790827 +0000 UTC m=+1.381604883 container died b753cfd42cb42dfa1cc8819bce25d564cd6d2c490fb12b3c0f70d51af394609b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:19:41 np0005542249 systemd[1]: libpod-b753cfd42cb42dfa1cc8819bce25d564cd6d2c490fb12b3c0f70d51af394609b.scope: Consumed 1.161s CPU time.
Dec  2 06:19:41 np0005542249 systemd[1]: var-lib-containers-storage-overlay-d4b4a92a90953bd4dab64baf5cca6d008d001946e2ffd1c3856f170a6e264ecc-merged.mount: Deactivated successfully.
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.658 254904 DEBUG nova.compute.manager [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Received event network-vif-unplugged-90a1736a-6da6-48ad-8cfb-3ba7e7538576 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.659 254904 DEBUG oslo_concurrency.lockutils [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.659 254904 DEBUG oslo_concurrency.lockutils [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.660 254904 DEBUG oslo_concurrency.lockutils [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.660 254904 DEBUG nova.compute.manager [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] No waiting events found dispatching network-vif-unplugged-90a1736a-6da6-48ad-8cfb-3ba7e7538576 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.661 254904 WARNING nova.compute.manager [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Received unexpected event network-vif-unplugged-90a1736a-6da6-48ad-8cfb-3ba7e7538576 for instance with vm_state deleted and task_state None.#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.661 254904 DEBUG nova.compute.manager [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Received event network-vif-plugged-90a1736a-6da6-48ad-8cfb-3ba7e7538576 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.662 254904 DEBUG oslo_concurrency.lockutils [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.662 254904 DEBUG oslo_concurrency.lockutils [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.662 254904 DEBUG oslo_concurrency.lockutils [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.663 254904 DEBUG nova.compute.manager [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] No waiting events found dispatching network-vif-plugged-90a1736a-6da6-48ad-8cfb-3ba7e7538576 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.664 254904 WARNING nova.compute.manager [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Received unexpected event network-vif-plugged-90a1736a-6da6-48ad-8cfb-3ba7e7538576 for instance with vm_state deleted and task_state None.#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.664 254904 DEBUG nova.compute.manager [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Received event network-vif-plugged-90a1736a-6da6-48ad-8cfb-3ba7e7538576 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.666 254904 DEBUG oslo_concurrency.lockutils [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.666 254904 DEBUG oslo_concurrency.lockutils [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.667 254904 DEBUG oslo_concurrency.lockutils [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.667 254904 DEBUG nova.compute.manager [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] No waiting events found dispatching network-vif-plugged-90a1736a-6da6-48ad-8cfb-3ba7e7538576 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.667 254904 WARNING nova.compute.manager [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Received unexpected event network-vif-plugged-90a1736a-6da6-48ad-8cfb-3ba7e7538576 for instance with vm_state deleted and task_state None.#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.667 254904 DEBUG nova.compute.manager [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Received event network-vif-plugged-90a1736a-6da6-48ad-8cfb-3ba7e7538576 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.668 254904 DEBUG oslo_concurrency.lockutils [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.668 254904 DEBUG oslo_concurrency.lockutils [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.668 254904 DEBUG oslo_concurrency.lockutils [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.668 254904 DEBUG nova.compute.manager [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] No waiting events found dispatching network-vif-plugged-90a1736a-6da6-48ad-8cfb-3ba7e7538576 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.669 254904 WARNING nova.compute.manager [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Received unexpected event network-vif-plugged-90a1736a-6da6-48ad-8cfb-3ba7e7538576 for instance with vm_state deleted and task_state None.#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.669 254904 DEBUG nova.compute.manager [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Received event network-vif-unplugged-90a1736a-6da6-48ad-8cfb-3ba7e7538576 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.669 254904 DEBUG oslo_concurrency.lockutils [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.669 254904 DEBUG oslo_concurrency.lockutils [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.670 254904 DEBUG oslo_concurrency.lockutils [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.670 254904 DEBUG nova.compute.manager [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] No waiting events found dispatching network-vif-unplugged-90a1736a-6da6-48ad-8cfb-3ba7e7538576 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.670 254904 WARNING nova.compute.manager [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Received unexpected event network-vif-unplugged-90a1736a-6da6-48ad-8cfb-3ba7e7538576 for instance with vm_state deleted and task_state None.#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.670 254904 DEBUG nova.compute.manager [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Received event network-vif-plugged-90a1736a-6da6-48ad-8cfb-3ba7e7538576 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.670 254904 DEBUG oslo_concurrency.lockutils [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.671 254904 DEBUG oslo_concurrency.lockutils [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.671 254904 DEBUG oslo_concurrency.lockutils [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "4981d357-863a-45cb-a3e6-6d8e85010252-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.671 254904 DEBUG nova.compute.manager [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] No waiting events found dispatching network-vif-plugged-90a1736a-6da6-48ad-8cfb-3ba7e7538576 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.671 254904 WARNING nova.compute.manager [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Received unexpected event network-vif-plugged-90a1736a-6da6-48ad-8cfb-3ba7e7538576 for instance with vm_state deleted and task_state None.#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.671 254904 DEBUG nova.compute.manager [req-a9389068-c4ef-4742-890e-b5f4e9cd9ec6 req-66b0a125-4dd5-4f8c-9d1a-2eca26571676 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Received event network-vif-deleted-90a1736a-6da6-48ad-8cfb-3ba7e7538576 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:19:41 np0005542249 podman[268406]: 2025-12-02 11:19:41.685136554 +0000 UTC m=+1.454950590 container remove b753cfd42cb42dfa1cc8819bce25d564cd6d2c490fb12b3c0f70d51af394609b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_darwin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:19:41 np0005542249 systemd[1]: libpod-conmon-b753cfd42cb42dfa1cc8819bce25d564cd6d2c490fb12b3c0f70d51af394609b.scope: Deactivated successfully.
Dec  2 06:19:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:19:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:19:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:19:41 np0005542249 podman[268490]: 2025-12-02 11:19:41.75195004 +0000 UTC m=+0.093576210 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 06:19:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:19:41 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 9c9e4577-f4d4-41b4-9adf-d1e131d69433 does not exist
Dec  2 06:19:41 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev a4164c8b-4212-4d7c-ad9b-da9273889396 does not exist
Dec  2 06:19:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1102: 321 pgs: 321 active+clean; 273 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 7.6 MiB/s rd, 5.1 MiB/s wr, 213 op/s
Dec  2 06:19:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:19:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/370486588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.963 254904 DEBUG oslo_concurrency.processutils [None req-c4c7d6b8-ca7f-4b8a-982f-7f3264717192 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.974 254904 DEBUG nova.compute.provider_tree [None req-c4c7d6b8-ca7f-4b8a-982f-7f3264717192 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:19:41 np0005542249 nova_compute[254900]: 2025-12-02 11:19:41.990 254904 DEBUG nova.scheduler.client.report [None req-c4c7d6b8-ca7f-4b8a-982f-7f3264717192 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:19:42 np0005542249 nova_compute[254900]: 2025-12-02 11:19:42.009 254904 DEBUG oslo_concurrency.lockutils [None req-c4c7d6b8-ca7f-4b8a-982f-7f3264717192 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:42 np0005542249 nova_compute[254900]: 2025-12-02 11:19:42.031 254904 INFO nova.scheduler.client.report [None req-c4c7d6b8-ca7f-4b8a-982f-7f3264717192 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Deleted allocations for instance 4981d357-863a-45cb-a3e6-6d8e85010252#033[00m
Dec  2 06:19:42 np0005542249 nova_compute[254900]: 2025-12-02 11:19:42.115 254904 DEBUG oslo_concurrency.lockutils [None req-c4c7d6b8-ca7f-4b8a-982f-7f3264717192 505334fe5eb749e19ba727c8b2d04594 0587e0fe146043ba857b2d8002ab0a3b - - default default] Lock "4981d357-863a-45cb-a3e6-6d8e85010252" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.319s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:19:42 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:19:42 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:19:42 np0005542249 nova_compute[254900]: 2025-12-02 11:19:42.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:19:42 np0005542249 nova_compute[254900]: 2025-12-02 11:19:42.994 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Dec  2 06:19:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Dec  2 06:19:43 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Dec  2 06:19:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:19:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/34587546' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:19:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:19:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/34587546' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:19:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1104: 321 pgs: 321 active+clean; 181 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 6.8 MiB/s rd, 1.4 MiB/s wr, 306 op/s
Dec  2 06:19:45 np0005542249 nova_compute[254900]: 2025-12-02 11:19:45.088 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1105: 321 pgs: 321 active+clean; 123 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 7.0 KiB/s wr, 256 op/s
Dec  2 06:19:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:19:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Dec  2 06:19:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Dec  2 06:19:46 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Dec  2 06:19:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:19:46 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2239091747' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:19:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Dec  2 06:19:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Dec  2 06:19:47 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Dec  2 06:19:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1108: 321 pgs: 321 active+clean; 170 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 6.2 MiB/s rd, 11 MiB/s wr, 377 op/s
Dec  2 06:19:47 np0005542249 nova_compute[254900]: 2025-12-02 11:19:47.996 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Dec  2 06:19:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Dec  2 06:19:48 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Dec  2 06:19:49 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:19:49.399 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:19:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1110: 321 pgs: 321 active+clean; 254 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 24 MiB/s wr, 225 op/s
Dec  2 06:19:50 np0005542249 nova_compute[254900]: 2025-12-02 11:19:50.093 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:19:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3557659278' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:19:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:19:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3557659278' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:19:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:19:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1373909031' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:19:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:19:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1373909031' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:19:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:19:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Dec  2 06:19:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Dec  2 06:19:51 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Dec  2 06:19:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:19:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3263544184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:19:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1112: 321 pgs: 321 active+clean; 254 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 130 KiB/s rd, 25 MiB/s wr, 196 op/s
Dec  2 06:19:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Dec  2 06:19:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Dec  2 06:19:52 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Dec  2 06:19:53 np0005542249 nova_compute[254900]: 2025-12-02 11:19:52.999 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1114: 321 pgs: 321 active+clean; 339 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 100 KiB/s rd, 33 MiB/s wr, 147 op/s
Dec  2 06:19:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Dec  2 06:19:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Dec  2 06:19:54 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Dec  2 06:19:55 np0005542249 nova_compute[254900]: 2025-12-02 11:19:55.054 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764674380.0537508, 4981d357-863a-45cb-a3e6-6d8e85010252 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:19:55 np0005542249 nova_compute[254900]: 2025-12-02 11:19:55.055 254904 INFO nova.compute.manager [-] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] VM Stopped (Lifecycle Event)#033[00m
Dec  2 06:19:55 np0005542249 nova_compute[254900]: 2025-12-02 11:19:55.085 254904 DEBUG nova.compute.manager [None req-6ce9165d-e051-406e-a9c8-ac82f8a0fe96 - - - - - -] [instance: 4981d357-863a-45cb-a3e6-6d8e85010252] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:19:55 np0005542249 nova_compute[254900]: 2025-12-02 11:19:55.132 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:19:55 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3516627800' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:19:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:19:55 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3516627800' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:19:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Dec  2 06:19:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Dec  2 06:19:55 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Dec  2 06:19:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1117: 321 pgs: 321 active+clean; 400 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 120 KiB/s rd, 42 MiB/s wr, 180 op/s
Dec  2 06:19:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:19:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:19:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:19:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:19:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:19:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:19:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:19:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Dec  2 06:19:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Dec  2 06:19:56 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Dec  2 06:19:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:19:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3877383668' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:19:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:19:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3877383668' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:19:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1119: 321 pgs: 321 active+clean; 600 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 165 KiB/s rd, 61 MiB/s wr, 238 op/s
Dec  2 06:19:58 np0005542249 nova_compute[254900]: 2025-12-02 11:19:58.004 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:19:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Dec  2 06:19:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Dec  2 06:19:59 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Dec  2 06:19:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1121: 321 pgs: 321 active+clean; 672 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 159 KiB/s rd, 49 MiB/s wr, 224 op/s
Dec  2 06:20:00 np0005542249 podman[268572]: 2025-12-02 11:20:00.015601365 +0000 UTC m=+0.089495993 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  2 06:20:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:20:00 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/698649754' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:20:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:20:00 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/698649754' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:20:00 np0005542249 nova_compute[254900]: 2025-12-02 11:20:00.173 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:20:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Dec  2 06:20:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Dec  2 06:20:01 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Dec  2 06:20:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1123: 321 pgs: 321 active+clean; 672 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 45 MiB/s wr, 205 op/s
Dec  2 06:20:03 np0005542249 nova_compute[254900]: 2025-12-02 11:20:03.005 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1124: 321 pgs: 321 active+clean; 800 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 141 KiB/s rd, 54 MiB/s wr, 199 op/s
Dec  2 06:20:04 np0005542249 nova_compute[254900]: 2025-12-02 11:20:04.381 254904 DEBUG oslo_concurrency.lockutils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Acquiring lock "7287d707-15a8-4a81-b795-38db84b35b54" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:20:04 np0005542249 nova_compute[254900]: 2025-12-02 11:20:04.382 254904 DEBUG oslo_concurrency.lockutils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Lock "7287d707-15a8-4a81-b795-38db84b35b54" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:20:04 np0005542249 nova_compute[254900]: 2025-12-02 11:20:04.413 254904 DEBUG nova.compute.manager [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 06:20:04 np0005542249 nova_compute[254900]: 2025-12-02 11:20:04.543 254904 DEBUG oslo_concurrency.lockutils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:20:04 np0005542249 nova_compute[254900]: 2025-12-02 11:20:04.544 254904 DEBUG oslo_concurrency.lockutils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:20:04 np0005542249 nova_compute[254900]: 2025-12-02 11:20:04.557 254904 DEBUG nova.virt.hardware [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 06:20:04 np0005542249 nova_compute[254900]: 2025-12-02 11:20:04.558 254904 INFO nova.compute.claims [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 06:20:04 np0005542249 nova_compute[254900]: 2025-12-02 11:20:04.673 254904 DEBUG oslo_concurrency.processutils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:20:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:20:05 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1395524850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:20:05 np0005542249 nova_compute[254900]: 2025-12-02 11:20:05.169 254904 DEBUG oslo_concurrency.processutils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:20:05 np0005542249 nova_compute[254900]: 2025-12-02 11:20:05.177 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:05 np0005542249 nova_compute[254900]: 2025-12-02 11:20:05.180 254904 DEBUG nova.compute.provider_tree [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:20:05 np0005542249 nova_compute[254900]: 2025-12-02 11:20:05.199 254904 DEBUG nova.scheduler.client.report [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:20:05 np0005542249 nova_compute[254900]: 2025-12-02 11:20:05.240 254904 DEBUG oslo_concurrency.lockutils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:20:05 np0005542249 nova_compute[254900]: 2025-12-02 11:20:05.241 254904 DEBUG nova.compute.manager [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 06:20:05 np0005542249 nova_compute[254900]: 2025-12-02 11:20:05.308 254904 DEBUG nova.compute.manager [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 06:20:05 np0005542249 nova_compute[254900]: 2025-12-02 11:20:05.309 254904 DEBUG nova.network.neutron [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 06:20:05 np0005542249 nova_compute[254900]: 2025-12-02 11:20:05.347 254904 INFO nova.virt.libvirt.driver [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 06:20:05 np0005542249 nova_compute[254900]: 2025-12-02 11:20:05.372 254904 DEBUG nova.compute.manager [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 06:20:05 np0005542249 nova_compute[254900]: 2025-12-02 11:20:05.494 254904 DEBUG nova.compute.manager [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 06:20:05 np0005542249 nova_compute[254900]: 2025-12-02 11:20:05.496 254904 DEBUG nova.virt.libvirt.driver [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 06:20:05 np0005542249 nova_compute[254900]: 2025-12-02 11:20:05.496 254904 INFO nova.virt.libvirt.driver [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Creating image(s)#033[00m
Dec  2 06:20:05 np0005542249 nova_compute[254900]: 2025-12-02 11:20:05.535 254904 DEBUG nova.storage.rbd_utils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] rbd image 7287d707-15a8-4a81-b795-38db84b35b54_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:20:05 np0005542249 nova_compute[254900]: 2025-12-02 11:20:05.573 254904 DEBUG nova.storage.rbd_utils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] rbd image 7287d707-15a8-4a81-b795-38db84b35b54_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:20:05 np0005542249 nova_compute[254900]: 2025-12-02 11:20:05.611 254904 DEBUG nova.storage.rbd_utils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] rbd image 7287d707-15a8-4a81-b795-38db84b35b54_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:20:05 np0005542249 nova_compute[254900]: 2025-12-02 11:20:05.618 254904 DEBUG oslo_concurrency.processutils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:20:05 np0005542249 nova_compute[254900]: 2025-12-02 11:20:05.713 254904 DEBUG oslo_concurrency.processutils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:20:05 np0005542249 nova_compute[254900]: 2025-12-02 11:20:05.715 254904 DEBUG oslo_concurrency.lockutils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Acquiring lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:20:05 np0005542249 nova_compute[254900]: 2025-12-02 11:20:05.716 254904 DEBUG oslo_concurrency.lockutils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:20:05 np0005542249 nova_compute[254900]: 2025-12-02 11:20:05.717 254904 DEBUG oslo_concurrency.lockutils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:20:05 np0005542249 nova_compute[254900]: 2025-12-02 11:20:05.753 254904 DEBUG nova.storage.rbd_utils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] rbd image 7287d707-15a8-4a81-b795-38db84b35b54_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:20:05 np0005542249 nova_compute[254900]: 2025-12-02 11:20:05.762 254904 DEBUG oslo_concurrency.processutils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 7287d707-15a8-4a81-b795-38db84b35b54_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:20:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1125: 321 pgs: 321 active+clean; 920 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 40 MiB/s wr, 106 op/s
Dec  2 06:20:06 np0005542249 nova_compute[254900]: 2025-12-02 11:20:06.086 254904 DEBUG oslo_concurrency.processutils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 7287d707-15a8-4a81-b795-38db84b35b54_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:20:06 np0005542249 nova_compute[254900]: 2025-12-02 11:20:06.170 254904 DEBUG nova.storage.rbd_utils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] resizing rbd image 7287d707-15a8-4a81-b795-38db84b35b54_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  2 06:20:06 np0005542249 nova_compute[254900]: 2025-12-02 11:20:06.310 254904 DEBUG nova.objects.instance [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Lazy-loading 'migration_context' on Instance uuid 7287d707-15a8-4a81-b795-38db84b35b54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:20:06 np0005542249 nova_compute[254900]: 2025-12-02 11:20:06.330 254904 DEBUG nova.virt.libvirt.driver [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  2 06:20:06 np0005542249 nova_compute[254900]: 2025-12-02 11:20:06.331 254904 DEBUG nova.virt.libvirt.driver [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Ensure instance console log exists: /var/lib/nova/instances/7287d707-15a8-4a81-b795-38db84b35b54/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 06:20:06 np0005542249 nova_compute[254900]: 2025-12-02 11:20:06.331 254904 DEBUG oslo_concurrency.lockutils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:20:06 np0005542249 nova_compute[254900]: 2025-12-02 11:20:06.332 254904 DEBUG oslo_concurrency.lockutils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:20:06 np0005542249 nova_compute[254900]: 2025-12-02 11:20:06.332 254904 DEBUG oslo_concurrency.lockutils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:20:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:20:07 np0005542249 nova_compute[254900]: 2025-12-02 11:20:07.125 254904 DEBUG nova.network.neutron [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Dec  2 06:20:07 np0005542249 nova_compute[254900]: 2025-12-02 11:20:07.126 254904 DEBUG nova.compute.manager [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 06:20:07 np0005542249 nova_compute[254900]: 2025-12-02 11:20:07.128 254904 DEBUG nova.virt.libvirt.driver [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'image_id': '5a40f66c-ab43-47dd-9880-e59f9fa2c60e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:20:07 np0005542249 nova_compute[254900]: 2025-12-02 11:20:07.135 254904 WARNING nova.virt.libvirt.driver [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:20:07 np0005542249 nova_compute[254900]: 2025-12-02 11:20:07.148 254904 DEBUG nova.virt.libvirt.host [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:20:07 np0005542249 nova_compute[254900]: 2025-12-02 11:20:07.148 254904 DEBUG nova.virt.libvirt.host [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:20:07 np0005542249 nova_compute[254900]: 2025-12-02 11:20:07.154 254904 DEBUG nova.virt.libvirt.host [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:20:07 np0005542249 nova_compute[254900]: 2025-12-02 11:20:07.155 254904 DEBUG nova.virt.libvirt.host [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:20:07 np0005542249 nova_compute[254900]: 2025-12-02 11:20:07.155 254904 DEBUG nova.virt.libvirt.driver [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:20:07 np0005542249 nova_compute[254900]: 2025-12-02 11:20:07.155 254904 DEBUG nova.virt.hardware [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:20:07 np0005542249 nova_compute[254900]: 2025-12-02 11:20:07.156 254904 DEBUG nova.virt.hardware [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:20:07 np0005542249 nova_compute[254900]: 2025-12-02 11:20:07.156 254904 DEBUG nova.virt.hardware [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:20:07 np0005542249 nova_compute[254900]: 2025-12-02 11:20:07.156 254904 DEBUG nova.virt.hardware [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:20:07 np0005542249 nova_compute[254900]: 2025-12-02 11:20:07.156 254904 DEBUG nova.virt.hardware [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:20:07 np0005542249 nova_compute[254900]: 2025-12-02 11:20:07.156 254904 DEBUG nova.virt.hardware [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:20:07 np0005542249 nova_compute[254900]: 2025-12-02 11:20:07.157 254904 DEBUG nova.virt.hardware [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:20:07 np0005542249 nova_compute[254900]: 2025-12-02 11:20:07.157 254904 DEBUG nova.virt.hardware [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:20:07 np0005542249 nova_compute[254900]: 2025-12-02 11:20:07.157 254904 DEBUG nova.virt.hardware [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:20:07 np0005542249 nova_compute[254900]: 2025-12-02 11:20:07.157 254904 DEBUG nova.virt.hardware [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:20:07 np0005542249 nova_compute[254900]: 2025-12-02 11:20:07.157 254904 DEBUG nova.virt.hardware [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:20:07 np0005542249 nova_compute[254900]: 2025-12-02 11:20:07.160 254904 DEBUG oslo_concurrency.processutils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:20:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Dec  2 06:20:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Dec  2 06:20:07 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Dec  2 06:20:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:20:07 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2219091382' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:20:07 np0005542249 nova_compute[254900]: 2025-12-02 11:20:07.745 254904 DEBUG oslo_concurrency.processutils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:20:07 np0005542249 nova_compute[254900]: 2025-12-02 11:20:07.768 254904 DEBUG nova.storage.rbd_utils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] rbd image 7287d707-15a8-4a81-b795-38db84b35b54_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:20:07 np0005542249 nova_compute[254900]: 2025-12-02 11:20:07.772 254904 DEBUG oslo_concurrency.processutils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:20:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1127: 321 pgs: 321 active+clean; 1.1 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 53 MiB/s wr, 89 op/s
Dec  2 06:20:08 np0005542249 nova_compute[254900]: 2025-12-02 11:20:08.008 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:20:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:20:08 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3137318857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:20:08 np0005542249 nova_compute[254900]: 2025-12-02 11:20:08.223 254904 DEBUG oslo_concurrency.processutils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:20:08 np0005542249 nova_compute[254900]: 2025-12-02 11:20:08.226 254904 DEBUG nova.objects.instance [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7287d707-15a8-4a81-b795-38db84b35b54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  2 06:20:08 np0005542249 nova_compute[254900]: 2025-12-02 11:20:08.244 254904 DEBUG nova.virt.libvirt.driver [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:20:08 np0005542249 nova_compute[254900]:  <uuid>7287d707-15a8-4a81-b795-38db84b35b54</uuid>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:  <name>instance-00000006</name>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      <nova:name>tempest-VolumesNegativeTest-instance-1974160610</nova:name>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:20:07</nova:creationTime>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:20:08 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:        <nova:user uuid="b28c2187420b4a3f82756cc3a55fa30e">tempest-VolumesNegativeTest-1474406488-project-member</nova:user>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:        <nova:project uuid="dc995559349d422db9731a0bc10a9115">tempest-VolumesNegativeTest-1474406488</nova:project>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      <nova:root type="image" uuid="5a40f66c-ab43-47dd-9880-e59f9fa2c60e"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      <nova:ports/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      <entry name="serial">7287d707-15a8-4a81-b795-38db84b35b54</entry>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      <entry name="uuid">7287d707-15a8-4a81-b795-38db84b35b54</entry>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/7287d707-15a8-4a81-b795-38db84b35b54_disk">
Dec  2 06:20:08 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:20:08 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/7287d707-15a8-4a81-b795-38db84b35b54_disk.config">
Dec  2 06:20:08 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:20:08 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/7287d707-15a8-4a81-b795-38db84b35b54/console.log" append="off"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:20:08 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:20:08 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:20:08 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:20:08 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec  2 06:20:08 np0005542249 nova_compute[254900]: 2025-12-02 11:20:08.323 254904 DEBUG nova.virt.libvirt.driver [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  2 06:20:08 np0005542249 nova_compute[254900]: 2025-12-02 11:20:08.324 254904 DEBUG nova.virt.libvirt.driver [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  2 06:20:08 np0005542249 nova_compute[254900]: 2025-12-02 11:20:08.324 254904 INFO nova.virt.libvirt.driver [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Using config drive
Dec  2 06:20:08 np0005542249 nova_compute[254900]: 2025-12-02 11:20:08.359 254904 DEBUG nova.storage.rbd_utils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] rbd image 7287d707-15a8-4a81-b795-38db84b35b54_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  2 06:20:08 np0005542249 podman[268844]: 2025-12-02 11:20:08.410852894 +0000 UTC m=+0.118062763 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  2 06:20:08 np0005542249 nova_compute[254900]: 2025-12-02 11:20:08.683 254904 INFO nova.virt.libvirt.driver [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Creating config drive at /var/lib/nova/instances/7287d707-15a8-4a81-b795-38db84b35b54/disk.config
Dec  2 06:20:08 np0005542249 nova_compute[254900]: 2025-12-02 11:20:08.693 254904 DEBUG oslo_concurrency.processutils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7287d707-15a8-4a81-b795-38db84b35b54/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7fvquqez execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 06:20:08 np0005542249 nova_compute[254900]: 2025-12-02 11:20:08.719 254904 DEBUG oslo_concurrency.lockutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Acquiring lock "7bedde1c-9243-4d63-b574-154d2b7e78ef" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:20:08 np0005542249 nova_compute[254900]: 2025-12-02 11:20:08.721 254904 DEBUG oslo_concurrency.lockutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "7bedde1c-9243-4d63-b574-154d2b7e78ef" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:20:08 np0005542249 nova_compute[254900]: 2025-12-02 11:20:08.740 254904 DEBUG nova.compute.manager [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  2 06:20:08 np0005542249 nova_compute[254900]: 2025-12-02 11:20:08.826 254904 DEBUG oslo_concurrency.processutils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7287d707-15a8-4a81-b795-38db84b35b54/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7fvquqez" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:20:08 np0005542249 nova_compute[254900]: 2025-12-02 11:20:08.864 254904 DEBUG nova.storage.rbd_utils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] rbd image 7287d707-15a8-4a81-b795-38db84b35b54_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  2 06:20:08 np0005542249 nova_compute[254900]: 2025-12-02 11:20:08.869 254904 DEBUG oslo_concurrency.processutils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7287d707-15a8-4a81-b795-38db84b35b54/disk.config 7287d707-15a8-4a81-b795-38db84b35b54_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 06:20:08 np0005542249 nova_compute[254900]: 2025-12-02 11:20:08.904 254904 DEBUG oslo_concurrency.lockutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:20:08 np0005542249 nova_compute[254900]: 2025-12-02 11:20:08.906 254904 DEBUG oslo_concurrency.lockutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:20:08 np0005542249 nova_compute[254900]: 2025-12-02 11:20:08.918 254904 DEBUG nova.virt.hardware [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  2 06:20:08 np0005542249 nova_compute[254900]: 2025-12-02 11:20:08.918 254904 INFO nova.compute.claims [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Claim successful on node compute-0.ctlplane.example.com
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.064 254904 DEBUG oslo_concurrency.processutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.096 254904 DEBUG oslo_concurrency.processutils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7287d707-15a8-4a81-b795-38db84b35b54/disk.config 7287d707-15a8-4a81-b795-38db84b35b54_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.227s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.098 254904 INFO nova.virt.libvirt.driver [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Deleting local config drive /var/lib/nova/instances/7287d707-15a8-4a81-b795-38db84b35b54/disk.config because it was imported into RBD.
Dec  2 06:20:09 np0005542249 systemd-machined[216222]: New machine qemu-6-instance-00000006.
Dec  2 06:20:09 np0005542249 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Dec  2 06:20:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:20:09 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2044754638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.564 254904 DEBUG oslo_concurrency.processutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.569 254904 DEBUG nova.compute.provider_tree [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.584 254904 DEBUG nova.scheduler.client.report [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.588 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674409.5880544, 7287d707-15a8-4a81-b795-38db84b35b54 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.588 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] VM Resumed (Lifecycle Event)
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.589 254904 DEBUG nova.compute.manager [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.590 254904 DEBUG nova.virt.libvirt.driver [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.593 254904 INFO nova.virt.libvirt.driver [-] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Instance spawned successfully.
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.593 254904 DEBUG nova.virt.libvirt.driver [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.616 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.618 254904 DEBUG oslo_concurrency.lockutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.619 254904 DEBUG nova.compute.manager [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.623 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.627 254904 DEBUG nova.virt.libvirt.driver [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.627 254904 DEBUG nova.virt.libvirt.driver [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.628 254904 DEBUG nova.virt.libvirt.driver [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.628 254904 DEBUG nova.virt.libvirt.driver [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.628 254904 DEBUG nova.virt.libvirt.driver [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.629 254904 DEBUG nova.virt.libvirt.driver [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.657 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.658 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674409.5883605, 7287d707-15a8-4a81-b795-38db84b35b54 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.658 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] VM Started (Lifecycle Event)
Dec  2 06:20:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Dec  2 06:20:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Dec  2 06:20:09 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.698 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.702 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.712 254904 DEBUG nova.compute.manager [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.713 254904 DEBUG nova.network.neutron [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.733 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.738 254904 INFO nova.virt.libvirt.driver [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.748 254904 INFO nova.compute.manager [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Took 4.25 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.749 254904 DEBUG nova.compute.manager [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.758 254904 DEBUG nova.compute.manager [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.832 254904 INFO nova.compute.manager [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Took 5.35 seconds to build instance.#033[00m
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.853 254904 DEBUG oslo_concurrency.lockutils [None req-f07dba48-9a85-4fc2-89fd-722257693415 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Lock "7287d707-15a8-4a81-b795-38db84b35b54" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.886 254904 DEBUG nova.compute.manager [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.887 254904 DEBUG nova.virt.libvirt.driver [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.887 254904 INFO nova.virt.libvirt.driver [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Creating image(s)#033[00m
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.910 254904 DEBUG nova.storage.rbd_utils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] rbd image 7bedde1c-9243-4d63-b574-154d2b7e78ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:20:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1129: 321 pgs: 321 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 91 KiB/s rd, 58 MiB/s wr, 145 op/s
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.939 254904 DEBUG nova.storage.rbd_utils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] rbd image 7bedde1c-9243-4d63-b574-154d2b7e78ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.970 254904 DEBUG nova.storage.rbd_utils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] rbd image 7bedde1c-9243-4d63-b574-154d2b7e78ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:20:09 np0005542249 nova_compute[254900]: 2025-12-02 11:20:09.975 254904 DEBUG oslo_concurrency.processutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.055 254904 DEBUG oslo_concurrency.processutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.056 254904 DEBUG oslo_concurrency.lockutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Acquiring lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.057 254904 DEBUG oslo_concurrency.lockutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.057 254904 DEBUG oslo_concurrency.lockutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.080 254904 DEBUG nova.storage.rbd_utils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] rbd image 7bedde1c-9243-4d63-b574-154d2b7e78ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.084 254904 DEBUG oslo_concurrency.processutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 7bedde1c-9243-4d63-b574-154d2b7e78ef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.177 254904 DEBUG nova.policy [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '382055aacd254f8bb9b170628992619d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4beaae6889da4e57bb304963bae13143', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.223 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.402 254904 DEBUG oslo_concurrency.processutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 7bedde1c-9243-4d63-b574-154d2b7e78ef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.462 254904 DEBUG oslo_concurrency.lockutils [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Acquiring lock "7287d707-15a8-4a81-b795-38db84b35b54" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.463 254904 DEBUG oslo_concurrency.lockutils [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Lock "7287d707-15a8-4a81-b795-38db84b35b54" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.463 254904 DEBUG oslo_concurrency.lockutils [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Acquiring lock "7287d707-15a8-4a81-b795-38db84b35b54-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.464 254904 DEBUG oslo_concurrency.lockutils [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Lock "7287d707-15a8-4a81-b795-38db84b35b54-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.464 254904 DEBUG oslo_concurrency.lockutils [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Lock "7287d707-15a8-4a81-b795-38db84b35b54-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.465 254904 INFO nova.compute.manager [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Terminating instance#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.466 254904 DEBUG oslo_concurrency.lockutils [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Acquiring lock "refresh_cache-7287d707-15a8-4a81-b795-38db84b35b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.466 254904 DEBUG oslo_concurrency.lockutils [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Acquired lock "refresh_cache-7287d707-15a8-4a81-b795-38db84b35b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.466 254904 DEBUG nova.network.neutron [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.471 254904 DEBUG nova.storage.rbd_utils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] resizing rbd image 7bedde1c-9243-4d63-b574-154d2b7e78ef_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.555 254904 DEBUG nova.objects.instance [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lazy-loading 'migration_context' on Instance uuid 7bedde1c-9243-4d63-b574-154d2b7e78ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.587 254904 DEBUG nova.virt.libvirt.driver [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.587 254904 DEBUG nova.virt.libvirt.driver [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Ensure instance console log exists: /var/lib/nova/instances/7bedde1c-9243-4d63-b574-154d2b7e78ef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.587 254904 DEBUG oslo_concurrency.lockutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.588 254904 DEBUG oslo_concurrency.lockutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.588 254904 DEBUG oslo_concurrency.lockutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.737 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.777 254904 WARNING nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] While synchronizing instance power states, found 2 instances in the database and 1 instances on the hypervisor.#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.778 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Triggering sync for uuid 7287d707-15a8-4a81-b795-38db84b35b54 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.778 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Triggering sync for uuid 7bedde1c-9243-4d63-b574-154d2b7e78ef _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.778 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "7287d707-15a8-4a81-b795-38db84b35b54" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.778 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "7bedde1c-9243-4d63-b574-154d2b7e78ef" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:20:10 np0005542249 nova_compute[254900]: 2025-12-02 11:20:10.857 254904 DEBUG nova.network.neutron [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 06:20:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:20:10 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2217545062' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:20:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:20:10 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2217545062' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:20:11 np0005542249 nova_compute[254900]: 2025-12-02 11:20:11.193 254904 DEBUG nova.network.neutron [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:20:11 np0005542249 nova_compute[254900]: 2025-12-02 11:20:11.214 254904 DEBUG oslo_concurrency.lockutils [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Releasing lock "refresh_cache-7287d707-15a8-4a81-b795-38db84b35b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:20:11 np0005542249 nova_compute[254900]: 2025-12-02 11:20:11.216 254904 DEBUG nova.compute.manager [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:20:11 np0005542249 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Dec  2 06:20:11 np0005542249 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 2.062s CPU time.
Dec  2 06:20:11 np0005542249 nova_compute[254900]: 2025-12-02 11:20:11.279 254904 DEBUG nova.network.neutron [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Successfully created port: 30bc1a71-f71d-41f4-a599-eaa26706d00c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 06:20:11 np0005542249 systemd-machined[216222]: Machine qemu-6-instance-00000006 terminated.
Dec  2 06:20:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:20:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Dec  2 06:20:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Dec  2 06:20:11 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Dec  2 06:20:11 np0005542249 nova_compute[254900]: 2025-12-02 11:20:11.443 254904 INFO nova.virt.libvirt.driver [-] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Instance destroyed successfully.#033[00m
Dec  2 06:20:11 np0005542249 nova_compute[254900]: 2025-12-02 11:20:11.443 254904 DEBUG nova.objects.instance [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Lazy-loading 'resources' on Instance uuid 7287d707-15a8-4a81-b795-38db84b35b54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:20:11 np0005542249 nova_compute[254900]: 2025-12-02 11:20:11.901 254904 INFO nova.virt.libvirt.driver [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Deleting instance files /var/lib/nova/instances/7287d707-15a8-4a81-b795-38db84b35b54_del#033[00m
Dec  2 06:20:11 np0005542249 nova_compute[254900]: 2025-12-02 11:20:11.902 254904 INFO nova.virt.libvirt.driver [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Deletion of /var/lib/nova/instances/7287d707-15a8-4a81-b795-38db84b35b54_del complete#033[00m
Dec  2 06:20:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1131: 321 pgs: 321 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 36 MiB/s wr, 108 op/s
Dec  2 06:20:11 np0005542249 nova_compute[254900]: 2025-12-02 11:20:11.959 254904 INFO nova.compute.manager [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:20:11 np0005542249 nova_compute[254900]: 2025-12-02 11:20:11.960 254904 DEBUG oslo.service.loopingcall [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:20:11 np0005542249 nova_compute[254900]: 2025-12-02 11:20:11.961 254904 DEBUG nova.compute.manager [-] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:20:11 np0005542249 nova_compute[254900]: 2025-12-02 11:20:11.961 254904 DEBUG nova.network.neutron [-] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:20:12 np0005542249 podman[269194]: 2025-12-02 11:20:12.027921353 +0000 UTC m=+0.090593621 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  2 06:20:12 np0005542249 nova_compute[254900]: 2025-12-02 11:20:12.122 254904 DEBUG nova.network.neutron [-] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 06:20:12 np0005542249 nova_compute[254900]: 2025-12-02 11:20:12.146 254904 DEBUG nova.network.neutron [-] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:20:12 np0005542249 nova_compute[254900]: 2025-12-02 11:20:12.162 254904 INFO nova.compute.manager [-] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Took 0.20 seconds to deallocate network for instance.#033[00m
Dec  2 06:20:12 np0005542249 nova_compute[254900]: 2025-12-02 11:20:12.220 254904 DEBUG oslo_concurrency.lockutils [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:20:12 np0005542249 nova_compute[254900]: 2025-12-02 11:20:12.221 254904 DEBUG oslo_concurrency.lockutils [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:20:12 np0005542249 nova_compute[254900]: 2025-12-02 11:20:12.297 254904 DEBUG oslo_concurrency.processutils [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:20:12 np0005542249 nova_compute[254900]: 2025-12-02 11:20:12.383 254904 DEBUG nova.network.neutron [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Successfully updated port: 30bc1a71-f71d-41f4-a599-eaa26706d00c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 06:20:12 np0005542249 nova_compute[254900]: 2025-12-02 11:20:12.415 254904 DEBUG oslo_concurrency.lockutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Acquiring lock "refresh_cache-7bedde1c-9243-4d63-b574-154d2b7e78ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:20:12 np0005542249 nova_compute[254900]: 2025-12-02 11:20:12.416 254904 DEBUG oslo_concurrency.lockutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Acquired lock "refresh_cache-7bedde1c-9243-4d63-b574-154d2b7e78ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:20:12 np0005542249 nova_compute[254900]: 2025-12-02 11:20:12.416 254904 DEBUG nova.network.neutron [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 06:20:12 np0005542249 nova_compute[254900]: 2025-12-02 11:20:12.477 254904 DEBUG nova.compute.manager [req-e853baaa-0d90-487c-af3c-7c911c32a548 req-9e7050d6-a276-4d2b-bd64-70d5031e2f27 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Received event network-changed-30bc1a71-f71d-41f4-a599-eaa26706d00c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:20:12 np0005542249 nova_compute[254900]: 2025-12-02 11:20:12.477 254904 DEBUG nova.compute.manager [req-e853baaa-0d90-487c-af3c-7c911c32a548 req-9e7050d6-a276-4d2b-bd64-70d5031e2f27 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Refreshing instance network info cache due to event network-changed-30bc1a71-f71d-41f4-a599-eaa26706d00c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:20:12 np0005542249 nova_compute[254900]: 2025-12-02 11:20:12.479 254904 DEBUG oslo_concurrency.lockutils [req-e853baaa-0d90-487c-af3c-7c911c32a548 req-9e7050d6-a276-4d2b-bd64-70d5031e2f27 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-7bedde1c-9243-4d63-b574-154d2b7e78ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:20:12 np0005542249 nova_compute[254900]: 2025-12-02 11:20:12.650 254904 DEBUG nova.network.neutron [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 06:20:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:20:12 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1740069191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:20:12 np0005542249 nova_compute[254900]: 2025-12-02 11:20:12.859 254904 DEBUG oslo_concurrency.processutils [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:20:12 np0005542249 nova_compute[254900]: 2025-12-02 11:20:12.865 254904 DEBUG nova.compute.provider_tree [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:20:12 np0005542249 nova_compute[254900]: 2025-12-02 11:20:12.879 254904 DEBUG nova.scheduler.client.report [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:20:12 np0005542249 nova_compute[254900]: 2025-12-02 11:20:12.905 254904 DEBUG oslo_concurrency.lockutils [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:20:12 np0005542249 nova_compute[254900]: 2025-12-02 11:20:12.933 254904 INFO nova.scheduler.client.report [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Deleted allocations for instance 7287d707-15a8-4a81-b795-38db84b35b54#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.004 254904 DEBUG oslo_concurrency.lockutils [None req-6bcaac94-5e76-4954-a02a-fea820552485 b28c2187420b4a3f82756cc3a55fa30e dc995559349d422db9731a0bc10a9115 - - default default] Lock "7287d707-15a8-4a81-b795-38db84b35b54" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.006 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "7287d707-15a8-4a81-b795-38db84b35b54" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 2.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.007 254904 INFO nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.007 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "7287d707-15a8-4a81-b795-38db84b35b54" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.013 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.673 254904 DEBUG nova.network.neutron [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Updating instance_info_cache with network_info: [{"id": "30bc1a71-f71d-41f4-a599-eaa26706d00c", "address": "fa:16:3e:2c:3d:45", "network": {"id": "1468b032-015a-4fb8-a7c5-2a3b8aab9149", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-555435799-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4beaae6889da4e57bb304963bae13143", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30bc1a71-f7", "ovs_interfaceid": "30bc1a71-f71d-41f4-a599-eaa26706d00c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.699 254904 DEBUG oslo_concurrency.lockutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Releasing lock "refresh_cache-7bedde1c-9243-4d63-b574-154d2b7e78ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.699 254904 DEBUG nova.compute.manager [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Instance network_info: |[{"id": "30bc1a71-f71d-41f4-a599-eaa26706d00c", "address": "fa:16:3e:2c:3d:45", "network": {"id": "1468b032-015a-4fb8-a7c5-2a3b8aab9149", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-555435799-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4beaae6889da4e57bb304963bae13143", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30bc1a71-f7", "ovs_interfaceid": "30bc1a71-f71d-41f4-a599-eaa26706d00c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.699 254904 DEBUG oslo_concurrency.lockutils [req-e853baaa-0d90-487c-af3c-7c911c32a548 req-9e7050d6-a276-4d2b-bd64-70d5031e2f27 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-7bedde1c-9243-4d63-b574-154d2b7e78ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.700 254904 DEBUG nova.network.neutron [req-e853baaa-0d90-487c-af3c-7c911c32a548 req-9e7050d6-a276-4d2b-bd64-70d5031e2f27 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Refreshing network info cache for port 30bc1a71-f71d-41f4-a599-eaa26706d00c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.703 254904 DEBUG nova.virt.libvirt.driver [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Start _get_guest_xml network_info=[{"id": "30bc1a71-f71d-41f4-a599-eaa26706d00c", "address": "fa:16:3e:2c:3d:45", "network": {"id": "1468b032-015a-4fb8-a7c5-2a3b8aab9149", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-555435799-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4beaae6889da4e57bb304963bae13143", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30bc1a71-f7", "ovs_interfaceid": "30bc1a71-f71d-41f4-a599-eaa26706d00c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'image_id': '5a40f66c-ab43-47dd-9880-e59f9fa2c60e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.709 254904 WARNING nova.virt.libvirt.driver [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.714 254904 DEBUG nova.virt.libvirt.host [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.715 254904 DEBUG nova.virt.libvirt.host [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.717 254904 DEBUG nova.virt.libvirt.host [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.718 254904 DEBUG nova.virt.libvirt.host [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.718 254904 DEBUG nova.virt.libvirt.driver [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.718 254904 DEBUG nova.virt.hardware [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.719 254904 DEBUG nova.virt.hardware [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.719 254904 DEBUG nova.virt.hardware [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.719 254904 DEBUG nova.virt.hardware [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.720 254904 DEBUG nova.virt.hardware [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.720 254904 DEBUG nova.virt.hardware [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.720 254904 DEBUG nova.virt.hardware [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.720 254904 DEBUG nova.virt.hardware [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.720 254904 DEBUG nova.virt.hardware [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.720 254904 DEBUG nova.virt.hardware [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.721 254904 DEBUG nova.virt.hardware [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:20:13 np0005542249 nova_compute[254900]: 2025-12-02 11:20:13.724 254904 DEBUG oslo_concurrency.processutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:20:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1132: 321 pgs: 321 active+clean; 496 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 15 MiB/s wr, 299 op/s
Dec  2 06:20:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:20:14 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1332944933' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.184 254904 DEBUG oslo_concurrency.processutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.219 254904 DEBUG nova.storage.rbd_utils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] rbd image 7bedde1c-9243-4d63-b574-154d2b7e78ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.224 254904 DEBUG oslo_concurrency.processutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.442 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.622 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:20:14 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1181176320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.686 254904 DEBUG oslo_concurrency.processutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.689 254904 DEBUG nova.virt.libvirt.vif [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:20:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-166470318',display_name='tempest-VolumesBackupsTest-instance-166470318',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-166470318',id=7,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJziU88L3iquFvjPgSOTyxd8RyTwABv58QhKI/jQBGJ54tLDGdc0mEfjzrnF83TzJsQhvdxGNgZurLJMr9epI63g8qPyQiU643zIfxVH4sr++AbUveV1NoDmmqkFt+Gw/Q==',key_name='tempest-keypair-1500356688',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4beaae6889da4e57bb304963bae13143',ramdisk_id='',reservation_id='r-co0ky5dt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-458528599',owner_user_name='tempest-VolumesBackupsTest-458528599-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:20:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='382055aacd254f8bb9b170628992619d',uuid=7bedde1c-9243-4d63-b574-154d2b7e78ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30bc1a71-f71d-41f4-a599-eaa26706d00c", "address": "fa:16:3e:2c:3d:45", "network": {"id": "1468b032-015a-4fb8-a7c5-2a3b8aab9149", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-555435799-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4beaae6889da4e57bb304963bae13143", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30bc1a71-f7", "ovs_interfaceid": "30bc1a71-f71d-41f4-a599-eaa26706d00c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.689 254904 DEBUG nova.network.os_vif_util [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Converting VIF {"id": "30bc1a71-f71d-41f4-a599-eaa26706d00c", "address": "fa:16:3e:2c:3d:45", "network": {"id": "1468b032-015a-4fb8-a7c5-2a3b8aab9149", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-555435799-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4beaae6889da4e57bb304963bae13143", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30bc1a71-f7", "ovs_interfaceid": "30bc1a71-f71d-41f4-a599-eaa26706d00c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.691 254904 DEBUG nova.network.os_vif_util [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:3d:45,bridge_name='br-int',has_traffic_filtering=True,id=30bc1a71-f71d-41f4-a599-eaa26706d00c,network=Network(1468b032-015a-4fb8-a7c5-2a3b8aab9149),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30bc1a71-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.693 254904 DEBUG nova.objects.instance [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7bedde1c-9243-4d63-b574-154d2b7e78ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:20:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.714 254904 DEBUG nova.virt.libvirt.driver [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:20:14 np0005542249 nova_compute[254900]:  <uuid>7bedde1c-9243-4d63-b574-154d2b7e78ef</uuid>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:  <name>instance-00000007</name>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <nova:name>tempest-VolumesBackupsTest-instance-166470318</nova:name>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:20:13</nova:creationTime>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:20:14 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:        <nova:user uuid="382055aacd254f8bb9b170628992619d">tempest-VolumesBackupsTest-458528599-project-member</nova:user>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:        <nova:project uuid="4beaae6889da4e57bb304963bae13143">tempest-VolumesBackupsTest-458528599</nova:project>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <nova:root type="image" uuid="5a40f66c-ab43-47dd-9880-e59f9fa2c60e"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:        <nova:port uuid="30bc1a71-f71d-41f4-a599-eaa26706d00c">
Dec  2 06:20:14 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <entry name="serial">7bedde1c-9243-4d63-b574-154d2b7e78ef</entry>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <entry name="uuid">7bedde1c-9243-4d63-b574-154d2b7e78ef</entry>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/7bedde1c-9243-4d63-b574-154d2b7e78ef_disk">
Dec  2 06:20:14 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:20:14 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/7bedde1c-9243-4d63-b574-154d2b7e78ef_disk.config">
Dec  2 06:20:14 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:20:14 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:2c:3d:45"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <target dev="tap30bc1a71-f7"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/7bedde1c-9243-4d63-b574-154d2b7e78ef/console.log" append="off"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:20:14 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:20:14 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:20:14 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:20:14 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.716 254904 DEBUG nova.compute.manager [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Preparing to wait for external event network-vif-plugged-30bc1a71-f71d-41f4-a599-eaa26706d00c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.717 254904 DEBUG oslo_concurrency.lockutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Acquiring lock "7bedde1c-9243-4d63-b574-154d2b7e78ef-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.717 254904 DEBUG oslo_concurrency.lockutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "7bedde1c-9243-4d63-b574-154d2b7e78ef-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.717 254904 DEBUG oslo_concurrency.lockutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "7bedde1c-9243-4d63-b574-154d2b7e78ef-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:20:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.718 254904 DEBUG nova.virt.libvirt.vif [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:20:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-166470318',display_name='tempest-VolumesBackupsTest-instance-166470318',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-166470318',id=7,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJziU88L3iquFvjPgSOTyxd8RyTwABv58QhKI/jQBGJ54tLDGdc0mEfjzrnF83TzJsQhvdxGNgZurLJMr9epI63g8qPyQiU643zIfxVH4sr++AbUveV1NoDmmqkFt+Gw/Q==',key_name='tempest-keypair-1500356688',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4beaae6889da4e57bb304963bae13143',ramdisk_id='',reservation_id='r-co0ky5dt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-458528599',owner_user_name='tempest-VolumesBackupsTest-458528599-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:20:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='382055aacd254f8bb9b170628992619d',uuid=7bedde1c-9243-4d63-b574-154d2b7e78ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30bc1a71-f71d-41f4-a599-eaa26706d00c", "address": "fa:16:3e:2c:3d:45", "network": {"id": "1468b032-015a-4fb8-a7c5-2a3b8aab9149", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-555435799-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4beaae6889da4e57bb304963bae13143", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30bc1a71-f7", "ovs_interfaceid": "30bc1a71-f71d-41f4-a599-eaa26706d00c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.719 254904 DEBUG nova.network.os_vif_util [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Converting VIF {"id": "30bc1a71-f71d-41f4-a599-eaa26706d00c", "address": "fa:16:3e:2c:3d:45", "network": {"id": "1468b032-015a-4fb8-a7c5-2a3b8aab9149", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-555435799-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4beaae6889da4e57bb304963bae13143", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30bc1a71-f7", "ovs_interfaceid": "30bc1a71-f71d-41f4-a599-eaa26706d00c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.720 254904 DEBUG nova.network.os_vif_util [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:3d:45,bridge_name='br-int',has_traffic_filtering=True,id=30bc1a71-f71d-41f4-a599-eaa26706d00c,network=Network(1468b032-015a-4fb8-a7c5-2a3b8aab9149),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30bc1a71-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.720 254904 DEBUG os_vif [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:3d:45,bridge_name='br-int',has_traffic_filtering=True,id=30bc1a71-f71d-41f4-a599-eaa26706d00c,network=Network(1468b032-015a-4fb8-a7c5-2a3b8aab9149),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30bc1a71-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.721 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.721 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.722 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.724 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.725 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30bc1a71-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.725 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap30bc1a71-f7, col_values=(('external_ids', {'iface-id': '30bc1a71-f71d-41f4-a599-eaa26706d00c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2c:3d:45', 'vm-uuid': '7bedde1c-9243-4d63-b574-154d2b7e78ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:20:14 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.765 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:14 np0005542249 NetworkManager[48987]: <info>  [1764674414.7667] manager: (tap30bc1a71-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.770 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.776 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.777 254904 INFO os_vif [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:3d:45,bridge_name='br-int',has_traffic_filtering=True,id=30bc1a71-f71d-41f4-a599-eaa26706d00c,network=Network(1468b032-015a-4fb8-a7c5-2a3b8aab9149),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30bc1a71-f7')#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.829 254904 DEBUG nova.virt.libvirt.driver [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.830 254904 DEBUG nova.virt.libvirt.driver [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.830 254904 DEBUG nova.virt.libvirt.driver [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] No VIF found with MAC fa:16:3e:2c:3d:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.831 254904 INFO nova.virt.libvirt.driver [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Using config drive#033[00m
Dec  2 06:20:14 np0005542249 nova_compute[254900]: 2025-12-02 11:20:14.853 254904 DEBUG nova.storage.rbd_utils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] rbd image 7bedde1c-9243-4d63-b574-154d2b7e78ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:20:15 np0005542249 nova_compute[254900]: 2025-12-02 11:20:15.402 254904 DEBUG nova.network.neutron [req-e853baaa-0d90-487c-af3c-7c911c32a548 req-9e7050d6-a276-4d2b-bd64-70d5031e2f27 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Updated VIF entry in instance network info cache for port 30bc1a71-f71d-41f4-a599-eaa26706d00c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:20:15 np0005542249 nova_compute[254900]: 2025-12-02 11:20:15.403 254904 DEBUG nova.network.neutron [req-e853baaa-0d90-487c-af3c-7c911c32a548 req-9e7050d6-a276-4d2b-bd64-70d5031e2f27 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Updating instance_info_cache with network_info: [{"id": "30bc1a71-f71d-41f4-a599-eaa26706d00c", "address": "fa:16:3e:2c:3d:45", "network": {"id": "1468b032-015a-4fb8-a7c5-2a3b8aab9149", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-555435799-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4beaae6889da4e57bb304963bae13143", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30bc1a71-f7", "ovs_interfaceid": "30bc1a71-f71d-41f4-a599-eaa26706d00c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:20:15 np0005542249 nova_compute[254900]: 2025-12-02 11:20:15.423 254904 DEBUG oslo_concurrency.lockutils [req-e853baaa-0d90-487c-af3c-7c911c32a548 req-9e7050d6-a276-4d2b-bd64-70d5031e2f27 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-7bedde1c-9243-4d63-b574-154d2b7e78ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:20:15 np0005542249 nova_compute[254900]: 2025-12-02 11:20:15.530 254904 INFO nova.virt.libvirt.driver [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Creating config drive at /var/lib/nova/instances/7bedde1c-9243-4d63-b574-154d2b7e78ef/disk.config#033[00m
Dec  2 06:20:15 np0005542249 nova_compute[254900]: 2025-12-02 11:20:15.543 254904 DEBUG oslo_concurrency.processutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7bedde1c-9243-4d63-b574-154d2b7e78ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpynlsukx3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:20:15 np0005542249 nova_compute[254900]: 2025-12-02 11:20:15.696 254904 DEBUG oslo_concurrency.processutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7bedde1c-9243-4d63-b574-154d2b7e78ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpynlsukx3" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:20:15 np0005542249 nova_compute[254900]: 2025-12-02 11:20:15.747 254904 DEBUG nova.storage.rbd_utils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] rbd image 7bedde1c-9243-4d63-b574-154d2b7e78ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:20:15 np0005542249 nova_compute[254900]: 2025-12-02 11:20:15.753 254904 DEBUG oslo_concurrency.processutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7bedde1c-9243-4d63-b574-154d2b7e78ef/disk.config 7bedde1c-9243-4d63-b574-154d2b7e78ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:20:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1134: 321 pgs: 321 active+clean; 134 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.4 MiB/s wr, 323 op/s
Dec  2 06:20:15 np0005542249 nova_compute[254900]: 2025-12-02 11:20:15.962 254904 DEBUG oslo_concurrency.processutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7bedde1c-9243-4d63-b574-154d2b7e78ef/disk.config 7bedde1c-9243-4d63-b574-154d2b7e78ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:20:15 np0005542249 nova_compute[254900]: 2025-12-02 11:20:15.964 254904 INFO nova.virt.libvirt.driver [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Deleting local config drive /var/lib/nova/instances/7bedde1c-9243-4d63-b574-154d2b7e78ef/disk.config because it was imported into RBD.#033[00m
Dec  2 06:20:16 np0005542249 kernel: tap30bc1a71-f7: entered promiscuous mode
Dec  2 06:20:16 np0005542249 NetworkManager[48987]: <info>  [1764674416.0398] manager: (tap30bc1a71-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Dec  2 06:20:16 np0005542249 nova_compute[254900]: 2025-12-02 11:20:16.039 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:16 np0005542249 ovn_controller[153849]: 2025-12-02T11:20:16Z|00076|binding|INFO|Claiming lport 30bc1a71-f71d-41f4-a599-eaa26706d00c for this chassis.
Dec  2 06:20:16 np0005542249 ovn_controller[153849]: 2025-12-02T11:20:16Z|00077|binding|INFO|30bc1a71-f71d-41f4-a599-eaa26706d00c: Claiming fa:16:3e:2c:3d:45 10.100.0.9
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.058 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:3d:45 10.100.0.9'], port_security=['fa:16:3e:2c:3d:45 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7bedde1c-9243-4d63-b574-154d2b7e78ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1468b032-015a-4fb8-a7c5-2a3b8aab9149', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4beaae6889da4e57bb304963bae13143', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8e7537ea-9e59-4980-91e7-bcb71205eead', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b54a1343-251a-464a-be0b-78e322a858d0, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=30bc1a71-f71d-41f4-a599-eaa26706d00c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.061 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 30bc1a71-f71d-41f4-a599-eaa26706d00c in datapath 1468b032-015a-4fb8-a7c5-2a3b8aab9149 bound to our chassis#033[00m
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.063 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1468b032-015a-4fb8-a7c5-2a3b8aab9149#033[00m
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.084 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[a06d6c4b-2eb6-4cbd-a6c9-7e239bea0936]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.086 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1468b032-01 in ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 06:20:16 np0005542249 systemd-machined[216222]: New machine qemu-7-instance-00000007.
Dec  2 06:20:16 np0005542249 systemd-udevd[269372]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.092 262398 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1468b032-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.092 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[fab304e9-e9e7-4ab7-99c6-015519b1cf1c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.094 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[50c2f5ca-01b8-4718-8b55-94986587f513]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:16 np0005542249 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Dec  2 06:20:16 np0005542249 NetworkManager[48987]: <info>  [1764674416.1145] device (tap30bc1a71-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:20:16 np0005542249 NetworkManager[48987]: <info>  [1764674416.1161] device (tap30bc1a71-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.126 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[b4255878-83a5-459e-8c5c-448513d1f1e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.161 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[ec51f014-a5e9-49b1-8ee4-91f44ac8d7b2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:16 np0005542249 nova_compute[254900]: 2025-12-02 11:20:16.169 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:16 np0005542249 nova_compute[254900]: 2025-12-02 11:20:16.179 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:16 np0005542249 ovn_controller[153849]: 2025-12-02T11:20:16Z|00078|binding|INFO|Setting lport 30bc1a71-f71d-41f4-a599-eaa26706d00c ovn-installed in OVS
Dec  2 06:20:16 np0005542249 ovn_controller[153849]: 2025-12-02T11:20:16Z|00079|binding|INFO|Setting lport 30bc1a71-f71d-41f4-a599-eaa26706d00c up in Southbound
Dec  2 06:20:16 np0005542249 nova_compute[254900]: 2025-12-02 11:20:16.182 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.212 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[9ea91e89-78bd-47d3-b800-c287a424edd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.220 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[c395c0da-7a5c-409d-a652-b97747a07aa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:16 np0005542249 NetworkManager[48987]: <info>  [1764674416.2223] manager: (tap1468b032-00): new Veth device (/org/freedesktop/NetworkManager/Devices/50)
Dec  2 06:20:16 np0005542249 systemd-udevd[269375]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.267 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[d2e1c3bf-2af3-4461-90cf-2bd6c94e4082]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.272 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[c2be888c-ef98-42c5-ae4b-d5baa2e387bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:16 np0005542249 NetworkManager[48987]: <info>  [1764674416.3048] device (tap1468b032-00): carrier: link connected
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.314 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[16115317-f381-4862-bf2f-23c242292bed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.342 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[7eceabf3-6eb1-444a-92eb-11b6cce2ae9e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1468b032-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:08:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460912, 'reachable_time': 39575, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269404, 'error': None, 'target': 'ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.366 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[69f807ad-0503-4177-9b93-34cb375c8131]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef3:83e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460912, 'tstamp': 460912}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269405, 'error': None, 'target': 'ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.390 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[0e3d9b61-ff18-40f9-b943-569ca9e7fede]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1468b032-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:08:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460912, 'reachable_time': 39575, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269406, 'error': None, 'target': 'ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.434 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[403ce3ca-7e16-4e2a-bcb5-bee198026a72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.527 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[7f5c4650-ca84-4741-99db-790ad091b20a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.529 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1468b032-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.530 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.530 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1468b032-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:20:16 np0005542249 kernel: tap1468b032-00: entered promiscuous mode
Dec  2 06:20:16 np0005542249 nova_compute[254900]: 2025-12-02 11:20:16.533 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:16 np0005542249 NetworkManager[48987]: <info>  [1764674416.5378] manager: (tap1468b032-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.542 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1468b032-00, col_values=(('external_ids', {'iface-id': '25616577-ef53-4f7d-aa59-a58ea4a0e3b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:20:16 np0005542249 ovn_controller[153849]: 2025-12-02T11:20:16Z|00080|binding|INFO|Releasing lport 25616577-ef53-4f7d-aa59-a58ea4a0e3b2 from this chassis (sb_readonly=0)
Dec  2 06:20:16 np0005542249 nova_compute[254900]: 2025-12-02 11:20:16.544 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:16 np0005542249 nova_compute[254900]: 2025-12-02 11:20:16.546 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.549 163757 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1468b032-015a-4fb8-a7c5-2a3b8aab9149.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1468b032-015a-4fb8-a7c5-2a3b8aab9149.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.551 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[85c972a7-b59a-4b3f-8fbb-45f08a7422cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.552 163757 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: global
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]:    log         /dev/log local0 debug
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]:    log-tag     haproxy-metadata-proxy-1468b032-015a-4fb8-a7c5-2a3b8aab9149
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]:    user        root
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]:    group       root
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]:    maxconn     1024
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]:    pidfile     /var/lib/neutron/external/pids/1468b032-015a-4fb8-a7c5-2a3b8aab9149.pid.haproxy
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]:    daemon
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: defaults
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]:    log global
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]:    mode http
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]:    option httplog
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]:    option dontlognull
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]:    option http-server-close
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]:    option forwardfor
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]:    retries                 3
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]:    timeout http-request    30s
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]:    timeout connect         30s
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]:    timeout client          32s
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]:    timeout server          32s
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]:    timeout http-keep-alive 30s
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: listen listener
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]:    bind 169.254.169.254:80
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]:    http-request add-header X-OVN-Network-ID 1468b032-015a-4fb8-a7c5-2a3b8aab9149
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 06:20:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:16.553 163757 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149', 'env', 'PROCESS_TAG=haproxy-1468b032-015a-4fb8-a7c5-2a3b8aab9149', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1468b032-015a-4fb8-a7c5-2a3b8aab9149.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 06:20:16 np0005542249 nova_compute[254900]: 2025-12-02 11:20:16.581 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:16 np0005542249 nova_compute[254900]: 2025-12-02 11:20:16.804 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674416.80348, 7bedde1c-9243-4d63-b574-154d2b7e78ef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:20:16 np0005542249 nova_compute[254900]: 2025-12-02 11:20:16.804 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] VM Started (Lifecycle Event)#033[00m
Dec  2 06:20:16 np0005542249 nova_compute[254900]: 2025-12-02 11:20:16.846 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:20:16 np0005542249 nova_compute[254900]: 2025-12-02 11:20:16.855 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674416.8037035, 7bedde1c-9243-4d63-b574-154d2b7e78ef => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:20:16 np0005542249 nova_compute[254900]: 2025-12-02 11:20:16.855 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] VM Paused (Lifecycle Event)#033[00m
Dec  2 06:20:16 np0005542249 nova_compute[254900]: 2025-12-02 11:20:16.872 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:20:16 np0005542249 nova_compute[254900]: 2025-12-02 11:20:16.878 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:20:16 np0005542249 nova_compute[254900]: 2025-12-02 11:20:16.897 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:20:17 np0005542249 podman[269480]: 2025-12-02 11:20:17.058615657 +0000 UTC m=+0.081989765 container create 36038f884934d930b8924d476ebad5734f327dae41d87e13d284765030219e2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:20:17 np0005542249 podman[269480]: 2025-12-02 11:20:17.016260774 +0000 UTC m=+0.039634922 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:20:17 np0005542249 systemd[1]: Started libpod-conmon-36038f884934d930b8924d476ebad5734f327dae41d87e13d284765030219e2a.scope.
Dec  2 06:20:17 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:20:17 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e0802a1f363f85f2753b7c8339ab6985771196885cbd18802ef3eb74ae8a95b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 06:20:17 np0005542249 podman[269480]: 2025-12-02 11:20:17.202905978 +0000 UTC m=+0.226280096 container init 36038f884934d930b8924d476ebad5734f327dae41d87e13d284765030219e2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  2 06:20:17 np0005542249 podman[269480]: 2025-12-02 11:20:17.217725708 +0000 UTC m=+0.241099806 container start 36038f884934d930b8924d476ebad5734f327dae41d87e13d284765030219e2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.251 254904 DEBUG nova.compute.manager [req-679dbd0a-0288-4d11-a751-5a837b5e9d53 req-9cc7e7b3-4d9c-402a-977d-5e7f2a328337 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Received event network-vif-plugged-30bc1a71-f71d-41f4-a599-eaa26706d00c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.252 254904 DEBUG oslo_concurrency.lockutils [req-679dbd0a-0288-4d11-a751-5a837b5e9d53 req-9cc7e7b3-4d9c-402a-977d-5e7f2a328337 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "7bedde1c-9243-4d63-b574-154d2b7e78ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.252 254904 DEBUG oslo_concurrency.lockutils [req-679dbd0a-0288-4d11-a751-5a837b5e9d53 req-9cc7e7b3-4d9c-402a-977d-5e7f2a328337 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "7bedde1c-9243-4d63-b574-154d2b7e78ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.252 254904 DEBUG oslo_concurrency.lockutils [req-679dbd0a-0288-4d11-a751-5a837b5e9d53 req-9cc7e7b3-4d9c-402a-977d-5e7f2a328337 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "7bedde1c-9243-4d63-b574-154d2b7e78ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.253 254904 DEBUG nova.compute.manager [req-679dbd0a-0288-4d11-a751-5a837b5e9d53 req-9cc7e7b3-4d9c-402a-977d-5e7f2a328337 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Processing event network-vif-plugged-30bc1a71-f71d-41f4-a599-eaa26706d00c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.253 254904 DEBUG nova.compute.manager [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.259 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674417.2589934, 7bedde1c-9243-4d63-b574-154d2b7e78ef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.260 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] VM Resumed (Lifecycle Event)#033[00m
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.262 254904 DEBUG nova.virt.libvirt.driver [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.266 254904 INFO nova.virt.libvirt.driver [-] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Instance spawned successfully.#033[00m
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.266 254904 DEBUG nova.virt.libvirt.driver [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 06:20:17 np0005542249 neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149[269495]: [NOTICE]   (269499) : New worker (269501) forked
Dec  2 06:20:17 np0005542249 neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149[269495]: [NOTICE]   (269499) : Loading success.
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.284 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.290 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.295 254904 DEBUG nova.virt.libvirt.driver [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.295 254904 DEBUG nova.virt.libvirt.driver [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.296 254904 DEBUG nova.virt.libvirt.driver [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.296 254904 DEBUG nova.virt.libvirt.driver [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.296 254904 DEBUG nova.virt.libvirt.driver [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.297 254904 DEBUG nova.virt.libvirt.driver [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.308 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.354 254904 INFO nova.compute.manager [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Took 7.47 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.355 254904 DEBUG nova.compute.manager [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.418 254904 INFO nova.compute.manager [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Took 8.61 seconds to build instance.#033[00m
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.435 254904 DEBUG oslo_concurrency.lockutils [None req-ecf94087-bc32-4e27-9f30-65eab17359cd 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "7bedde1c-9243-4d63-b574-154d2b7e78ef" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.436 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "7bedde1c-9243-4d63-b574-154d2b7e78ef" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 6.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.436 254904 INFO nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:20:17 np0005542249 nova_compute[254900]: 2025-12-02 11:20:17.436 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "7bedde1c-9243-4d63-b574-154d2b7e78ef" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:20:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Dec  2 06:20:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Dec  2 06:20:17 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Dec  2 06:20:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1136: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 134 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.3 MiB/s wr, 344 op/s
Dec  2 06:20:18 np0005542249 nova_compute[254900]: 2025-12-02 11:20:18.013 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Dec  2 06:20:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Dec  2 06:20:18 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Dec  2 06:20:19 np0005542249 nova_compute[254900]: 2025-12-02 11:20:19.379 254904 DEBUG nova.compute.manager [req-7200dc96-cc8b-4be6-9120-48298bb061cc req-ad336239-7514-4999-a426-1203ec537293 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Received event network-vif-plugged-30bc1a71-f71d-41f4-a599-eaa26706d00c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:20:19 np0005542249 nova_compute[254900]: 2025-12-02 11:20:19.380 254904 DEBUG oslo_concurrency.lockutils [req-7200dc96-cc8b-4be6-9120-48298bb061cc req-ad336239-7514-4999-a426-1203ec537293 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "7bedde1c-9243-4d63-b574-154d2b7e78ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:20:19 np0005542249 nova_compute[254900]: 2025-12-02 11:20:19.380 254904 DEBUG oslo_concurrency.lockutils [req-7200dc96-cc8b-4be6-9120-48298bb061cc req-ad336239-7514-4999-a426-1203ec537293 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "7bedde1c-9243-4d63-b574-154d2b7e78ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:20:19 np0005542249 nova_compute[254900]: 2025-12-02 11:20:19.381 254904 DEBUG oslo_concurrency.lockutils [req-7200dc96-cc8b-4be6-9120-48298bb061cc req-ad336239-7514-4999-a426-1203ec537293 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "7bedde1c-9243-4d63-b574-154d2b7e78ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:20:19 np0005542249 nova_compute[254900]: 2025-12-02 11:20:19.381 254904 DEBUG nova.compute.manager [req-7200dc96-cc8b-4be6-9120-48298bb061cc req-ad336239-7514-4999-a426-1203ec537293 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] No waiting events found dispatching network-vif-plugged-30bc1a71-f71d-41f4-a599-eaa26706d00c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:20:19 np0005542249 nova_compute[254900]: 2025-12-02 11:20:19.381 254904 WARNING nova.compute.manager [req-7200dc96-cc8b-4be6-9120-48298bb061cc req-ad336239-7514-4999-a426-1203ec537293 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Received unexpected event network-vif-plugged-30bc1a71-f71d-41f4-a599-eaa26706d00c for instance with vm_state active and task_state None.#033[00m
Dec  2 06:20:19 np0005542249 nova_compute[254900]: 2025-12-02 11:20:19.813 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:19.834 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:20:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:19.835 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:20:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:19.836 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:20:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1138: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 134 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 232 op/s
Dec  2 06:20:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Dec  2 06:20:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Dec  2 06:20:20 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Dec  2 06:20:21 np0005542249 ovn_controller[153849]: 2025-12-02T11:20:21Z|00081|binding|INFO|Releasing lport 25616577-ef53-4f7d-aa59-a58ea4a0e3b2 from this chassis (sb_readonly=0)
Dec  2 06:20:21 np0005542249 NetworkManager[48987]: <info>  [1764674421.1312] manager: (patch-br-int-to-provnet-236defd8-cd9f-40fb-be9d-c3108d5fe981): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Dec  2 06:20:21 np0005542249 NetworkManager[48987]: <info>  [1764674421.1324] manager: (patch-provnet-236defd8-cd9f-40fb-be9d-c3108d5fe981-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Dec  2 06:20:21 np0005542249 nova_compute[254900]: 2025-12-02 11:20:21.129 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:21 np0005542249 ovn_controller[153849]: 2025-12-02T11:20:21Z|00082|binding|INFO|Releasing lport 25616577-ef53-4f7d-aa59-a58ea4a0e3b2 from this chassis (sb_readonly=0)
Dec  2 06:20:21 np0005542249 nova_compute[254900]: 2025-12-02 11:20:21.155 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:21 np0005542249 nova_compute[254900]: 2025-12-02 11:20:21.164 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:20:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Dec  2 06:20:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Dec  2 06:20:21 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Dec  2 06:20:21 np0005542249 nova_compute[254900]: 2025-12-02 11:20:21.871 254904 DEBUG nova.compute.manager [req-93958d14-af12-4f9d-a108-f6b00524ce5e req-50547e04-dc36-419c-abfd-301056951643 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Received event network-changed-30bc1a71-f71d-41f4-a599-eaa26706d00c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:20:21 np0005542249 nova_compute[254900]: 2025-12-02 11:20:21.872 254904 DEBUG nova.compute.manager [req-93958d14-af12-4f9d-a108-f6b00524ce5e req-50547e04-dc36-419c-abfd-301056951643 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Refreshing instance network info cache due to event network-changed-30bc1a71-f71d-41f4-a599-eaa26706d00c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:20:21 np0005542249 nova_compute[254900]: 2025-12-02 11:20:21.872 254904 DEBUG oslo_concurrency.lockutils [req-93958d14-af12-4f9d-a108-f6b00524ce5e req-50547e04-dc36-419c-abfd-301056951643 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-7bedde1c-9243-4d63-b574-154d2b7e78ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:20:21 np0005542249 nova_compute[254900]: 2025-12-02 11:20:21.872 254904 DEBUG oslo_concurrency.lockutils [req-93958d14-af12-4f9d-a108-f6b00524ce5e req-50547e04-dc36-419c-abfd-301056951643 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-7bedde1c-9243-4d63-b574-154d2b7e78ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:20:21 np0005542249 nova_compute[254900]: 2025-12-02 11:20:21.873 254904 DEBUG nova.network.neutron [req-93958d14-af12-4f9d-a108-f6b00524ce5e req-50547e04-dc36-419c-abfd-301056951643 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Refreshing network info cache for port 30bc1a71-f71d-41f4-a599-eaa26706d00c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:20:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1141: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 134 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 48 KiB/s wr, 179 op/s
Dec  2 06:20:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:20:22 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3509972140' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:20:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:20:22 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3509972140' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:20:23 np0005542249 nova_compute[254900]: 2025-12-02 11:20:23.012 254904 DEBUG nova.network.neutron [req-93958d14-af12-4f9d-a108-f6b00524ce5e req-50547e04-dc36-419c-abfd-301056951643 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Updated VIF entry in instance network info cache for port 30bc1a71-f71d-41f4-a599-eaa26706d00c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:20:23 np0005542249 nova_compute[254900]: 2025-12-02 11:20:23.013 254904 DEBUG nova.network.neutron [req-93958d14-af12-4f9d-a108-f6b00524ce5e req-50547e04-dc36-419c-abfd-301056951643 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Updating instance_info_cache with network_info: [{"id": "30bc1a71-f71d-41f4-a599-eaa26706d00c", "address": "fa:16:3e:2c:3d:45", "network": {"id": "1468b032-015a-4fb8-a7c5-2a3b8aab9149", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-555435799-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4beaae6889da4e57bb304963bae13143", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30bc1a71-f7", "ovs_interfaceid": "30bc1a71-f71d-41f4-a599-eaa26706d00c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:20:23 np0005542249 nova_compute[254900]: 2025-12-02 11:20:23.015 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:23 np0005542249 nova_compute[254900]: 2025-12-02 11:20:23.035 254904 DEBUG oslo_concurrency.lockutils [req-93958d14-af12-4f9d-a108-f6b00524ce5e req-50547e04-dc36-419c-abfd-301056951643 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-7bedde1c-9243-4d63-b574-154d2b7e78ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:20:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:20:23 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/634573378' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:20:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:20:23 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/634573378' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:20:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1142: 321 pgs: 321 active+clean; 134 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.7 KiB/s wr, 234 op/s
Dec  2 06:20:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:20:24 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3463977824' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:20:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:20:24 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3463977824' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:20:24 np0005542249 nova_compute[254900]: 2025-12-02 11:20:24.853 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:24 np0005542249 ceph-osd[88961]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  2 06:20:24 np0005542249 ceph-osd[88961]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 11K writes, 41K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 3294 syncs, 3.38 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5517 writes, 17K keys, 5517 commit groups, 1.0 writes per commit group, ingest: 9.70 MB, 0.02 MB/s#012Interval WAL: 5517 writes, 2420 syncs, 2.28 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  2 06:20:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1143: 321 pgs: 321 active+clean; 134 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 7.2 KiB/s wr, 243 op/s
Dec  2 06:20:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:20:26
Dec  2 06:20:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:20:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:20:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'images', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', '.mgr']
Dec  2 06:20:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:20:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:20:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:20:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:20:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Dec  2 06:20:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Dec  2 06:20:26 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Dec  2 06:20:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:20:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:20:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:20:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:20:26 np0005542249 nova_compute[254900]: 2025-12-02 11:20:26.441 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764674411.4402585, 7287d707-15a8-4a81-b795-38db84b35b54 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:20:26 np0005542249 nova_compute[254900]: 2025-12-02 11:20:26.441 254904 INFO nova.compute.manager [-] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] VM Stopped (Lifecycle Event)#033[00m
Dec  2 06:20:26 np0005542249 nova_compute[254900]: 2025-12-02 11:20:26.460 254904 DEBUG nova.compute.manager [None req-fbf05c00-aace-4a47-a9f3-7b3392d153e1 - - - - - -] [instance: 7287d707-15a8-4a81-b795-38db84b35b54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:20:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:20:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:20:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:20:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:20:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:20:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:20:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:20:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:20:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:20:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:20:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1145: 321 pgs: 321 active+clean; 134 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 7.9 KiB/s wr, 221 op/s
Dec  2 06:20:28 np0005542249 nova_compute[254900]: 2025-12-02 11:20:28.018 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:29 np0005542249 ovn_controller[153849]: 2025-12-02T11:20:29Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2c:3d:45 10.100.0.9
Dec  2 06:20:29 np0005542249 ovn_controller[153849]: 2025-12-02T11:20:29Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2c:3d:45 10.100.0.9
Dec  2 06:20:29 np0005542249 nova_compute[254900]: 2025-12-02 11:20:29.888 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:20:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1146: 321 pgs: 321 active+clean; 143 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 873 KiB/s wr, 200 op/s
Dec  2 06:20:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:20:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3045824750' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:20:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:20:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3045824750' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:20:30 np0005542249 ceph-osd[89966]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  2 06:20:30 np0005542249 ceph-osd[89966]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 12K writes, 47K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 12K writes, 3681 syncs, 3.39 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5692 writes, 19K keys, 5692 commit groups, 1.0 writes per commit group, ingest: 11.30 MB, 0.02 MB/s#012Interval WAL: 5692 writes, 2435 syncs, 2.34 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  2 06:20:31 np0005542249 podman[269512]: 2025-12-02 11:20:31.023965551 +0000 UTC m=+0.093774585 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  2 06:20:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:20:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1077507589' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:20:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:20:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1077507589' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:20:31 np0005542249 nova_compute[254900]: 2025-12-02 11:20:31.324 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:20:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:20:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1147: 321 pgs: 321 active+clean; 143 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 749 KiB/s wr, 172 op/s
Dec  2 06:20:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:20:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4060007017' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:20:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:20:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4060007017' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:20:33 np0005542249 nova_compute[254900]: 2025-12-02 11:20:33.021 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:20:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1148: 321 pgs: 321 active+clean; 166 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 410 KiB/s rd, 2.5 MiB/s wr, 179 op/s
Dec  2 06:20:34 np0005542249 nova_compute[254900]: 2025-12-02 11:20:34.216 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:20:34 np0005542249 nova_compute[254900]: 2025-12-02 11:20:34.423 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 06:20:34 np0005542249 nova_compute[254900]: 2025-12-02 11:20:34.890 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:20:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1149: 321 pgs: 321 active+clean; 167 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 458 KiB/s rd, 2.6 MiB/s wr, 162 op/s
Dec  2 06:20:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:20:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:20:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:20:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:20:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007571745580191443 of space, bias 1.0, pg target 0.2271523674057433 quantized to 32 (current 32)
Dec  2 06:20:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:20:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00034676012974539297 of space, bias 1.0, pg target 0.1040280389236179 quantized to 32 (current 32)
Dec  2 06:20:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:20:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  2 06:20:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:20:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  2 06:20:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:20:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:20:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:20:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:20:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:20:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:20:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:20:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:20:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:20:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:20:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:20:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:20:36 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  2 06:20:36 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 3095 syncs, 3.47 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5062 writes, 17K keys, 5062 commit groups, 1.0 writes per commit group, ingest: 10.95 MB, 0.02 MB/s#012Interval WAL: 5062 writes, 2214 syncs, 2.29 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  2 06:20:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:20:36 np0005542249 nova_compute[254900]: 2025-12-02 11:20:36.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 06:20:36 np0005542249 nova_compute[254900]: 2025-12-02 11:20:36.382 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  2 06:20:36 np0005542249 nova_compute[254900]: 2025-12-02 11:20:36.382 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  2 06:20:36 np0005542249 nova_compute[254900]: 2025-12-02 11:20:36.743 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "refresh_cache-7bedde1c-9243-4d63-b574-154d2b7e78ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  2 06:20:36 np0005542249 nova_compute[254900]: 2025-12-02 11:20:36.744 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquired lock "refresh_cache-7bedde1c-9243-4d63-b574-154d2b7e78ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  2 06:20:36 np0005542249 nova_compute[254900]: 2025-12-02 11:20:36.744 254904 DEBUG nova.network.neutron [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  2 06:20:36 np0005542249 nova_compute[254900]: 2025-12-02 11:20:36.744 254904 DEBUG nova.objects.instance [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7bedde1c-9243-4d63-b574-154d2b7e78ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  2 06:20:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:20:37.020716) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674437020781, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2400, "num_deletes": 270, "total_data_size": 3346281, "memory_usage": 3403760, "flush_reason": "Manual Compaction"}
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674437044113, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3289889, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21317, "largest_seqno": 23716, "table_properties": {"data_size": 3278910, "index_size": 7089, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 24063, "raw_average_key_size": 21, "raw_value_size": 3256472, "raw_average_value_size": 2902, "num_data_blocks": 310, "num_entries": 1122, "num_filter_entries": 1122, "num_deletions": 270, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764674274, "oldest_key_time": 1764674274, "file_creation_time": 1764674437, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 23444 microseconds, and 8412 cpu microseconds.
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:20:37.044164) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3289889 bytes OK
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:20:37.044187) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:20:37.045699) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:20:37.045787) EVENT_LOG_v1 {"time_micros": 1764674437045782, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:20:37.045822) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3335847, prev total WAL file size 3335847, number of live WAL files 2.
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:20:37.047788) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3212KB)], [50(7437KB)]
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674437047898, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10905801, "oldest_snapshot_seqno": -1}
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5095 keys, 9137058 bytes, temperature: kUnknown
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674437114200, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9137058, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9099269, "index_size": 23919, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12805, "raw_key_size": 125553, "raw_average_key_size": 24, "raw_value_size": 9003636, "raw_average_value_size": 1767, "num_data_blocks": 991, "num_entries": 5095, "num_filter_entries": 5095, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672515, "oldest_key_time": 0, "file_creation_time": 1764674437, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:20:37.114642) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9137058 bytes
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:20:37.116485) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.2 rd, 137.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.3 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 5635, records dropped: 540 output_compression: NoCompression
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:20:37.116517) EVENT_LOG_v1 {"time_micros": 1764674437116501, "job": 26, "event": "compaction_finished", "compaction_time_micros": 66418, "compaction_time_cpu_micros": 27254, "output_level": 6, "num_output_files": 1, "total_output_size": 9137058, "num_input_records": 5635, "num_output_records": 5095, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674437117668, "job": 26, "event": "table_file_deletion", "file_number": 52}
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674437120623, "job": 26, "event": "table_file_deletion", "file_number": 50}
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:20:37.047633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:20:37.120795) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:20:37.120807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:20:37.120810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:20:37.120813) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:20:37 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:20:37.120817) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:20:37 np0005542249 ceph-mgr[75372]: [devicehealth INFO root] Check health
Dec  2 06:20:37 np0005542249 nova_compute[254900]: 2025-12-02 11:20:37.594 254904 DEBUG oslo_concurrency.lockutils [None req-2fc4ed8f-deed-47db-b330-8d283e9c4d7f 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Acquiring lock "7bedde1c-9243-4d63-b574-154d2b7e78ef" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:20:37 np0005542249 nova_compute[254900]: 2025-12-02 11:20:37.595 254904 DEBUG oslo_concurrency.lockutils [None req-2fc4ed8f-deed-47db-b330-8d283e9c4d7f 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "7bedde1c-9243-4d63-b574-154d2b7e78ef" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:20:37 np0005542249 nova_compute[254900]: 2025-12-02 11:20:37.612 254904 DEBUG nova.objects.instance [None req-2fc4ed8f-deed-47db-b330-8d283e9c4d7f 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lazy-loading 'flavor' on Instance uuid 7bedde1c-9243-4d63-b574-154d2b7e78ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  2 06:20:37 np0005542249 nova_compute[254900]: 2025-12-02 11:20:37.641 254904 INFO nova.virt.libvirt.driver [None req-2fc4ed8f-deed-47db-b330-8d283e9c4d7f 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Ignoring supplied device name: /dev/vdb
Dec  2 06:20:37 np0005542249 nova_compute[254900]: 2025-12-02 11:20:37.661 254904 DEBUG oslo_concurrency.lockutils [None req-2fc4ed8f-deed-47db-b330-8d283e9c4d7f 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "7bedde1c-9243-4d63-b574-154d2b7e78ef" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:20:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1151: 321 pgs: 321 active+clean; 167 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 885 KiB/s rd, 2.6 MiB/s wr, 129 op/s
Dec  2 06:20:37 np0005542249 nova_compute[254900]: 2025-12-02 11:20:37.942 254904 DEBUG oslo_concurrency.lockutils [None req-2fc4ed8f-deed-47db-b330-8d283e9c4d7f 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Acquiring lock "7bedde1c-9243-4d63-b574-154d2b7e78ef" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:20:37 np0005542249 nova_compute[254900]: 2025-12-02 11:20:37.943 254904 DEBUG oslo_concurrency.lockutils [None req-2fc4ed8f-deed-47db-b330-8d283e9c4d7f 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "7bedde1c-9243-4d63-b574-154d2b7e78ef" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:20:37 np0005542249 nova_compute[254900]: 2025-12-02 11:20:37.944 254904 INFO nova.compute.manager [None req-2fc4ed8f-deed-47db-b330-8d283e9c4d7f 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Attaching volume e2cfc1f7-8a02-48aa-8dc1-ed0119fc09c1 to /dev/vdb
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.022 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.097 254904 DEBUG nova.network.neutron [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Updating instance_info_cache with network_info: [{"id": "30bc1a71-f71d-41f4-a599-eaa26706d00c", "address": "fa:16:3e:2c:3d:45", "network": {"id": "1468b032-015a-4fb8-a7c5-2a3b8aab9149", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-555435799-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4beaae6889da4e57bb304963bae13143", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30bc1a71-f7", "ovs_interfaceid": "30bc1a71-f71d-41f4-a599-eaa26706d00c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.113 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Releasing lock "refresh_cache-7bedde1c-9243-4d63-b574-154d2b7e78ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.113 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.114 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.114 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.139 254904 DEBUG os_brick.utils [None req-2fc4ed8f-deed-47db-b330-8d283e9c4d7f 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.141 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.153 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.153 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[e5bb24b8-ff9f-4d0a-90fa-4278af1afbe8]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.155 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.164 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.165 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[15f3b251-fbe1-43dd-8a97-6078fbca5e96]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.167 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.176 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.176 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[34422622-54bc-4104-baf2-3e12a93b9252]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.178 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[5753dd6f-c5e8-4a83-b8fa-4d93437a6581]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.179 254904 DEBUG oslo_concurrency.processutils [None req-2fc4ed8f-deed-47db-b330-8d283e9c4d7f 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.203 254904 DEBUG oslo_concurrency.processutils [None req-2fc4ed8f-deed-47db-b330-8d283e9c4d7f 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.208 254904 DEBUG os_brick.initiator.connectors.lightos [None req-2fc4ed8f-deed-47db-b330-8d283e9c4d7f 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.209 254904 DEBUG os_brick.initiator.connectors.lightos [None req-2fc4ed8f-deed-47db-b330-8d283e9c4d7f 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.209 254904 DEBUG os_brick.initiator.connectors.lightos [None req-2fc4ed8f-deed-47db-b330-8d283e9c4d7f 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.210 254904 DEBUG os_brick.utils [None req-2fc4ed8f-deed-47db-b330-8d283e9c4d7f 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] <== get_connector_properties: return (69ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.211 254904 DEBUG nova.virt.block_device [None req-2fc4ed8f-deed-47db-b330-8d283e9c4d7f 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Updating existing volume attachment record: d72f1738-7c23-4731-b6af-efb4f644c1af _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.383 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.384 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.384 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.385 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:20:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:20:38 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/661110612' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:20:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:20:38 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3926319315' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:20:38 np0005542249 nova_compute[254900]: 2025-12-02 11:20:38.992 254904 DEBUG nova.objects.instance [None req-2fc4ed8f-deed-47db-b330-8d283e9c4d7f 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lazy-loading 'flavor' on Instance uuid 7bedde1c-9243-4d63-b574-154d2b7e78ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:20:39 np0005542249 nova_compute[254900]: 2025-12-02 11:20:39.033 254904 DEBUG nova.virt.libvirt.driver [None req-2fc4ed8f-deed-47db-b330-8d283e9c4d7f 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Attempting to attach volume e2cfc1f7-8a02-48aa-8dc1-ed0119fc09c1 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Dec  2 06:20:39 np0005542249 podman[269542]: 2025-12-02 11:20:39.034345287 +0000 UTC m=+0.116850482 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:20:39 np0005542249 nova_compute[254900]: 2025-12-02 11:20:39.036 254904 DEBUG nova.virt.libvirt.guest [None req-2fc4ed8f-deed-47db-b330-8d283e9c4d7f 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] attach device xml: <disk type="network" device="disk">
Dec  2 06:20:39 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:20:39 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-e2cfc1f7-8a02-48aa-8dc1-ed0119fc09c1">
Dec  2 06:20:39 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:20:39 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:20:39 np0005542249 nova_compute[254900]:  <auth username="openstack">
Dec  2 06:20:39 np0005542249 nova_compute[254900]:    <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:20:39 np0005542249 nova_compute[254900]:  </auth>
Dec  2 06:20:39 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:20:39 np0005542249 nova_compute[254900]:  <serial>e2cfc1f7-8a02-48aa-8dc1-ed0119fc09c1</serial>
Dec  2 06:20:39 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:20:39 np0005542249 nova_compute[254900]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  2 06:20:39 np0005542249 nova_compute[254900]: 2025-12-02 11:20:39.220 254904 DEBUG nova.virt.libvirt.driver [None req-2fc4ed8f-deed-47db-b330-8d283e9c4d7f 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:20:39 np0005542249 nova_compute[254900]: 2025-12-02 11:20:39.220 254904 DEBUG nova.virt.libvirt.driver [None req-2fc4ed8f-deed-47db-b330-8d283e9c4d7f 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:20:39 np0005542249 nova_compute[254900]: 2025-12-02 11:20:39.221 254904 DEBUG nova.virt.libvirt.driver [None req-2fc4ed8f-deed-47db-b330-8d283e9c4d7f 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:20:39 np0005542249 nova_compute[254900]: 2025-12-02 11:20:39.221 254904 DEBUG nova.virt.libvirt.driver [None req-2fc4ed8f-deed-47db-b330-8d283e9c4d7f 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] No VIF found with MAC fa:16:3e:2c:3d:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:20:39 np0005542249 nova_compute[254900]: 2025-12-02 11:20:39.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:20:39 np0005542249 nova_compute[254900]: 2025-12-02 11:20:39.425 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:20:39 np0005542249 nova_compute[254900]: 2025-12-02 11:20:39.426 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:20:39 np0005542249 nova_compute[254900]: 2025-12-02 11:20:39.426 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:20:39 np0005542249 nova_compute[254900]: 2025-12-02 11:20:39.426 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:20:39 np0005542249 nova_compute[254900]: 2025-12-02 11:20:39.426 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:20:39 np0005542249 nova_compute[254900]: 2025-12-02 11:20:39.587 254904 DEBUG oslo_concurrency.lockutils [None req-2fc4ed8f-deed-47db-b330-8d283e9c4d7f 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "7bedde1c-9243-4d63-b574-154d2b7e78ef" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:20:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:20:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2055481223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:20:39 np0005542249 nova_compute[254900]: 2025-12-02 11:20:39.874 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:20:39 np0005542249 nova_compute[254900]: 2025-12-02 11:20:39.927 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1152: 321 pgs: 321 active+clean; 167 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 145 op/s
Dec  2 06:20:40 np0005542249 nova_compute[254900]: 2025-12-02 11:20:40.003 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:20:40 np0005542249 nova_compute[254900]: 2025-12-02 11:20:40.004 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:20:40 np0005542249 nova_compute[254900]: 2025-12-02 11:20:40.005 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:20:40 np0005542249 nova_compute[254900]: 2025-12-02 11:20:40.211 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:20:40 np0005542249 nova_compute[254900]: 2025-12-02 11:20:40.212 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4437MB free_disk=59.94272994995117GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:20:40 np0005542249 nova_compute[254900]: 2025-12-02 11:20:40.212 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:20:40 np0005542249 nova_compute[254900]: 2025-12-02 11:20:40.213 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:20:40 np0005542249 nova_compute[254900]: 2025-12-02 11:20:40.334 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Instance 7bedde1c-9243-4d63-b574-154d2b7e78ef actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 06:20:40 np0005542249 nova_compute[254900]: 2025-12-02 11:20:40.334 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:20:40 np0005542249 nova_compute[254900]: 2025-12-02 11:20:40.335 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:20:40 np0005542249 nova_compute[254900]: 2025-12-02 11:20:40.396 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:20:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:20:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2842566659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:20:40 np0005542249 nova_compute[254900]: 2025-12-02 11:20:40.832 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:20:40 np0005542249 nova_compute[254900]: 2025-12-02 11:20:40.842 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:20:40 np0005542249 nova_compute[254900]: 2025-12-02 11:20:40.879 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:20:40 np0005542249 nova_compute[254900]: 2025-12-02 11:20:40.926 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 06:20:40 np0005542249 nova_compute[254900]: 2025-12-02 11:20:40.927 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:20:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:20:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/891985598' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:20:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:20:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1153: 321 pgs: 321 active+clean; 347 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 19 MiB/s wr, 168 op/s
Dec  2 06:20:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Dec  2 06:20:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Dec  2 06:20:42 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Dec  2 06:20:42 np0005542249 podman[269661]: 2025-12-02 11:20:42.224990591 +0000 UTC m=+0.103576672 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:20:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:20:42 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:20:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:20:42 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:20:43 np0005542249 nova_compute[254900]: 2025-12-02 11:20:43.024 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Dec  2 06:20:43 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:20:43 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:20:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Dec  2 06:20:43 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Dec  2 06:20:43 np0005542249 nova_compute[254900]: 2025-12-02 11:20:43.927 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:20:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1156: 321 pgs: 321 active+clean; 545 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 53 MiB/s wr, 263 op/s
Dec  2 06:20:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:20:44 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:20:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:20:44 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:20:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:20:44 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1097527395' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:20:44 np0005542249 nova_compute[254900]: 2025-12-02 11:20:44.931 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:45 np0005542249 podman[270166]: 2025-12-02 11:20:45.013576613 +0000 UTC m=+0.046513923 container create 38c407d097cddad6564b9bcaf6c1d79c3ee95d03f183dee2edf6bea1991c9389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yonath, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  2 06:20:45 np0005542249 podman[270166]: 2025-12-02 11:20:44.987396955 +0000 UTC m=+0.020334295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:20:45 np0005542249 systemd[1]: Started libpod-conmon-38c407d097cddad6564b9bcaf6c1d79c3ee95d03f183dee2edf6bea1991c9389.scope.
Dec  2 06:20:45 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:20:45 np0005542249 podman[270166]: 2025-12-02 11:20:45.164640753 +0000 UTC m=+0.197578083 container init 38c407d097cddad6564b9bcaf6c1d79c3ee95d03f183dee2edf6bea1991c9389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yonath, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:20:45 np0005542249 podman[270166]: 2025-12-02 11:20:45.176610597 +0000 UTC m=+0.209547907 container start 38c407d097cddad6564b9bcaf6c1d79c3ee95d03f183dee2edf6bea1991c9389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yonath, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  2 06:20:45 np0005542249 magical_yonath[270182]: 167 167
Dec  2 06:20:45 np0005542249 systemd[1]: libpod-38c407d097cddad6564b9bcaf6c1d79c3ee95d03f183dee2edf6bea1991c9389.scope: Deactivated successfully.
Dec  2 06:20:45 np0005542249 podman[270166]: 2025-12-02 11:20:45.225019259 +0000 UTC m=+0.257956589 container attach 38c407d097cddad6564b9bcaf6c1d79c3ee95d03f183dee2edf6bea1991c9389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:20:45 np0005542249 podman[270166]: 2025-12-02 11:20:45.225612904 +0000 UTC m=+0.258550214 container died 38c407d097cddad6564b9bcaf6c1d79c3ee95d03f183dee2edf6bea1991c9389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 06:20:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Dec  2 06:20:45 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:20:45 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:20:45 np0005542249 systemd[1]: var-lib-containers-storage-overlay-6c58465f5ac7e7bdf29b884828aba7090acc2a6ca9d037fec1546123970ee9f1-merged.mount: Deactivated successfully.
Dec  2 06:20:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Dec  2 06:20:45 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Dec  2 06:20:45 np0005542249 podman[270166]: 2025-12-02 11:20:45.358561287 +0000 UTC m=+0.391498587 container remove 38c407d097cddad6564b9bcaf6c1d79c3ee95d03f183dee2edf6bea1991c9389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yonath, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  2 06:20:45 np0005542249 systemd[1]: libpod-conmon-38c407d097cddad6564b9bcaf6c1d79c3ee95d03f183dee2edf6bea1991c9389.scope: Deactivated successfully.
Dec  2 06:20:45 np0005542249 podman[270206]: 2025-12-02 11:20:45.600849674 +0000 UTC m=+0.071654004 container create 8b8400171512dd57fc647c07343cca91375908cb38e7af4c31d1c2c50bb41991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:20:45 np0005542249 podman[270206]: 2025-12-02 11:20:45.561336656 +0000 UTC m=+0.032141056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:20:45 np0005542249 systemd[1]: Started libpod-conmon-8b8400171512dd57fc647c07343cca91375908cb38e7af4c31d1c2c50bb41991.scope.
Dec  2 06:20:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:20:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2595919615' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:20:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:20:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2595919615' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:20:45 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:20:45 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f049954c9fe74c6271a1a9b44dc6965efe887c4e9da2257d594c731c6ff0af6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:20:45 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f049954c9fe74c6271a1a9b44dc6965efe887c4e9da2257d594c731c6ff0af6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:20:45 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f049954c9fe74c6271a1a9b44dc6965efe887c4e9da2257d594c731c6ff0af6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:20:45 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f049954c9fe74c6271a1a9b44dc6965efe887c4e9da2257d594c731c6ff0af6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:20:45 np0005542249 podman[270206]: 2025-12-02 11:20:45.736370585 +0000 UTC m=+0.207174945 container init 8b8400171512dd57fc647c07343cca91375908cb38e7af4c31d1c2c50bb41991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  2 06:20:45 np0005542249 podman[270206]: 2025-12-02 11:20:45.748839732 +0000 UTC m=+0.219644072 container start 8b8400171512dd57fc647c07343cca91375908cb38e7af4c31d1c2c50bb41991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  2 06:20:45 np0005542249 podman[270206]: 2025-12-02 11:20:45.769074795 +0000 UTC m=+0.239879185 container attach 8b8400171512dd57fc647c07343cca91375908cb38e7af4c31d1c2c50bb41991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_euclid, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  2 06:20:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1158: 321 pgs: 321 active+clean; 733 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 180 KiB/s rd, 90 MiB/s wr, 302 op/s
Dec  2 06:20:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Dec  2 06:20:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Dec  2 06:20:46 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Dec  2 06:20:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:20:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1187896651' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:20:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:20:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1187896651' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]: [
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:    {
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:        "available": false,
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:        "ceph_device": false,
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:        "lsm_data": {},
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:        "lvs": [],
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:        "path": "/dev/sr0",
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:        "rejected_reasons": [
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:            "Insufficient space (<5GB)",
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:            "Has a FileSystem"
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:        ],
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:        "sys_api": {
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:            "actuators": null,
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:            "device_nodes": "sr0",
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:            "devname": "sr0",
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:            "human_readable_size": "482.00 KB",
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:            "id_bus": "ata",
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:            "model": "QEMU DVD-ROM",
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:            "nr_requests": "2",
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:            "parent": "/dev/sr0",
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:            "partitions": {},
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:            "path": "/dev/sr0",
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:            "removable": "1",
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:            "rev": "2.5+",
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:            "ro": "0",
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:            "rotational": "1",
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:            "sas_address": "",
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:            "sas_device_handle": "",
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:            "scheduler_mode": "mq-deadline",
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:            "sectors": 0,
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:            "sectorsize": "2048",
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:            "size": 493568.0,
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:            "support_discard": "2048",
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:            "type": "disk",
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:            "vendor": "QEMU"
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:        }
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]:    }
Dec  2 06:20:47 np0005542249 fervent_euclid[270222]: ]
Dec  2 06:20:47 np0005542249 systemd[1]: libpod-8b8400171512dd57fc647c07343cca91375908cb38e7af4c31d1c2c50bb41991.scope: Deactivated successfully.
Dec  2 06:20:47 np0005542249 systemd[1]: libpod-8b8400171512dd57fc647c07343cca91375908cb38e7af4c31d1c2c50bb41991.scope: Consumed 1.995s CPU time.
Dec  2 06:20:47 np0005542249 podman[272131]: 2025-12-02 11:20:47.750280752 +0000 UTC m=+0.033366358 container died 8b8400171512dd57fc647c07343cca91375908cb38e7af4c31d1c2c50bb41991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  2 06:20:47 np0005542249 systemd[1]: var-lib-containers-storage-overlay-2f049954c9fe74c6271a1a9b44dc6965efe887c4e9da2257d594c731c6ff0af6-merged.mount: Deactivated successfully.
Dec  2 06:20:47 np0005542249 podman[272131]: 2025-12-02 11:20:47.811361646 +0000 UTC m=+0.094447212 container remove 8b8400171512dd57fc647c07343cca91375908cb38e7af4c31d1c2c50bb41991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Dec  2 06:20:47 np0005542249 systemd[1]: libpod-conmon-8b8400171512dd57fc647c07343cca91375908cb38e7af4c31d1c2c50bb41991.scope: Deactivated successfully.
Dec  2 06:20:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:20:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:20:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:20:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:20:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:20:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:20:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:20:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:20:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:20:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:20:47 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev e733cb57-cdab-4af4-9d76-8804a30ff381 does not exist
Dec  2 06:20:47 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 4195bf39-5d83-415e-9648-f7fb604acf09 does not exist
Dec  2 06:20:47 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 8f248984-c8b6-4ef5-a218-d977014670f1 does not exist
Dec  2 06:20:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:20:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:20:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:20:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:20:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:20:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:20:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1160: 321 pgs: 321 active+clean; 1.1 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 130 MiB/s wr, 550 op/s
Dec  2 06:20:48 np0005542249 nova_compute[254900]: 2025-12-02 11:20:48.027 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Dec  2 06:20:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Dec  2 06:20:48 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Dec  2 06:20:48 np0005542249 podman[272286]: 2025-12-02 11:20:48.825271587 +0000 UTC m=+0.081130283 container create 59152f0228cd06a200191a3e68aae2451c279d4ee0edb2b31747fd6c4048bdc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_albattani, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  2 06:20:48 np0005542249 podman[272286]: 2025-12-02 11:20:48.793901763 +0000 UTC m=+0.049760519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:20:48 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:20:48 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:20:48 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:20:48 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:20:48 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:20:48 np0005542249 systemd[1]: Started libpod-conmon-59152f0228cd06a200191a3e68aae2451c279d4ee0edb2b31747fd6c4048bdc8.scope.
Dec  2 06:20:48 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:20:48 np0005542249 podman[272286]: 2025-12-02 11:20:48.958674203 +0000 UTC m=+0.214532949 container init 59152f0228cd06a200191a3e68aae2451c279d4ee0edb2b31747fd6c4048bdc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  2 06:20:48 np0005542249 podman[272286]: 2025-12-02 11:20:48.971355256 +0000 UTC m=+0.227213962 container start 59152f0228cd06a200191a3e68aae2451c279d4ee0edb2b31747fd6c4048bdc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_albattani, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  2 06:20:48 np0005542249 podman[272286]: 2025-12-02 11:20:48.976315546 +0000 UTC m=+0.232174302 container attach 59152f0228cd06a200191a3e68aae2451c279d4ee0edb2b31747fd6c4048bdc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_albattani, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:20:48 np0005542249 cool_albattani[272302]: 167 167
Dec  2 06:20:48 np0005542249 systemd[1]: libpod-59152f0228cd06a200191a3e68aae2451c279d4ee0edb2b31747fd6c4048bdc8.scope: Deactivated successfully.
Dec  2 06:20:48 np0005542249 podman[272286]: 2025-12-02 11:20:48.982780316 +0000 UTC m=+0.238639072 container died 59152f0228cd06a200191a3e68aae2451c279d4ee0edb2b31747fd6c4048bdc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_albattani, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:20:49 np0005542249 systemd[1]: var-lib-containers-storage-overlay-5962715f8e12f09d171deb7994fee950431c501fdcdf41e9e47d18152b24067a-merged.mount: Deactivated successfully.
Dec  2 06:20:49 np0005542249 podman[272286]: 2025-12-02 11:20:49.036751013 +0000 UTC m=+0.292609709 container remove 59152f0228cd06a200191a3e68aae2451c279d4ee0edb2b31747fd6c4048bdc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_albattani, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:20:49 np0005542249 systemd[1]: libpod-conmon-59152f0228cd06a200191a3e68aae2451c279d4ee0edb2b31747fd6c4048bdc8.scope: Deactivated successfully.
Dec  2 06:20:49 np0005542249 podman[272325]: 2025-12-02 11:20:49.353560078 +0000 UTC m=+0.071849299 container create a32362e6c3e797947e6cc16f881e742055696e66115f007c6e50bb28a5e94390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_albattani, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  2 06:20:49 np0005542249 systemd[1]: Started libpod-conmon-a32362e6c3e797947e6cc16f881e742055696e66115f007c6e50bb28a5e94390.scope.
Dec  2 06:20:49 np0005542249 podman[272325]: 2025-12-02 11:20:49.324937616 +0000 UTC m=+0.043226917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:20:49 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:49.413 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:23:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:25:d0:a1:75:ed'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:20:49 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:49.415 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 06:20:49 np0005542249 nova_compute[254900]: 2025-12-02 11:20:49.453 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:49 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:20:49 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1375aa6b76951659a3aff33dc9414c38bae3696c00df881ded864813f9ea9797/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:20:49 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1375aa6b76951659a3aff33dc9414c38bae3696c00df881ded864813f9ea9797/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:20:49 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1375aa6b76951659a3aff33dc9414c38bae3696c00df881ded864813f9ea9797/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:20:49 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1375aa6b76951659a3aff33dc9414c38bae3696c00df881ded864813f9ea9797/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:20:49 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1375aa6b76951659a3aff33dc9414c38bae3696c00df881ded864813f9ea9797/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:20:49 np0005542249 podman[272325]: 2025-12-02 11:20:49.500940761 +0000 UTC m=+0.219230072 container init a32362e6c3e797947e6cc16f881e742055696e66115f007c6e50bb28a5e94390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_albattani, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  2 06:20:49 np0005542249 podman[272325]: 2025-12-02 11:20:49.514464766 +0000 UTC m=+0.232754017 container start a32362e6c3e797947e6cc16f881e742055696e66115f007c6e50bb28a5e94390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_albattani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  2 06:20:49 np0005542249 podman[272325]: 2025-12-02 11:20:49.520687519 +0000 UTC m=+0.238976820 container attach a32362e6c3e797947e6cc16f881e742055696e66115f007c6e50bb28a5e94390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_albattani, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:20:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:20:49 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2282270389' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:20:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Dec  2 06:20:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Dec  2 06:20:49 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Dec  2 06:20:49 np0005542249 nova_compute[254900]: 2025-12-02 11:20:49.934 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1163: 321 pgs: 321 active+clean; 923 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 256 KiB/s rd, 110 MiB/s wr, 427 op/s
Dec  2 06:20:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:20:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/69507259' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:20:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:20:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/69507259' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:20:50 np0005542249 jolly_albattani[272342]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:20:50 np0005542249 jolly_albattani[272342]: --> relative data size: 1.0
Dec  2 06:20:50 np0005542249 jolly_albattani[272342]: --> All data devices are unavailable
Dec  2 06:20:50 np0005542249 systemd[1]: libpod-a32362e6c3e797947e6cc16f881e742055696e66115f007c6e50bb28a5e94390.scope: Deactivated successfully.
Dec  2 06:20:50 np0005542249 systemd[1]: libpod-a32362e6c3e797947e6cc16f881e742055696e66115f007c6e50bb28a5e94390.scope: Consumed 1.226s CPU time.
Dec  2 06:20:50 np0005542249 podman[272325]: 2025-12-02 11:20:50.814945926 +0000 UTC m=+1.533235197 container died a32362e6c3e797947e6cc16f881e742055696e66115f007c6e50bb28a5e94390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:20:50 np0005542249 systemd[1]: var-lib-containers-storage-overlay-1375aa6b76951659a3aff33dc9414c38bae3696c00df881ded864813f9ea9797-merged.mount: Deactivated successfully.
Dec  2 06:20:50 np0005542249 podman[272325]: 2025-12-02 11:20:50.892366701 +0000 UTC m=+1.610655902 container remove a32362e6c3e797947e6cc16f881e742055696e66115f007c6e50bb28a5e94390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:20:50 np0005542249 systemd[1]: libpod-conmon-a32362e6c3e797947e6cc16f881e742055696e66115f007c6e50bb28a5e94390.scope: Deactivated successfully.
Dec  2 06:20:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Dec  2 06:20:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Dec  2 06:20:50 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Dec  2 06:20:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:20:51 np0005542249 podman[272526]: 2025-12-02 11:20:51.872229188 +0000 UTC m=+0.063485430 container create 9f526738c1430d91cb10ec6521518b2d7b4e6448347b2363c6493652f52e6602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:20:51 np0005542249 systemd[1]: Started libpod-conmon-9f526738c1430d91cb10ec6521518b2d7b4e6448347b2363c6493652f52e6602.scope.
Dec  2 06:20:51 np0005542249 podman[272526]: 2025-12-02 11:20:51.839492707 +0000 UTC m=+0.030748989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:20:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Dec  2 06:20:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Dec  2 06:20:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1165: 321 pgs: 321 active+clean; 559 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 92 MiB/s wr, 471 op/s
Dec  2 06:20:51 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Dec  2 06:20:51 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:20:51 np0005542249 podman[272526]: 2025-12-02 11:20:51.983554902 +0000 UTC m=+0.174811154 container init 9f526738c1430d91cb10ec6521518b2d7b4e6448347b2363c6493652f52e6602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:20:51 np0005542249 podman[272526]: 2025-12-02 11:20:51.998951067 +0000 UTC m=+0.190207319 container start 9f526738c1430d91cb10ec6521518b2d7b4e6448347b2363c6493652f52e6602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bose, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:20:52 np0005542249 podman[272526]: 2025-12-02 11:20:52.003911358 +0000 UTC m=+0.195167610 container attach 9f526738c1430d91cb10ec6521518b2d7b4e6448347b2363c6493652f52e6602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bose, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  2 06:20:52 np0005542249 youthful_bose[272542]: 167 167
Dec  2 06:20:52 np0005542249 systemd[1]: libpod-9f526738c1430d91cb10ec6521518b2d7b4e6448347b2363c6493652f52e6602.scope: Deactivated successfully.
Dec  2 06:20:52 np0005542249 podman[272526]: 2025-12-02 11:20:52.009776141 +0000 UTC m=+0.201032373 container died 9f526738c1430d91cb10ec6521518b2d7b4e6448347b2363c6493652f52e6602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bose, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:20:52 np0005542249 systemd[1]: var-lib-containers-storage-overlay-f26d3d8b080da197b7e829f072a09896bc4ef988dcdb17ea013b0e1bfce04169-merged.mount: Deactivated successfully.
Dec  2 06:20:52 np0005542249 podman[272526]: 2025-12-02 11:20:52.073437214 +0000 UTC m=+0.264693456 container remove 9f526738c1430d91cb10ec6521518b2d7b4e6448347b2363c6493652f52e6602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  2 06:20:52 np0005542249 systemd[1]: libpod-conmon-9f526738c1430d91cb10ec6521518b2d7b4e6448347b2363c6493652f52e6602.scope: Deactivated successfully.
Dec  2 06:20:52 np0005542249 podman[272566]: 2025-12-02 11:20:52.328745593 +0000 UTC m=+0.064500717 container create ebb289610c6bbba3f218bc7c30c5263f610d21fa2761ccbc812e73686ea7671f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:20:52 np0005542249 systemd[1]: Started libpod-conmon-ebb289610c6bbba3f218bc7c30c5263f610d21fa2761ccbc812e73686ea7671f.scope.
Dec  2 06:20:52 np0005542249 podman[272566]: 2025-12-02 11:20:52.309552658 +0000 UTC m=+0.045307802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:20:52 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:20:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29d1513f095a38cbb8ecdca5968588be4f81970e21e70a04ec78fa852de237b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:20:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29d1513f095a38cbb8ecdca5968588be4f81970e21e70a04ec78fa852de237b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:20:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29d1513f095a38cbb8ecdca5968588be4f81970e21e70a04ec78fa852de237b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:20:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29d1513f095a38cbb8ecdca5968588be4f81970e21e70a04ec78fa852de237b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:20:52 np0005542249 podman[272566]: 2025-12-02 11:20:52.438040574 +0000 UTC m=+0.173795708 container init ebb289610c6bbba3f218bc7c30c5263f610d21fa2761ccbc812e73686ea7671f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Dec  2 06:20:52 np0005542249 podman[272566]: 2025-12-02 11:20:52.449602398 +0000 UTC m=+0.185357552 container start ebb289610c6bbba3f218bc7c30c5263f610d21fa2761ccbc812e73686ea7671f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sammet, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  2 06:20:52 np0005542249 podman[272566]: 2025-12-02 11:20:52.453561522 +0000 UTC m=+0.189316666 container attach ebb289610c6bbba3f218bc7c30c5263f610d21fa2761ccbc812e73686ea7671f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sammet, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:20:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Dec  2 06:20:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Dec  2 06:20:52 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Dec  2 06:20:53 np0005542249 nova_compute[254900]: 2025-12-02 11:20:53.029 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]: {
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:    "0": [
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:        {
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "devices": [
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "/dev/loop3"
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            ],
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "lv_name": "ceph_lv0",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "lv_size": "21470642176",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "name": "ceph_lv0",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "tags": {
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.cluster_name": "ceph",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.crush_device_class": "",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.encrypted": "0",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.osd_id": "0",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.type": "block",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.vdo": "0"
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            },
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "type": "block",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "vg_name": "ceph_vg0"
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:        }
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:    ],
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:    "1": [
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:        {
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "devices": [
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "/dev/loop4"
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            ],
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "lv_name": "ceph_lv1",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "lv_size": "21470642176",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "name": "ceph_lv1",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "tags": {
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.cluster_name": "ceph",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.crush_device_class": "",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.encrypted": "0",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.osd_id": "1",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.type": "block",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.vdo": "0"
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            },
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "type": "block",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "vg_name": "ceph_vg1"
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:        }
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:    ],
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:    "2": [
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:        {
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "devices": [
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "/dev/loop5"
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            ],
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "lv_name": "ceph_lv2",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "lv_size": "21470642176",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "name": "ceph_lv2",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "tags": {
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.cluster_name": "ceph",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.crush_device_class": "",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.encrypted": "0",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.osd_id": "2",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.type": "block",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:                "ceph.vdo": "0"
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            },
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "type": "block",
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:            "vg_name": "ceph_vg2"
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:        }
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]:    ]
Dec  2 06:20:53 np0005542249 eloquent_sammet[272582]: }
Dec  2 06:20:53 np0005542249 systemd[1]: libpod-ebb289610c6bbba3f218bc7c30c5263f610d21fa2761ccbc812e73686ea7671f.scope: Deactivated successfully.
Dec  2 06:20:53 np0005542249 podman[272566]: 2025-12-02 11:20:53.423928659 +0000 UTC m=+1.159683813 container died ebb289610c6bbba3f218bc7c30c5263f610d21fa2761ccbc812e73686ea7671f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sammet, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:20:53 np0005542249 systemd[1]: var-lib-containers-storage-overlay-29d1513f095a38cbb8ecdca5968588be4f81970e21e70a04ec78fa852de237b2-merged.mount: Deactivated successfully.
Dec  2 06:20:53 np0005542249 podman[272566]: 2025-12-02 11:20:53.511166191 +0000 UTC m=+1.246921315 container remove ebb289610c6bbba3f218bc7c30c5263f610d21fa2761ccbc812e73686ea7671f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sammet, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:20:53 np0005542249 systemd[1]: libpod-conmon-ebb289610c6bbba3f218bc7c30c5263f610d21fa2761ccbc812e73686ea7671f.scope: Deactivated successfully.
Dec  2 06:20:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1168: 321 pgs: 321 active+clean; 167 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 7.5 KiB/s wr, 196 op/s
Dec  2 06:20:54 np0005542249 podman[272745]: 2025-12-02 11:20:54.474347809 +0000 UTC m=+0.068107481 container create b891571fa04d5a5a073b3d13ce45f57595b77a1a7413557d2b0f04d9e0fcc204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  2 06:20:54 np0005542249 systemd[1]: Started libpod-conmon-b891571fa04d5a5a073b3d13ce45f57595b77a1a7413557d2b0f04d9e0fcc204.scope.
Dec  2 06:20:54 np0005542249 podman[272745]: 2025-12-02 11:20:54.446857466 +0000 UTC m=+0.040617188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:20:54 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:20:54 np0005542249 podman[272745]: 2025-12-02 11:20:54.59391033 +0000 UTC m=+0.187670002 container init b891571fa04d5a5a073b3d13ce45f57595b77a1a7413557d2b0f04d9e0fcc204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bhaskara, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:20:54 np0005542249 podman[272745]: 2025-12-02 11:20:54.606658735 +0000 UTC m=+0.200418407 container start b891571fa04d5a5a073b3d13ce45f57595b77a1a7413557d2b0f04d9e0fcc204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:20:54 np0005542249 podman[272745]: 2025-12-02 11:20:54.612046237 +0000 UTC m=+0.205805969 container attach b891571fa04d5a5a073b3d13ce45f57595b77a1a7413557d2b0f04d9e0fcc204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bhaskara, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 06:20:54 np0005542249 nova_compute[254900]: 2025-12-02 11:20:54.610 254904 DEBUG oslo_concurrency.lockutils [None req-d1383c4b-8e23-4097-8f19-b52c3ec5ac16 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Acquiring lock "7bedde1c-9243-4d63-b574-154d2b7e78ef" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:20:54 np0005542249 nova_compute[254900]: 2025-12-02 11:20:54.612 254904 DEBUG oslo_concurrency.lockutils [None req-d1383c4b-8e23-4097-8f19-b52c3ec5ac16 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "7bedde1c-9243-4d63-b574-154d2b7e78ef" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:20:54 np0005542249 busy_bhaskara[272762]: 167 167
Dec  2 06:20:54 np0005542249 systemd[1]: libpod-b891571fa04d5a5a073b3d13ce45f57595b77a1a7413557d2b0f04d9e0fcc204.scope: Deactivated successfully.
Dec  2 06:20:54 np0005542249 podman[272745]: 2025-12-02 11:20:54.616244427 +0000 UTC m=+0.210004099 container died b891571fa04d5a5a073b3d13ce45f57595b77a1a7413557d2b0f04d9e0fcc204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bhaskara, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:20:54 np0005542249 nova_compute[254900]: 2025-12-02 11:20:54.643 254904 INFO nova.compute.manager [None req-d1383c4b-8e23-4097-8f19-b52c3ec5ac16 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Detaching volume e2cfc1f7-8a02-48aa-8dc1-ed0119fc09c1#033[00m
Dec  2 06:20:54 np0005542249 systemd[1]: var-lib-containers-storage-overlay-93e9cd34c717842b9c794229a6fd15fc2db39e585df48f7e5ad4883ccaac94ff-merged.mount: Deactivated successfully.
Dec  2 06:20:54 np0005542249 podman[272745]: 2025-12-02 11:20:54.669631609 +0000 UTC m=+0.263391292 container remove b891571fa04d5a5a073b3d13ce45f57595b77a1a7413557d2b0f04d9e0fcc204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bhaskara, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:20:54 np0005542249 systemd[1]: libpod-conmon-b891571fa04d5a5a073b3d13ce45f57595b77a1a7413557d2b0f04d9e0fcc204.scope: Deactivated successfully.
Dec  2 06:20:54 np0005542249 nova_compute[254900]: 2025-12-02 11:20:54.822 254904 INFO nova.virt.block_device [None req-d1383c4b-8e23-4097-8f19-b52c3ec5ac16 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Attempting to driver detach volume e2cfc1f7-8a02-48aa-8dc1-ed0119fc09c1 from mountpoint /dev/vdb#033[00m
Dec  2 06:20:54 np0005542249 nova_compute[254900]: 2025-12-02 11:20:54.839 254904 DEBUG nova.virt.libvirt.driver [None req-d1383c4b-8e23-4097-8f19-b52c3ec5ac16 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Attempting to detach device vdb from instance 7bedde1c-9243-4d63-b574-154d2b7e78ef from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  2 06:20:54 np0005542249 nova_compute[254900]: 2025-12-02 11:20:54.841 254904 DEBUG nova.virt.libvirt.guest [None req-d1383c4b-8e23-4097-8f19-b52c3ec5ac16 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] detach device xml: <disk type="network" device="disk">
Dec  2 06:20:54 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:20:54 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-e2cfc1f7-8a02-48aa-8dc1-ed0119fc09c1">
Dec  2 06:20:54 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:20:54 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:20:54 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:20:54 np0005542249 nova_compute[254900]:  <serial>e2cfc1f7-8a02-48aa-8dc1-ed0119fc09c1</serial>
Dec  2 06:20:54 np0005542249 nova_compute[254900]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  2 06:20:54 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:20:54 np0005542249 nova_compute[254900]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  2 06:20:54 np0005542249 nova_compute[254900]: 2025-12-02 11:20:54.852 254904 INFO nova.virt.libvirt.driver [None req-d1383c4b-8e23-4097-8f19-b52c3ec5ac16 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Successfully detached device vdb from instance 7bedde1c-9243-4d63-b574-154d2b7e78ef from the persistent domain config.#033[00m
Dec  2 06:20:54 np0005542249 nova_compute[254900]: 2025-12-02 11:20:54.853 254904 DEBUG nova.virt.libvirt.driver [None req-d1383c4b-8e23-4097-8f19-b52c3ec5ac16 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 7bedde1c-9243-4d63-b574-154d2b7e78ef from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec  2 06:20:54 np0005542249 nova_compute[254900]: 2025-12-02 11:20:54.855 254904 DEBUG nova.virt.libvirt.guest [None req-d1383c4b-8e23-4097-8f19-b52c3ec5ac16 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] detach device xml: <disk type="network" device="disk">
Dec  2 06:20:54 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:20:54 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-e2cfc1f7-8a02-48aa-8dc1-ed0119fc09c1">
Dec  2 06:20:54 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:20:54 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:20:54 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:20:54 np0005542249 nova_compute[254900]:  <serial>e2cfc1f7-8a02-48aa-8dc1-ed0119fc09c1</serial>
Dec  2 06:20:54 np0005542249 nova_compute[254900]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  2 06:20:54 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:20:54 np0005542249 nova_compute[254900]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  2 06:20:54 np0005542249 podman[272785]: 2025-12-02 11:20:54.936912072 +0000 UTC m=+0.070473732 container create eac934125faf8cfb1dc00f0dc711781d99788dff35b30b49e1e74b41f7bf6333 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kapitsa, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  2 06:20:54 np0005542249 nova_compute[254900]: 2025-12-02 11:20:54.939 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:54 np0005542249 nova_compute[254900]: 2025-12-02 11:20:54.979 254904 DEBUG nova.virt.libvirt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Received event <DeviceRemovedEvent: 1764674454.9790998, 7bedde1c-9243-4d63-b574-154d2b7e78ef => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec  2 06:20:54 np0005542249 nova_compute[254900]: 2025-12-02 11:20:54.982 254904 DEBUG nova.virt.libvirt.driver [None req-d1383c4b-8e23-4097-8f19-b52c3ec5ac16 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 7bedde1c-9243-4d63-b574-154d2b7e78ef _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec  2 06:20:54 np0005542249 systemd[1]: Started libpod-conmon-eac934125faf8cfb1dc00f0dc711781d99788dff35b30b49e1e74b41f7bf6333.scope.
Dec  2 06:20:54 np0005542249 nova_compute[254900]: 2025-12-02 11:20:54.985 254904 INFO nova.virt.libvirt.driver [None req-d1383c4b-8e23-4097-8f19-b52c3ec5ac16 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Successfully detached device vdb from instance 7bedde1c-9243-4d63-b574-154d2b7e78ef from the live domain config.#033[00m
Dec  2 06:20:55 np0005542249 podman[272785]: 2025-12-02 11:20:54.911055243 +0000 UTC m=+0.044616893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:20:55 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:20:55 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84b25a307b62e818ab7de3728c46d33063678f9f7987547ac1551d07c5129896/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:20:55 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84b25a307b62e818ab7de3728c46d33063678f9f7987547ac1551d07c5129896/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:20:55 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84b25a307b62e818ab7de3728c46d33063678f9f7987547ac1551d07c5129896/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:20:55 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84b25a307b62e818ab7de3728c46d33063678f9f7987547ac1551d07c5129896/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:20:55 np0005542249 podman[272785]: 2025-12-02 11:20:55.048468964 +0000 UTC m=+0.182030624 container init eac934125faf8cfb1dc00f0dc711781d99788dff35b30b49e1e74b41f7bf6333 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:20:55 np0005542249 podman[272785]: 2025-12-02 11:20:55.07873837 +0000 UTC m=+0.212300020 container start eac934125faf8cfb1dc00f0dc711781d99788dff35b30b49e1e74b41f7bf6333 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:20:55 np0005542249 podman[272785]: 2025-12-02 11:20:55.085773085 +0000 UTC m=+0.219334705 container attach eac934125faf8cfb1dc00f0dc711781d99788dff35b30b49e1e74b41f7bf6333 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  2 06:20:55 np0005542249 nova_compute[254900]: 2025-12-02 11:20:55.221 254904 DEBUG nova.objects.instance [None req-d1383c4b-8e23-4097-8f19-b52c3ec5ac16 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lazy-loading 'flavor' on Instance uuid 7bedde1c-9243-4d63-b574-154d2b7e78ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:20:55 np0005542249 nova_compute[254900]: 2025-12-02 11:20:55.279 254904 DEBUG oslo_concurrency.lockutils [None req-d1383c4b-8e23-4097-8f19-b52c3ec5ac16 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "7bedde1c-9243-4d63-b574-154d2b7e78ef" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:20:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1169: 321 pgs: 321 active+clean; 167 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 6.7 KiB/s wr, 186 op/s
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.029 254904 DEBUG oslo_concurrency.lockutils [None req-1428d991-bd49-416a-93ce-9348ba39592c 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Acquiring lock "7bedde1c-9243-4d63-b574-154d2b7e78ef" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.030 254904 DEBUG oslo_concurrency.lockutils [None req-1428d991-bd49-416a-93ce-9348ba39592c 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "7bedde1c-9243-4d63-b574-154d2b7e78ef" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.031 254904 DEBUG oslo_concurrency.lockutils [None req-1428d991-bd49-416a-93ce-9348ba39592c 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Acquiring lock "7bedde1c-9243-4d63-b574-154d2b7e78ef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.032 254904 DEBUG oslo_concurrency.lockutils [None req-1428d991-bd49-416a-93ce-9348ba39592c 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "7bedde1c-9243-4d63-b574-154d2b7e78ef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.032 254904 DEBUG oslo_concurrency.lockutils [None req-1428d991-bd49-416a-93ce-9348ba39592c 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "7bedde1c-9243-4d63-b574-154d2b7e78ef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.034 254904 INFO nova.compute.manager [None req-1428d991-bd49-416a-93ce-9348ba39592c 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Terminating instance#033[00m
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.036 254904 DEBUG nova.compute.manager [None req-1428d991-bd49-416a-93ce-9348ba39592c 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:20:56 np0005542249 kernel: tap30bc1a71-f7 (unregistering): left promiscuous mode
Dec  2 06:20:56 np0005542249 NetworkManager[48987]: <info>  [1764674456.1015] device (tap30bc1a71-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.121 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:56 np0005542249 ovn_controller[153849]: 2025-12-02T11:20:56Z|00083|binding|INFO|Releasing lport 30bc1a71-f71d-41f4-a599-eaa26706d00c from this chassis (sb_readonly=0)
Dec  2 06:20:56 np0005542249 ovn_controller[153849]: 2025-12-02T11:20:56Z|00084|binding|INFO|Setting lport 30bc1a71-f71d-41f4-a599-eaa26706d00c down in Southbound
Dec  2 06:20:56 np0005542249 ovn_controller[153849]: 2025-12-02T11:20:56Z|00085|binding|INFO|Removing iface tap30bc1a71-f7 ovn-installed in OVS
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.126 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:56 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:56.135 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:3d:45 10.100.0.9'], port_security=['fa:16:3e:2c:3d:45 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7bedde1c-9243-4d63-b574-154d2b7e78ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1468b032-015a-4fb8-a7c5-2a3b8aab9149', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4beaae6889da4e57bb304963bae13143', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8e7537ea-9e59-4980-91e7-bcb71205eead', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.214'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b54a1343-251a-464a-be0b-78e322a858d0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=30bc1a71-f71d-41f4-a599-eaa26706d00c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:20:56 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:56.137 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 30bc1a71-f71d-41f4-a599-eaa26706d00c in datapath 1468b032-015a-4fb8-a7c5-2a3b8aab9149 unbound from our chassis#033[00m
Dec  2 06:20:56 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:56.139 163757 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1468b032-015a-4fb8-a7c5-2a3b8aab9149, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 06:20:56 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:56.141 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[ed78f4c5-9d03-4981-a6ba-639101cd0db6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:56 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:56.142 163757 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149 namespace which is not needed anymore#033[00m
Dec  2 06:20:56 np0005542249 gallant_kapitsa[272803]: {
Dec  2 06:20:56 np0005542249 gallant_kapitsa[272803]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:20:56 np0005542249 gallant_kapitsa[272803]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:20:56 np0005542249 gallant_kapitsa[272803]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:20:56 np0005542249 gallant_kapitsa[272803]:        "osd_id": 0,
Dec  2 06:20:56 np0005542249 gallant_kapitsa[272803]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:20:56 np0005542249 gallant_kapitsa[272803]:        "type": "bluestore"
Dec  2 06:20:56 np0005542249 gallant_kapitsa[272803]:    },
Dec  2 06:20:56 np0005542249 gallant_kapitsa[272803]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:20:56 np0005542249 gallant_kapitsa[272803]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:20:56 np0005542249 gallant_kapitsa[272803]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:20:56 np0005542249 gallant_kapitsa[272803]:        "osd_id": 2,
Dec  2 06:20:56 np0005542249 gallant_kapitsa[272803]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:20:56 np0005542249 gallant_kapitsa[272803]:        "type": "bluestore"
Dec  2 06:20:56 np0005542249 gallant_kapitsa[272803]:    },
Dec  2 06:20:56 np0005542249 gallant_kapitsa[272803]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:20:56 np0005542249 gallant_kapitsa[272803]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:20:56 np0005542249 gallant_kapitsa[272803]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:20:56 np0005542249 gallant_kapitsa[272803]:        "osd_id": 1,
Dec  2 06:20:56 np0005542249 gallant_kapitsa[272803]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:20:56 np0005542249 gallant_kapitsa[272803]:        "type": "bluestore"
Dec  2 06:20:56 np0005542249 gallant_kapitsa[272803]:    }
Dec  2 06:20:56 np0005542249 gallant_kapitsa[272803]: }
Dec  2 06:20:56 np0005542249 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Dec  2 06:20:56 np0005542249 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 15.278s CPU time.
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.222 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:56 np0005542249 systemd[1]: libpod-eac934125faf8cfb1dc00f0dc711781d99788dff35b30b49e1e74b41f7bf6333.scope: Deactivated successfully.
Dec  2 06:20:56 np0005542249 systemd[1]: libpod-eac934125faf8cfb1dc00f0dc711781d99788dff35b30b49e1e74b41f7bf6333.scope: Consumed 1.146s CPU time.
Dec  2 06:20:56 np0005542249 podman[272785]: 2025-12-02 11:20:56.22697906 +0000 UTC m=+1.360540690 container died eac934125faf8cfb1dc00f0dc711781d99788dff35b30b49e1e74b41f7bf6333 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:20:56 np0005542249 systemd-machined[216222]: Machine qemu-7-instance-00000007 terminated.
Dec  2 06:20:56 np0005542249 systemd[1]: var-lib-containers-storage-overlay-84b25a307b62e818ab7de3728c46d33063678f9f7987547ac1551d07c5129896-merged.mount: Deactivated successfully.
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.283 254904 INFO nova.virt.libvirt.driver [-] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Instance destroyed successfully.#033[00m
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.284 254904 DEBUG nova.objects.instance [None req-1428d991-bd49-416a-93ce-9348ba39592c 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lazy-loading 'resources' on Instance uuid 7bedde1c-9243-4d63-b574-154d2b7e78ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:20:56 np0005542249 podman[272785]: 2025-12-02 11:20:56.293497437 +0000 UTC m=+1.427059057 container remove eac934125faf8cfb1dc00f0dc711781d99788dff35b30b49e1e74b41f7bf6333 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:20:56 np0005542249 systemd[1]: libpod-conmon-eac934125faf8cfb1dc00f0dc711781d99788dff35b30b49e1e74b41f7bf6333.scope: Deactivated successfully.
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.305 254904 DEBUG nova.virt.libvirt.vif [None req-1428d991-bd49-416a-93ce-9348ba39592c 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:20:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-166470318',display_name='tempest-VolumesBackupsTest-instance-166470318',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-166470318',id=7,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJziU88L3iquFvjPgSOTyxd8RyTwABv58QhKI/jQBGJ54tLDGdc0mEfjzrnF83TzJsQhvdxGNgZurLJMr9epI63g8qPyQiU643zIfxVH4sr++AbUveV1NoDmmqkFt+Gw/Q==',key_name='tempest-keypair-1500356688',keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:20:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4beaae6889da4e57bb304963bae13143',ramdisk_id='',reservation_id='r-co0ky5dt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-458528599',owner_user_name='tempest-VolumesBackupsTest-458528599-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:20:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='382055aacd254f8bb9b170628992619d',uuid=7bedde1c-9243-4d63-b574-154d2b7e78ef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "30bc1a71-f71d-41f4-a599-eaa26706d00c", "address": "fa:16:3e:2c:3d:45", "network": {"id": "1468b032-015a-4fb8-a7c5-2a3b8aab9149", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-555435799-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4beaae6889da4e57bb304963bae13143", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30bc1a71-f7", "ovs_interfaceid": "30bc1a71-f71d-41f4-a599-eaa26706d00c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.306 254904 DEBUG nova.network.os_vif_util [None req-1428d991-bd49-416a-93ce-9348ba39592c 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Converting VIF {"id": "30bc1a71-f71d-41f4-a599-eaa26706d00c", "address": "fa:16:3e:2c:3d:45", "network": {"id": "1468b032-015a-4fb8-a7c5-2a3b8aab9149", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-555435799-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4beaae6889da4e57bb304963bae13143", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30bc1a71-f7", "ovs_interfaceid": "30bc1a71-f71d-41f4-a599-eaa26706d00c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.308 254904 DEBUG nova.network.os_vif_util [None req-1428d991-bd49-416a-93ce-9348ba39592c 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2c:3d:45,bridge_name='br-int',has_traffic_filtering=True,id=30bc1a71-f71d-41f4-a599-eaa26706d00c,network=Network(1468b032-015a-4fb8-a7c5-2a3b8aab9149),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30bc1a71-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.309 254904 DEBUG os_vif [None req-1428d991-bd49-416a-93ce-9348ba39592c 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:3d:45,bridge_name='br-int',has_traffic_filtering=True,id=30bc1a71-f71d-41f4-a599-eaa26706d00c,network=Network(1468b032-015a-4fb8-a7c5-2a3b8aab9149),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30bc1a71-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.313 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.315 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30bc1a71-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.319 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.322 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.325 254904 INFO os_vif [None req-1428d991-bd49-416a-93ce-9348ba39592c 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:3d:45,bridge_name='br-int',has_traffic_filtering=True,id=30bc1a71-f71d-41f4-a599-eaa26706d00c,network=Network(1468b032-015a-4fb8-a7c5-2a3b8aab9149),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30bc1a71-f7')#033[00m
Dec  2 06:20:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:20:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:20:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:20:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:20:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.357 254904 DEBUG nova.compute.manager [req-4a5467da-4d15-4dbe-99df-8d0b1ad34df5 req-7e5f19d7-9bb2-4f6b-85f0-4e583b93cc88 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Received event network-vif-unplugged-30bc1a71-f71d-41f4-a599-eaa26706d00c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.358 254904 DEBUG oslo_concurrency.lockutils [req-4a5467da-4d15-4dbe-99df-8d0b1ad34df5 req-7e5f19d7-9bb2-4f6b-85f0-4e583b93cc88 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "7bedde1c-9243-4d63-b574-154d2b7e78ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.358 254904 DEBUG oslo_concurrency.lockutils [req-4a5467da-4d15-4dbe-99df-8d0b1ad34df5 req-7e5f19d7-9bb2-4f6b-85f0-4e583b93cc88 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "7bedde1c-9243-4d63-b574-154d2b7e78ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:20:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.359 254904 DEBUG oslo_concurrency.lockutils [req-4a5467da-4d15-4dbe-99df-8d0b1ad34df5 req-7e5f19d7-9bb2-4f6b-85f0-4e583b93cc88 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "7bedde1c-9243-4d63-b574-154d2b7e78ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.359 254904 DEBUG nova.compute.manager [req-4a5467da-4d15-4dbe-99df-8d0b1ad34df5 req-7e5f19d7-9bb2-4f6b-85f0-4e583b93cc88 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] No waiting events found dispatching network-vif-unplugged-30bc1a71-f71d-41f4-a599-eaa26706d00c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:20:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.359 254904 DEBUG nova.compute.manager [req-4a5467da-4d15-4dbe-99df-8d0b1ad34df5 req-7e5f19d7-9bb2-4f6b-85f0-4e583b93cc88 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Received event network-vif-unplugged-30bc1a71-f71d-41f4-a599-eaa26706d00c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 06:20:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Dec  2 06:20:56 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 486239aa-12fd-4aed-b5ba-2ee8b6f9113c does not exist
Dec  2 06:20:56 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 516b1648-58b4-45e9-80ad-3416b4bf8c04 does not exist
Dec  2 06:20:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Dec  2 06:20:56 np0005542249 neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149[269495]: [NOTICE]   (269499) : haproxy version is 2.8.14-c23fe91
Dec  2 06:20:56 np0005542249 neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149[269495]: [NOTICE]   (269499) : path to executable is /usr/sbin/haproxy
Dec  2 06:20:56 np0005542249 neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149[269495]: [WARNING]  (269499) : Exiting Master process...
Dec  2 06:20:56 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Dec  2 06:20:56 np0005542249 neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149[269495]: [ALERT]    (269499) : Current worker (269501) exited with code 143 (Terminated)
Dec  2 06:20:56 np0005542249 neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149[269495]: [WARNING]  (269499) : All workers exited. Exiting... (0)
Dec  2 06:20:56 np0005542249 systemd[1]: libpod-36038f884934d930b8924d476ebad5734f327dae41d87e13d284765030219e2a.scope: Deactivated successfully.
Dec  2 06:20:56 np0005542249 podman[272881]: 2025-12-02 11:20:56.380151254 +0000 UTC m=+0.070766879 container died 36038f884934d930b8924d476ebad5734f327dae41d87e13d284765030219e2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:20:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:20:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:20:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:20:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:20:56 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-36038f884934d930b8924d476ebad5734f327dae41d87e13d284765030219e2a-userdata-shm.mount: Deactivated successfully.
Dec  2 06:20:56 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:56.417 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:20:56 np0005542249 systemd[1]: var-lib-containers-storage-overlay-1e0802a1f363f85f2753b7c8339ab6985771196885cbd18802ef3eb74ae8a95b-merged.mount: Deactivated successfully.
Dec  2 06:20:56 np0005542249 podman[272881]: 2025-12-02 11:20:56.433510016 +0000 UTC m=+0.124125641 container cleanup 36038f884934d930b8924d476ebad5734f327dae41d87e13d284765030219e2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 06:20:56 np0005542249 systemd[1]: libpod-conmon-36038f884934d930b8924d476ebad5734f327dae41d87e13d284765030219e2a.scope: Deactivated successfully.
Dec  2 06:20:56 np0005542249 podman[272950]: 2025-12-02 11:20:56.522511755 +0000 UTC m=+0.055776836 container remove 36038f884934d930b8924d476ebad5734f327dae41d87e13d284765030219e2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  2 06:20:56 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:56.532 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[b3e1dcd8-5a5a-4494-a651-3fd04b6a2126]: (4, ('Tue Dec  2 11:20:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149 (36038f884934d930b8924d476ebad5734f327dae41d87e13d284765030219e2a)\n36038f884934d930b8924d476ebad5734f327dae41d87e13d284765030219e2a\nTue Dec  2 11:20:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149 (36038f884934d930b8924d476ebad5734f327dae41d87e13d284765030219e2a)\n36038f884934d930b8924d476ebad5734f327dae41d87e13d284765030219e2a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:56 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:56.535 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[db6b5044-8b79-4be7-af04-8fda5ec57e64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:56 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:56.536 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1468b032-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.538 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:56 np0005542249 kernel: tap1468b032-00: left promiscuous mode
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.566 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:56 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:56.569 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[61145e96-55ba-4d57-b984-ca12e0018b47]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:56 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:56.585 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[2750f761-892f-43ee-b10a-e62922930b71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:56 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:56.587 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[d83bccd1-bf08-4ec8-9459-5a154b42c06d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:56 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:56.609 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[deecd91c-4e54-4981-8d26-e7f39c30ee11]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460902, 'reachable_time': 44090, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272993, 'error': None, 'target': 'ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:56 np0005542249 systemd[1]: run-netns-ovnmeta\x2d1468b032\x2d015a\x2d4fb8\x2da7c5\x2d2a3b8aab9149.mount: Deactivated successfully.
Dec  2 06:20:56 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:56.615 164036 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1468b032-015a-4fb8-a7c5-2a3b8aab9149 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 06:20:56 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:20:56.615 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[6c193b13-5333-4ced-808a-f617ec336697]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.828 254904 INFO nova.virt.libvirt.driver [None req-1428d991-bd49-416a-93ce-9348ba39592c 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Deleting instance files /var/lib/nova/instances/7bedde1c-9243-4d63-b574-154d2b7e78ef_del#033[00m
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.829 254904 INFO nova.virt.libvirt.driver [None req-1428d991-bd49-416a-93ce-9348ba39592c 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Deletion of /var/lib/nova/instances/7bedde1c-9243-4d63-b574-154d2b7e78ef_del complete#033[00m
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.882 254904 INFO nova.compute.manager [None req-1428d991-bd49-416a-93ce-9348ba39592c 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.884 254904 DEBUG oslo.service.loopingcall [None req-1428d991-bd49-416a-93ce-9348ba39592c 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.885 254904 DEBUG nova.compute.manager [-] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:20:56 np0005542249 nova_compute[254900]: 2025-12-02 11:20:56.885 254904 DEBUG nova.network.neutron [-] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:20:57 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:20:57 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:20:57 np0005542249 nova_compute[254900]: 2025-12-02 11:20:57.943 254904 DEBUG nova.network.neutron [-] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:20:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1171: 321 pgs: 321 active+clean; 113 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 8.5 KiB/s wr, 162 op/s
Dec  2 06:20:57 np0005542249 nova_compute[254900]: 2025-12-02 11:20:57.973 254904 INFO nova.compute.manager [-] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Took 1.09 seconds to deallocate network for instance.#033[00m
Dec  2 06:20:58 np0005542249 nova_compute[254900]: 2025-12-02 11:20:58.030 254904 DEBUG oslo_concurrency.lockutils [None req-1428d991-bd49-416a-93ce-9348ba39592c 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:20:58 np0005542249 nova_compute[254900]: 2025-12-02 11:20:58.031 254904 DEBUG oslo_concurrency.lockutils [None req-1428d991-bd49-416a-93ce-9348ba39592c 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:20:58 np0005542249 nova_compute[254900]: 2025-12-02 11:20:58.033 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:58 np0005542249 nova_compute[254900]: 2025-12-02 11:20:58.089 254904 DEBUG nova.compute.manager [req-d456d6a5-15ff-4f48-af38-94424f90a655 req-006d0d62-a3c4-4bf1-a7bb-7cf3d8135f48 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Received event network-vif-deleted-30bc1a71-f71d-41f4-a599-eaa26706d00c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:20:58 np0005542249 nova_compute[254900]: 2025-12-02 11:20:58.103 254904 DEBUG oslo_concurrency.processutils [None req-1428d991-bd49-416a-93ce-9348ba39592c 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:20:58 np0005542249 nova_compute[254900]: 2025-12-02 11:20:58.415 254904 DEBUG nova.compute.manager [req-35576ed3-3e87-4bfa-bf93-044ee0a825ac req-d3f7acd1-d622-46a0-a58c-36da10a0d722 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Received event network-vif-plugged-30bc1a71-f71d-41f4-a599-eaa26706d00c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:20:58 np0005542249 nova_compute[254900]: 2025-12-02 11:20:58.416 254904 DEBUG oslo_concurrency.lockutils [req-35576ed3-3e87-4bfa-bf93-044ee0a825ac req-d3f7acd1-d622-46a0-a58c-36da10a0d722 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "7bedde1c-9243-4d63-b574-154d2b7e78ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:20:58 np0005542249 nova_compute[254900]: 2025-12-02 11:20:58.417 254904 DEBUG oslo_concurrency.lockutils [req-35576ed3-3e87-4bfa-bf93-044ee0a825ac req-d3f7acd1-d622-46a0-a58c-36da10a0d722 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "7bedde1c-9243-4d63-b574-154d2b7e78ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:20:58 np0005542249 nova_compute[254900]: 2025-12-02 11:20:58.417 254904 DEBUG oslo_concurrency.lockutils [req-35576ed3-3e87-4bfa-bf93-044ee0a825ac req-d3f7acd1-d622-46a0-a58c-36da10a0d722 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "7bedde1c-9243-4d63-b574-154d2b7e78ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:20:58 np0005542249 nova_compute[254900]: 2025-12-02 11:20:58.418 254904 DEBUG nova.compute.manager [req-35576ed3-3e87-4bfa-bf93-044ee0a825ac req-d3f7acd1-d622-46a0-a58c-36da10a0d722 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] No waiting events found dispatching network-vif-plugged-30bc1a71-f71d-41f4-a599-eaa26706d00c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:20:58 np0005542249 nova_compute[254900]: 2025-12-02 11:20:58.418 254904 WARNING nova.compute.manager [req-35576ed3-3e87-4bfa-bf93-044ee0a825ac req-d3f7acd1-d622-46a0-a58c-36da10a0d722 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Received unexpected event network-vif-plugged-30bc1a71-f71d-41f4-a599-eaa26706d00c for instance with vm_state deleted and task_state None.#033[00m
Dec  2 06:20:58 np0005542249 nova_compute[254900]: 2025-12-02 11:20:58.509 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:20:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:20:58 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1370005954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:20:58 np0005542249 nova_compute[254900]: 2025-12-02 11:20:58.584 254904 DEBUG oslo_concurrency.processutils [None req-1428d991-bd49-416a-93ce-9348ba39592c 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:20:58 np0005542249 nova_compute[254900]: 2025-12-02 11:20:58.594 254904 DEBUG nova.compute.provider_tree [None req-1428d991-bd49-416a-93ce-9348ba39592c 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:20:58 np0005542249 nova_compute[254900]: 2025-12-02 11:20:58.610 254904 DEBUG nova.scheduler.client.report [None req-1428d991-bd49-416a-93ce-9348ba39592c 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:20:58 np0005542249 nova_compute[254900]: 2025-12-02 11:20:58.630 254904 DEBUG oslo_concurrency.lockutils [None req-1428d991-bd49-416a-93ce-9348ba39592c 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:20:58 np0005542249 nova_compute[254900]: 2025-12-02 11:20:58.701 254904 INFO nova.scheduler.client.report [None req-1428d991-bd49-416a-93ce-9348ba39592c 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Deleted allocations for instance 7bedde1c-9243-4d63-b574-154d2b7e78ef#033[00m
Dec  2 06:20:58 np0005542249 nova_compute[254900]: 2025-12-02 11:20:58.817 254904 DEBUG oslo_concurrency.lockutils [None req-1428d991-bd49-416a-93ce-9348ba39592c 382055aacd254f8bb9b170628992619d 4beaae6889da4e57bb304963bae13143 - - default default] Lock "7bedde1c-9243-4d63-b574-154d2b7e78ef" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:20:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1172: 321 pgs: 321 active+clean; 105 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.1 MiB/s wr, 154 op/s
Dec  2 06:21:01 np0005542249 nova_compute[254900]: 2025-12-02 11:21:01.352 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:21:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Dec  2 06:21:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Dec  2 06:21:01 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Dec  2 06:21:01 np0005542249 nova_compute[254900]: 2025-12-02 11:21:01.758 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1174: 321 pgs: 321 active+clean; 126 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 144 op/s
Dec  2 06:21:02 np0005542249 podman[273017]: 2025-12-02 11:21:02.038207531 +0000 UTC m=+0.103405018 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  2 06:21:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Dec  2 06:21:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Dec  2 06:21:02 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Dec  2 06:21:03 np0005542249 nova_compute[254900]: 2025-12-02 11:21:03.036 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Dec  2 06:21:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Dec  2 06:21:03 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Dec  2 06:21:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1177: 321 pgs: 321 active+clean; 134 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 3.5 MiB/s wr, 89 op/s
Dec  2 06:21:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Dec  2 06:21:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Dec  2 06:21:04 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Dec  2 06:21:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:21:04 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4281634818' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:21:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:21:04 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4281634818' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:21:05 np0005542249 nova_compute[254900]: 2025-12-02 11:21:05.555 254904 DEBUG oslo_concurrency.lockutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Acquiring lock "5771f44f-b324-4d11-b452-a2c22e990c48" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:21:05 np0005542249 nova_compute[254900]: 2025-12-02 11:21:05.556 254904 DEBUG oslo_concurrency.lockutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "5771f44f-b324-4d11-b452-a2c22e990c48" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:21:05 np0005542249 nova_compute[254900]: 2025-12-02 11:21:05.574 254904 DEBUG nova.compute.manager [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 06:21:05 np0005542249 nova_compute[254900]: 2025-12-02 11:21:05.674 254904 DEBUG oslo_concurrency.lockutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:21:05 np0005542249 nova_compute[254900]: 2025-12-02 11:21:05.675 254904 DEBUG oslo_concurrency.lockutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:21:05 np0005542249 nova_compute[254900]: 2025-12-02 11:21:05.682 254904 DEBUG nova.virt.hardware [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 06:21:05 np0005542249 nova_compute[254900]: 2025-12-02 11:21:05.683 254904 INFO nova.compute.claims [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 06:21:05 np0005542249 nova_compute[254900]: 2025-12-02 11:21:05.787 254904 DEBUG oslo_concurrency.processutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:21:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1179: 321 pgs: 321 active+clean; 134 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.7 MiB/s wr, 92 op/s
Dec  2 06:21:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:21:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/370266250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:21:06 np0005542249 nova_compute[254900]: 2025-12-02 11:21:06.230 254904 DEBUG oslo_concurrency.processutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:21:06 np0005542249 nova_compute[254900]: 2025-12-02 11:21:06.238 254904 DEBUG nova.compute.provider_tree [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:21:06 np0005542249 nova_compute[254900]: 2025-12-02 11:21:06.258 254904 DEBUG nova.scheduler.client.report [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:21:06 np0005542249 nova_compute[254900]: 2025-12-02 11:21:06.284 254904 DEBUG oslo_concurrency.lockutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:21:06 np0005542249 nova_compute[254900]: 2025-12-02 11:21:06.285 254904 DEBUG nova.compute.manager [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 06:21:06 np0005542249 nova_compute[254900]: 2025-12-02 11:21:06.327 254904 DEBUG nova.compute.manager [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 06:21:06 np0005542249 nova_compute[254900]: 2025-12-02 11:21:06.329 254904 DEBUG nova.network.neutron [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 06:21:06 np0005542249 nova_compute[254900]: 2025-12-02 11:21:06.336 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:06 np0005542249 nova_compute[254900]: 2025-12-02 11:21:06.353 254904 INFO nova.virt.libvirt.driver [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 06:21:06 np0005542249 nova_compute[254900]: 2025-12-02 11:21:06.359 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:21:06 np0005542249 nova_compute[254900]: 2025-12-02 11:21:06.375 254904 DEBUG nova.compute.manager [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 06:21:06 np0005542249 nova_compute[254900]: 2025-12-02 11:21:06.481 254904 DEBUG nova.compute.manager [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 06:21:06 np0005542249 nova_compute[254900]: 2025-12-02 11:21:06.484 254904 DEBUG nova.virt.libvirt.driver [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 06:21:06 np0005542249 nova_compute[254900]: 2025-12-02 11:21:06.485 254904 INFO nova.virt.libvirt.driver [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Creating image(s)#033[00m
Dec  2 06:21:06 np0005542249 nova_compute[254900]: 2025-12-02 11:21:06.523 254904 DEBUG nova.storage.rbd_utils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] rbd image 5771f44f-b324-4d11-b452-a2c22e990c48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:21:06 np0005542249 nova_compute[254900]: 2025-12-02 11:21:06.560 254904 DEBUG nova.storage.rbd_utils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] rbd image 5771f44f-b324-4d11-b452-a2c22e990c48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:21:06 np0005542249 nova_compute[254900]: 2025-12-02 11:21:06.597 254904 DEBUG nova.storage.rbd_utils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] rbd image 5771f44f-b324-4d11-b452-a2c22e990c48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:21:06 np0005542249 nova_compute[254900]: 2025-12-02 11:21:06.603 254904 DEBUG oslo_concurrency.processutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:21:06 np0005542249 nova_compute[254900]: 2025-12-02 11:21:06.637 254904 DEBUG nova.policy [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e0796090ff07418b99397a7f13f11633', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fff78a31f26746918caf04706b12b741', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 06:21:06 np0005542249 nova_compute[254900]: 2025-12-02 11:21:06.699 254904 DEBUG oslo_concurrency.processutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:21:06 np0005542249 nova_compute[254900]: 2025-12-02 11:21:06.700 254904 DEBUG oslo_concurrency.lockutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Acquiring lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:21:06 np0005542249 nova_compute[254900]: 2025-12-02 11:21:06.701 254904 DEBUG oslo_concurrency.lockutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:21:06 np0005542249 nova_compute[254900]: 2025-12-02 11:21:06.702 254904 DEBUG oslo_concurrency.lockutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:21:06 np0005542249 nova_compute[254900]: 2025-12-02 11:21:06.734 254904 DEBUG nova.storage.rbd_utils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] rbd image 5771f44f-b324-4d11-b452-a2c22e990c48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:21:06 np0005542249 nova_compute[254900]: 2025-12-02 11:21:06.739 254904 DEBUG oslo_concurrency.processutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 5771f44f-b324-4d11-b452-a2c22e990c48_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:21:07 np0005542249 nova_compute[254900]: 2025-12-02 11:21:07.101 254904 DEBUG oslo_concurrency.processutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 5771f44f-b324-4d11-b452-a2c22e990c48_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:21:07 np0005542249 nova_compute[254900]: 2025-12-02 11:21:07.195 254904 DEBUG nova.storage.rbd_utils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] resizing rbd image 5771f44f-b324-4d11-b452-a2c22e990c48_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  2 06:21:07 np0005542249 nova_compute[254900]: 2025-12-02 11:21:07.332 254904 DEBUG nova.objects.instance [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lazy-loading 'migration_context' on Instance uuid 5771f44f-b324-4d11-b452-a2c22e990c48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:21:07 np0005542249 nova_compute[254900]: 2025-12-02 11:21:07.347 254904 DEBUG nova.virt.libvirt.driver [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  2 06:21:07 np0005542249 nova_compute[254900]: 2025-12-02 11:21:07.348 254904 DEBUG nova.virt.libvirt.driver [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Ensure instance console log exists: /var/lib/nova/instances/5771f44f-b324-4d11-b452-a2c22e990c48/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 06:21:07 np0005542249 nova_compute[254900]: 2025-12-02 11:21:07.348 254904 DEBUG oslo_concurrency.lockutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:21:07 np0005542249 nova_compute[254900]: 2025-12-02 11:21:07.349 254904 DEBUG oslo_concurrency.lockutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:21:07 np0005542249 nova_compute[254900]: 2025-12-02 11:21:07.349 254904 DEBUG oslo_concurrency.lockutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:21:07 np0005542249 nova_compute[254900]: 2025-12-02 11:21:07.720 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:07 np0005542249 nova_compute[254900]: 2025-12-02 11:21:07.944 254904 DEBUG nova.network.neutron [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Successfully created port: f393e1c3-9bd9-4748-a528-b99dd193646c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 06:21:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1180: 321 pgs: 321 active+clean; 142 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 411 KiB/s wr, 89 op/s
Dec  2 06:21:07 np0005542249 nova_compute[254900]: 2025-12-02 11:21:07.976 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:08 np0005542249 nova_compute[254900]: 2025-12-02 11:21:08.038 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:09 np0005542249 nova_compute[254900]: 2025-12-02 11:21:09.369 254904 DEBUG nova.network.neutron [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Successfully updated port: f393e1c3-9bd9-4748-a528-b99dd193646c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 06:21:09 np0005542249 nova_compute[254900]: 2025-12-02 11:21:09.389 254904 DEBUG oslo_concurrency.lockutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Acquiring lock "refresh_cache-5771f44f-b324-4d11-b452-a2c22e990c48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:21:09 np0005542249 nova_compute[254900]: 2025-12-02 11:21:09.390 254904 DEBUG oslo_concurrency.lockutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Acquired lock "refresh_cache-5771f44f-b324-4d11-b452-a2c22e990c48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:21:09 np0005542249 nova_compute[254900]: 2025-12-02 11:21:09.390 254904 DEBUG nova.network.neutron [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 06:21:09 np0005542249 nova_compute[254900]: 2025-12-02 11:21:09.566 254904 DEBUG nova.compute.manager [req-299a031f-a1b8-48d4-8733-f10cc6ea548b req-60be40d9-2c48-4959-b17e-b0daacf1a1b3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Received event network-changed-f393e1c3-9bd9-4748-a528-b99dd193646c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:21:09 np0005542249 nova_compute[254900]: 2025-12-02 11:21:09.567 254904 DEBUG nova.compute.manager [req-299a031f-a1b8-48d4-8733-f10cc6ea548b req-60be40d9-2c48-4959-b17e-b0daacf1a1b3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Refreshing instance network info cache due to event network-changed-f393e1c3-9bd9-4748-a528-b99dd193646c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:21:09 np0005542249 nova_compute[254900]: 2025-12-02 11:21:09.567 254904 DEBUG oslo_concurrency.lockutils [req-299a031f-a1b8-48d4-8733-f10cc6ea548b req-60be40d9-2c48-4959-b17e-b0daacf1a1b3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-5771f44f-b324-4d11-b452-a2c22e990c48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:21:09 np0005542249 nova_compute[254900]: 2025-12-02 11:21:09.636 254904 DEBUG nova.network.neutron [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 06:21:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1181: 321 pgs: 321 active+clean; 158 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 1.4 MiB/s wr, 91 op/s
Dec  2 06:21:10 np0005542249 podman[273226]: 2025-12-02 11:21:10.120288301 +0000 UTC m=+0.191548533 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  2 06:21:10 np0005542249 nova_compute[254900]: 2025-12-02 11:21:10.564 254904 DEBUG nova.network.neutron [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Updating instance_info_cache with network_info: [{"id": "f393e1c3-9bd9-4748-a528-b99dd193646c", "address": "fa:16:3e:4e:4e:d3", "network": {"id": "df73b9ab-de8d-40fa-9bf0-aa773bb32d3a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-950743948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff78a31f26746918caf04706b12b741", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf393e1c3-9b", "ovs_interfaceid": "f393e1c3-9bd9-4748-a528-b99dd193646c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:21:10 np0005542249 nova_compute[254900]: 2025-12-02 11:21:10.588 254904 DEBUG oslo_concurrency.lockutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Releasing lock "refresh_cache-5771f44f-b324-4d11-b452-a2c22e990c48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:21:10 np0005542249 nova_compute[254900]: 2025-12-02 11:21:10.589 254904 DEBUG nova.compute.manager [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Instance network_info: |[{"id": "f393e1c3-9bd9-4748-a528-b99dd193646c", "address": "fa:16:3e:4e:4e:d3", "network": {"id": "df73b9ab-de8d-40fa-9bf0-aa773bb32d3a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-950743948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff78a31f26746918caf04706b12b741", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf393e1c3-9b", "ovs_interfaceid": "f393e1c3-9bd9-4748-a528-b99dd193646c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 06:21:10 np0005542249 nova_compute[254900]: 2025-12-02 11:21:10.590 254904 DEBUG oslo_concurrency.lockutils [req-299a031f-a1b8-48d4-8733-f10cc6ea548b req-60be40d9-2c48-4959-b17e-b0daacf1a1b3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-5771f44f-b324-4d11-b452-a2c22e990c48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:21:10 np0005542249 nova_compute[254900]: 2025-12-02 11:21:10.590 254904 DEBUG nova.network.neutron [req-299a031f-a1b8-48d4-8733-f10cc6ea548b req-60be40d9-2c48-4959-b17e-b0daacf1a1b3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Refreshing network info cache for port f393e1c3-9bd9-4748-a528-b99dd193646c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:21:10 np0005542249 nova_compute[254900]: 2025-12-02 11:21:10.595 254904 DEBUG nova.virt.libvirt.driver [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Start _get_guest_xml network_info=[{"id": "f393e1c3-9bd9-4748-a528-b99dd193646c", "address": "fa:16:3e:4e:4e:d3", "network": {"id": "df73b9ab-de8d-40fa-9bf0-aa773bb32d3a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-950743948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff78a31f26746918caf04706b12b741", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf393e1c3-9b", "ovs_interfaceid": "f393e1c3-9bd9-4748-a528-b99dd193646c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'image_id': '5a40f66c-ab43-47dd-9880-e59f9fa2c60e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:21:10 np0005542249 nova_compute[254900]: 2025-12-02 11:21:10.604 254904 WARNING nova.virt.libvirt.driver [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:21:10 np0005542249 nova_compute[254900]: 2025-12-02 11:21:10.614 254904 DEBUG nova.virt.libvirt.host [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:21:10 np0005542249 nova_compute[254900]: 2025-12-02 11:21:10.615 254904 DEBUG nova.virt.libvirt.host [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:21:10 np0005542249 nova_compute[254900]: 2025-12-02 11:21:10.621 254904 DEBUG nova.virt.libvirt.host [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:21:10 np0005542249 nova_compute[254900]: 2025-12-02 11:21:10.622 254904 DEBUG nova.virt.libvirt.host [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:21:10 np0005542249 nova_compute[254900]: 2025-12-02 11:21:10.622 254904 DEBUG nova.virt.libvirt.driver [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:21:10 np0005542249 nova_compute[254900]: 2025-12-02 11:21:10.623 254904 DEBUG nova.virt.hardware [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:21:10 np0005542249 nova_compute[254900]: 2025-12-02 11:21:10.624 254904 DEBUG nova.virt.hardware [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:21:10 np0005542249 nova_compute[254900]: 2025-12-02 11:21:10.624 254904 DEBUG nova.virt.hardware [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:21:10 np0005542249 nova_compute[254900]: 2025-12-02 11:21:10.624 254904 DEBUG nova.virt.hardware [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:21:10 np0005542249 nova_compute[254900]: 2025-12-02 11:21:10.625 254904 DEBUG nova.virt.hardware [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:21:10 np0005542249 nova_compute[254900]: 2025-12-02 11:21:10.625 254904 DEBUG nova.virt.hardware [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:21:10 np0005542249 nova_compute[254900]: 2025-12-02 11:21:10.626 254904 DEBUG nova.virt.hardware [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:21:10 np0005542249 nova_compute[254900]: 2025-12-02 11:21:10.626 254904 DEBUG nova.virt.hardware [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:21:10 np0005542249 nova_compute[254900]: 2025-12-02 11:21:10.627 254904 DEBUG nova.virt.hardware [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:21:10 np0005542249 nova_compute[254900]: 2025-12-02 11:21:10.627 254904 DEBUG nova.virt.hardware [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:21:10 np0005542249 nova_compute[254900]: 2025-12-02 11:21:10.628 254904 DEBUG nova.virt.hardware [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:21:10 np0005542249 nova_compute[254900]: 2025-12-02 11:21:10.633 254904 DEBUG oslo_concurrency.processutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:21:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:21:11 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1744107646' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.131 254904 DEBUG oslo_concurrency.processutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.160 254904 DEBUG nova.storage.rbd_utils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] rbd image 5771f44f-b324-4d11-b452-a2c22e990c48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.165 254904 DEBUG oslo_concurrency.processutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.280 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764674456.2794049, 7bedde1c-9243-4d63-b574-154d2b7e78ef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.281 254904 INFO nova.compute.manager [-] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] VM Stopped (Lifecycle Event)#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.302 254904 DEBUG nova.compute.manager [None req-89e2ed76-c03e-486a-b17e-1afaaaafc94a - - - - - -] [instance: 7bedde1c-9243-4d63-b574-154d2b7e78ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.362 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:21:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Dec  2 06:21:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Dec  2 06:21:11 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Dec  2 06:21:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:21:11 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/603472396' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.713 254904 DEBUG oslo_concurrency.processutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.715 254904 DEBUG nova.virt.libvirt.vif [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-465048395',display_name='tempest-VolumesSnapshotTestJSON-instance-465048395',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-465048395',id=8,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPU7qPrCI8kbBPSW9jQCNKEm8UHqJFIzgbHmvMHzii2gZdSdpoJnxyFr3zMkvJqvqJLI4PHcvcTY2SVNDguBmTrW9hzDpT7tGfwq75CHGE67RRM5E2fn/JKC+AdldivSGQ==',key_name='tempest-keypair-5305996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fff78a31f26746918caf04706b12b741',ramdisk_id='',reservation_id='r-un4zyblu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-1610940554',owner_user_name='tempest-VolumesSnapshotTestJSON-1610940554-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:21:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e0796090ff07418b99397a7f13f11633',uuid=5771f44f-b324-4d11-b452-a2c22e990c48,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f393e1c3-9bd9-4748-a528-b99dd193646c", "address": "fa:16:3e:4e:4e:d3", "network": {"id": "df73b9ab-de8d-40fa-9bf0-aa773bb32d3a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-950743948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff78a31f26746918caf04706b12b741", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf393e1c3-9b", "ovs_interfaceid": "f393e1c3-9bd9-4748-a528-b99dd193646c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.715 254904 DEBUG nova.network.os_vif_util [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Converting VIF {"id": "f393e1c3-9bd9-4748-a528-b99dd193646c", "address": "fa:16:3e:4e:4e:d3", "network": {"id": "df73b9ab-de8d-40fa-9bf0-aa773bb32d3a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-950743948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff78a31f26746918caf04706b12b741", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf393e1c3-9b", "ovs_interfaceid": "f393e1c3-9bd9-4748-a528-b99dd193646c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.716 254904 DEBUG nova.network.os_vif_util [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:4e:d3,bridge_name='br-int',has_traffic_filtering=True,id=f393e1c3-9bd9-4748-a528-b99dd193646c,network=Network(df73b9ab-de8d-40fa-9bf0-aa773bb32d3a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf393e1c3-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.717 254904 DEBUG nova.objects.instance [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5771f44f-b324-4d11-b452-a2c22e990c48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.734 254904 DEBUG nova.virt.libvirt.driver [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:21:11 np0005542249 nova_compute[254900]:  <uuid>5771f44f-b324-4d11-b452-a2c22e990c48</uuid>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:  <name>instance-00000008</name>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <nova:name>tempest-VolumesSnapshotTestJSON-instance-465048395</nova:name>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:21:10</nova:creationTime>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:21:11 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:        <nova:user uuid="e0796090ff07418b99397a7f13f11633">tempest-VolumesSnapshotTestJSON-1610940554-project-member</nova:user>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:        <nova:project uuid="fff78a31f26746918caf04706b12b741">tempest-VolumesSnapshotTestJSON-1610940554</nova:project>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <nova:root type="image" uuid="5a40f66c-ab43-47dd-9880-e59f9fa2c60e"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:        <nova:port uuid="f393e1c3-9bd9-4748-a528-b99dd193646c">
Dec  2 06:21:11 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <entry name="serial">5771f44f-b324-4d11-b452-a2c22e990c48</entry>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <entry name="uuid">5771f44f-b324-4d11-b452-a2c22e990c48</entry>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/5771f44f-b324-4d11-b452-a2c22e990c48_disk">
Dec  2 06:21:11 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:21:11 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/5771f44f-b324-4d11-b452-a2c22e990c48_disk.config">
Dec  2 06:21:11 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:21:11 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:4e:4e:d3"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <target dev="tapf393e1c3-9b"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/5771f44f-b324-4d11-b452-a2c22e990c48/console.log" append="off"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:21:11 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:21:11 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:21:11 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:21:11 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.735 254904 DEBUG nova.compute.manager [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Preparing to wait for external event network-vif-plugged-f393e1c3-9bd9-4748-a528-b99dd193646c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.736 254904 DEBUG oslo_concurrency.lockutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Acquiring lock "5771f44f-b324-4d11-b452-a2c22e990c48-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.737 254904 DEBUG oslo_concurrency.lockutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "5771f44f-b324-4d11-b452-a2c22e990c48-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.738 254904 DEBUG oslo_concurrency.lockutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "5771f44f-b324-4d11-b452-a2c22e990c48-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.739 254904 DEBUG nova.virt.libvirt.vif [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-465048395',display_name='tempest-VolumesSnapshotTestJSON-instance-465048395',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-465048395',id=8,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPU7qPrCI8kbBPSW9jQCNKEm8UHqJFIzgbHmvMHzii2gZdSdpoJnxyFr3zMkvJqvqJLI4PHcvcTY2SVNDguBmTrW9hzDpT7tGfwq75CHGE67RRM5E2fn/JKC+AdldivSGQ==',key_name='tempest-keypair-5305996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fff78a31f26746918caf04706b12b741',ramdisk_id='',reservation_id='r-un4zyblu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-1610940554',owner_user_name='tempest-VolumesSnapshotTestJSON-1610940554-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:21:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e0796090ff07418b99397a7f13f11633',uuid=5771f44f-b324-4d11-b452-a2c22e990c48,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f393e1c3-9bd9-4748-a528-b99dd193646c", "address": "fa:16:3e:4e:4e:d3", "network": {"id": "df73b9ab-de8d-40fa-9bf0-aa773bb32d3a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-950743948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff78a31f26746918caf04706b12b741", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf393e1c3-9b", "ovs_interfaceid": "f393e1c3-9bd9-4748-a528-b99dd193646c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.740 254904 DEBUG nova.network.os_vif_util [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Converting VIF {"id": "f393e1c3-9bd9-4748-a528-b99dd193646c", "address": "fa:16:3e:4e:4e:d3", "network": {"id": "df73b9ab-de8d-40fa-9bf0-aa773bb32d3a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-950743948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff78a31f26746918caf04706b12b741", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf393e1c3-9b", "ovs_interfaceid": "f393e1c3-9bd9-4748-a528-b99dd193646c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.741 254904 DEBUG nova.network.os_vif_util [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:4e:d3,bridge_name='br-int',has_traffic_filtering=True,id=f393e1c3-9bd9-4748-a528-b99dd193646c,network=Network(df73b9ab-de8d-40fa-9bf0-aa773bb32d3a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf393e1c3-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.742 254904 DEBUG os_vif [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:4e:d3,bridge_name='br-int',has_traffic_filtering=True,id=f393e1c3-9bd9-4748-a528-b99dd193646c,network=Network(df73b9ab-de8d-40fa-9bf0-aa773bb32d3a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf393e1c3-9b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.748 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.749 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.750 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.754 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.754 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf393e1c3-9b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.755 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf393e1c3-9b, col_values=(('external_ids', {'iface-id': 'f393e1c3-9bd9-4748-a528-b99dd193646c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4e:4e:d3', 'vm-uuid': '5771f44f-b324-4d11-b452-a2c22e990c48'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.758 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:11 np0005542249 NetworkManager[48987]: <info>  [1764674471.7592] manager: (tapf393e1c3-9b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.762 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.767 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.769 254904 INFO os_vif [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:4e:d3,bridge_name='br-int',has_traffic_filtering=True,id=f393e1c3-9bd9-4748-a528-b99dd193646c,network=Network(df73b9ab-de8d-40fa-9bf0-aa773bb32d3a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf393e1c3-9b')#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.845 254904 DEBUG nova.virt.libvirt.driver [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.846 254904 DEBUG nova.virt.libvirt.driver [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.846 254904 DEBUG nova.virt.libvirt.driver [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] No VIF found with MAC fa:16:3e:4e:4e:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.847 254904 INFO nova.virt.libvirt.driver [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Using config drive#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.874 254904 DEBUG nova.storage.rbd_utils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] rbd image 5771f44f-b324-4d11-b452-a2c22e990c48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.907 254904 DEBUG nova.network.neutron [req-299a031f-a1b8-48d4-8733-f10cc6ea548b req-60be40d9-2c48-4959-b17e-b0daacf1a1b3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Updated VIF entry in instance network info cache for port f393e1c3-9bd9-4748-a528-b99dd193646c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.907 254904 DEBUG nova.network.neutron [req-299a031f-a1b8-48d4-8733-f10cc6ea548b req-60be40d9-2c48-4959-b17e-b0daacf1a1b3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Updating instance_info_cache with network_info: [{"id": "f393e1c3-9bd9-4748-a528-b99dd193646c", "address": "fa:16:3e:4e:4e:d3", "network": {"id": "df73b9ab-de8d-40fa-9bf0-aa773bb32d3a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-950743948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff78a31f26746918caf04706b12b741", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf393e1c3-9b", "ovs_interfaceid": "f393e1c3-9bd9-4748-a528-b99dd193646c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:21:11 np0005542249 nova_compute[254900]: 2025-12-02 11:21:11.930 254904 DEBUG oslo_concurrency.lockutils [req-299a031f-a1b8-48d4-8733-f10cc6ea548b req-60be40d9-2c48-4959-b17e-b0daacf1a1b3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-5771f44f-b324-4d11-b452-a2c22e990c48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:21:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1183: 321 pgs: 321 active+clean; 180 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 2.7 MiB/s wr, 80 op/s
Dec  2 06:21:12 np0005542249 nova_compute[254900]: 2025-12-02 11:21:12.260 254904 INFO nova.virt.libvirt.driver [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Creating config drive at /var/lib/nova/instances/5771f44f-b324-4d11-b452-a2c22e990c48/disk.config#033[00m
Dec  2 06:21:12 np0005542249 nova_compute[254900]: 2025-12-02 11:21:12.270 254904 DEBUG oslo_concurrency.processutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5771f44f-b324-4d11-b452-a2c22e990c48/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5q19a61q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:21:12 np0005542249 nova_compute[254900]: 2025-12-02 11:21:12.417 254904 DEBUG oslo_concurrency.processutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5771f44f-b324-4d11-b452-a2c22e990c48/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5q19a61q" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:21:12 np0005542249 nova_compute[254900]: 2025-12-02 11:21:12.454 254904 DEBUG nova.storage.rbd_utils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] rbd image 5771f44f-b324-4d11-b452-a2c22e990c48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:21:12 np0005542249 nova_compute[254900]: 2025-12-02 11:21:12.459 254904 DEBUG oslo_concurrency.processutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5771f44f-b324-4d11-b452-a2c22e990c48/disk.config 5771f44f-b324-4d11-b452-a2c22e990c48_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:21:12 np0005542249 nova_compute[254900]: 2025-12-02 11:21:12.658 254904 DEBUG oslo_concurrency.processutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5771f44f-b324-4d11-b452-a2c22e990c48/disk.config 5771f44f-b324-4d11-b452-a2c22e990c48_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:21:12 np0005542249 nova_compute[254900]: 2025-12-02 11:21:12.660 254904 INFO nova.virt.libvirt.driver [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Deleting local config drive /var/lib/nova/instances/5771f44f-b324-4d11-b452-a2c22e990c48/disk.config because it was imported into RBD.#033[00m
Dec  2 06:21:12 np0005542249 kernel: tapf393e1c3-9b: entered promiscuous mode
Dec  2 06:21:12 np0005542249 NetworkManager[48987]: <info>  [1764674472.7444] manager: (tapf393e1c3-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Dec  2 06:21:12 np0005542249 ovn_controller[153849]: 2025-12-02T11:21:12Z|00086|binding|INFO|Claiming lport f393e1c3-9bd9-4748-a528-b99dd193646c for this chassis.
Dec  2 06:21:12 np0005542249 ovn_controller[153849]: 2025-12-02T11:21:12Z|00087|binding|INFO|f393e1c3-9bd9-4748-a528-b99dd193646c: Claiming fa:16:3e:4e:4e:d3 10.100.0.12
Dec  2 06:21:12 np0005542249 nova_compute[254900]: 2025-12-02 11:21:12.745 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:12 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:12.761 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:4e:d3 10.100.0.12'], port_security=['fa:16:3e:4e:4e:d3 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5771f44f-b324-4d11-b452-a2c22e990c48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fff78a31f26746918caf04706b12b741', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd6ee0e59-cb51-4b63-b81c-fc532195e47d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff54e9e6-62e9-4528-92f5-a3bb97a08852, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=f393e1c3-9bd9-4748-a528-b99dd193646c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:21:12 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:12.763 163757 INFO neutron.agent.ovn.metadata.agent [-] Port f393e1c3-9bd9-4748-a528-b99dd193646c in datapath df73b9ab-de8d-40fa-9bf0-aa773bb32d3a bound to our chassis#033[00m
Dec  2 06:21:12 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:12.765 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network df73b9ab-de8d-40fa-9bf0-aa773bb32d3a#033[00m
Dec  2 06:21:12 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:12.785 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[c652ffb4-67f6-4bb3-9e1b-f80c3bd1eca6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:12 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:12.787 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdf73b9ab-d1 in ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 06:21:12 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:12.790 262398 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdf73b9ab-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 06:21:12 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:12.790 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[1cecba31-7ac2-49d6-8727-92419605ca9c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:12 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:12.791 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[e2d9967c-5952-446f-a664-342a6e27c19b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:12 np0005542249 systemd-machined[216222]: New machine qemu-8-instance-00000008.
Dec  2 06:21:12 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:12.811 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[d2707e89-e34f-4ff4-abd6-da9ec432b68f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:12 np0005542249 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Dec  2 06:21:12 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:12.844 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[833e9ccc-3215-4434-ad4e-7256705e696f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:12 np0005542249 nova_compute[254900]: 2025-12-02 11:21:12.853 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:12 np0005542249 ovn_controller[153849]: 2025-12-02T11:21:12Z|00088|binding|INFO|Setting lport f393e1c3-9bd9-4748-a528-b99dd193646c ovn-installed in OVS
Dec  2 06:21:12 np0005542249 ovn_controller[153849]: 2025-12-02T11:21:12Z|00089|binding|INFO|Setting lport f393e1c3-9bd9-4748-a528-b99dd193646c up in Southbound
Dec  2 06:21:12 np0005542249 nova_compute[254900]: 2025-12-02 11:21:12.863 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:12 np0005542249 systemd-udevd[273410]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:21:12 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:12.890 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[dc932d50-0986-4b3a-b499-42bbb4b24941]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:12 np0005542249 NetworkManager[48987]: <info>  [1764674472.8962] device (tapf393e1c3-9b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:21:12 np0005542249 NetworkManager[48987]: <info>  [1764674472.8975] device (tapf393e1c3-9b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:21:12 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:12.900 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[b76fa5be-b573-4a6e-a015-c426188de026]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:12 np0005542249 NetworkManager[48987]: <info>  [1764674472.9013] manager: (tapdf73b9ab-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/56)
Dec  2 06:21:12 np0005542249 podman[273386]: 2025-12-02 11:21:12.926155998 +0000 UTC m=+0.144714344 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:21:12 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:12.941 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[c558d8bd-64c6-41db-85d1-633e06676458]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:12 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:12.944 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[e41913ab-ad20-4934-b128-ef533dbc14a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:12 np0005542249 NetworkManager[48987]: <info>  [1764674472.9727] device (tapdf73b9ab-d0): carrier: link connected
Dec  2 06:21:12 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:12.981 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[8ae64a17-edd3-463b-a3bd-0092daad85bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:13.003 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[d2ca6b1a-c345-4907-a95c-ebbe93dc959b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf73b9ab-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:51:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466579, 'reachable_time': 36709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273442, 'error': None, 'target': 'ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:13.030 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[b40ed9dc-d8a5-432b-b91e-5fb92c1ee698]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe15:5153'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 466579, 'tstamp': 466579}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273443, 'error': None, 'target': 'ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.039 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:13.058 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[3ad7c79f-4d69-444b-813c-9631d885468d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf73b9ab-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:51:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466579, 'reachable_time': 36709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273444, 'error': None, 'target': 'ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:13.103 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[2f97ecd8-1cba-475f-b3bb-ca43456c9dfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:13.191 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[1f88030d-0c5d-4d77-bbd6-4c1fb3d1719d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:13.193 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf73b9ab-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:13.193 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:13.194 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf73b9ab-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.196 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:13 np0005542249 NetworkManager[48987]: <info>  [1764674473.1975] manager: (tapdf73b9ab-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Dec  2 06:21:13 np0005542249 kernel: tapdf73b9ab-d0: entered promiscuous mode
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:13.199 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdf73b9ab-d0, col_values=(('external_ids', {'iface-id': '21f7ba63-7631-4701-921a-a830d0f08e6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:21:13 np0005542249 ovn_controller[153849]: 2025-12-02T11:21:13Z|00090|binding|INFO|Releasing lport 21f7ba63-7631-4701-921a-a830d0f08e6f from this chassis (sb_readonly=0)
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.226 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:13.227 163757 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/df73b9ab-de8d-40fa-9bf0-aa773bb32d3a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/df73b9ab-de8d-40fa-9bf0-aa773bb32d3a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:13.229 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[50d97a93-0f8c-4007-bcca-8e073467bf67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:13.230 163757 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]: global
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]:    log         /dev/log local0 debug
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]:    log-tag     haproxy-metadata-proxy-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]:    user        root
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]:    group       root
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]:    maxconn     1024
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]:    pidfile     /var/lib/neutron/external/pids/df73b9ab-de8d-40fa-9bf0-aa773bb32d3a.pid.haproxy
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]:    daemon
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]: defaults
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]:    log global
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]:    mode http
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]:    option httplog
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]:    option dontlognull
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]:    option http-server-close
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]:    option forwardfor
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]:    retries                 3
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]:    timeout http-request    30s
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]:    timeout connect         30s
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]:    timeout client          32s
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]:    timeout server          32s
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]:    timeout http-keep-alive 30s
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]: listen listener
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]:    bind 169.254.169.254:80
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]:    http-request add-header X-OVN-Network-ID df73b9ab-de8d-40fa-9bf0-aa773bb32d3a
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 06:21:13 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:13.231 163757 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a', 'env', 'PROCESS_TAG=haproxy-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/df73b9ab-de8d-40fa-9bf0-aa773bb32d3a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.290 254904 DEBUG nova.compute.manager [req-86c408f2-15f4-483a-8122-7fc33b39c8a7 req-7fbdf071-b4aa-4fa1-a759-dddd2a142372 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Received event network-vif-plugged-f393e1c3-9bd9-4748-a528-b99dd193646c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.291 254904 DEBUG oslo_concurrency.lockutils [req-86c408f2-15f4-483a-8122-7fc33b39c8a7 req-7fbdf071-b4aa-4fa1-a759-dddd2a142372 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "5771f44f-b324-4d11-b452-a2c22e990c48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.291 254904 DEBUG oslo_concurrency.lockutils [req-86c408f2-15f4-483a-8122-7fc33b39c8a7 req-7fbdf071-b4aa-4fa1-a759-dddd2a142372 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "5771f44f-b324-4d11-b452-a2c22e990c48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.292 254904 DEBUG oslo_concurrency.lockutils [req-86c408f2-15f4-483a-8122-7fc33b39c8a7 req-7fbdf071-b4aa-4fa1-a759-dddd2a142372 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "5771f44f-b324-4d11-b452-a2c22e990c48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.292 254904 DEBUG nova.compute.manager [req-86c408f2-15f4-483a-8122-7fc33b39c8a7 req-7fbdf071-b4aa-4fa1-a759-dddd2a142372 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Processing event network-vif-plugged-f393e1c3-9bd9-4748-a528-b99dd193646c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.299 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674473.2989511, 5771f44f-b324-4d11-b452-a2c22e990c48 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.299 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] VM Started (Lifecycle Event)
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.303 254904 DEBUG nova.compute.manager [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.309 254904 DEBUG nova.virt.libvirt.driver [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.314 254904 INFO nova.virt.libvirt.driver [-] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Instance spawned successfully.
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.314 254904 DEBUG nova.virt.libvirt.driver [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.321 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.326 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.340 254904 DEBUG nova.virt.libvirt.driver [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.340 254904 DEBUG nova.virt.libvirt.driver [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.341 254904 DEBUG nova.virt.libvirt.driver [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.342 254904 DEBUG nova.virt.libvirt.driver [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.343 254904 DEBUG nova.virt.libvirt.driver [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.344 254904 DEBUG nova.virt.libvirt.driver [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.349 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.350 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674473.300461, 5771f44f-b324-4d11-b452-a2c22e990c48 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.350 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] VM Paused (Lifecycle Event)
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.384 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.392 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674473.307212, 5771f44f-b324-4d11-b452-a2c22e990c48 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.392 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] VM Resumed (Lifecycle Event)
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.399 254904 INFO nova.compute.manager [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Took 6.92 seconds to spawn the instance on the hypervisor.
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.400 254904 DEBUG nova.compute.manager [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.411 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.415 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.443 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] During sync_power_state the instance has a pending task (spawning). Skip.
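The sync messages above compare "current DB power_state: 0" with "VM power_state: 1". These are the integer codes from nova's power-state module; a small lookup table makes the comparison readable (the dict itself is just for illustration, but the values match `nova/compute/power_state.py` as far as I know):

```python
# Integer power-state codes referenced by the handle_lifecycle_event lines
# above. The instance is still NOSTATE (0) in the database while libvirt
# already reports the domain RUNNING (1).
POWER_STATES = {
    0x00: "NOSTATE",    # DB state before the guest exists
    0x01: "RUNNING",    # what libvirt reports once the domain is up
    0x03: "PAUSED",
    0x04: "SHUTDOWN",
    0x06: "CRASHED",
    0x07: "SUSPENDED",
}

db_state, vm_state = 0, 1
print(POWER_STATES[db_state], "->", POWER_STATES[vm_state])  # NOSTATE -> RUNNING
```

Because the instance still has task_state `spawning`, `sync_power_state` deliberately skips reconciling the mismatch, as the INFO line above notes.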
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.468 254904 INFO nova.compute.manager [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Took 7.84 seconds to build instance.
Dec  2 06:21:13 np0005542249 nova_compute[254900]: 2025-12-02 11:21:13.485 254904 DEBUG oslo_concurrency.lockutils [None req-149eb624-b233-4101-abbb-b532ccb6b3c9 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "5771f44f-b324-4d11-b452-a2c22e990c48" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.928s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:21:13 np0005542249 podman[273518]: 2025-12-02 11:21:13.723460068 +0000 UTC m=+0.070743920 container create 3bad0a1e998dfbafd8d84da9451c4fe26d5d8ef8f5cce960b79de7607c8474bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:21:13 np0005542249 systemd[1]: Started libpod-conmon-3bad0a1e998dfbafd8d84da9451c4fe26d5d8ef8f5cce960b79de7607c8474bd.scope.
Dec  2 06:21:13 np0005542249 podman[273518]: 2025-12-02 11:21:13.695973115 +0000 UTC m=+0.043257007 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:21:13 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:21:13 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a143d86d91350ae77ac171b9921569a963a18a6c1bf26fb20ac437bd8292d4c5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 06:21:13 np0005542249 podman[273518]: 2025-12-02 11:21:13.834427743 +0000 UTC m=+0.181711615 container init 3bad0a1e998dfbafd8d84da9451c4fe26d5d8ef8f5cce960b79de7607c8474bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:21:13 np0005542249 podman[273518]: 2025-12-02 11:21:13.842768183 +0000 UTC m=+0.190052035 container start 3bad0a1e998dfbafd8d84da9451c4fe26d5d8ef8f5cce960b79de7607c8474bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:21:13 np0005542249 neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a[273533]: [NOTICE]   (273537) : New worker (273539) forked
Dec  2 06:21:13 np0005542249 neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a[273533]: [NOTICE]   (273537) : Loading success.
Dec  2 06:21:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1184: 321 pgs: 321 active+clean; 180 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 2.2 MiB/s wr, 52 op/s
Dec  2 06:21:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:21:14 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/56064244' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:21:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Dec  2 06:21:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Dec  2 06:21:15 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Dec  2 06:21:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1186: 321 pgs: 321 active+clean; 180 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.5 MiB/s wr, 39 op/s
Dec  2 06:21:16 np0005542249 nova_compute[254900]: 2025-12-02 11:21:16.281 254904 DEBUG nova.compute.manager [req-20a8b8fa-4949-43f4-b00d-4dc88708feb5 req-30f175cc-c720-4c79-aa55-e702631f1153 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Received event network-vif-plugged-f393e1c3-9bd9-4748-a528-b99dd193646c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  2 06:21:16 np0005542249 nova_compute[254900]: 2025-12-02 11:21:16.282 254904 DEBUG oslo_concurrency.lockutils [req-20a8b8fa-4949-43f4-b00d-4dc88708feb5 req-30f175cc-c720-4c79-aa55-e702631f1153 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "5771f44f-b324-4d11-b452-a2c22e990c48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:21:16 np0005542249 nova_compute[254900]: 2025-12-02 11:21:16.282 254904 DEBUG oslo_concurrency.lockutils [req-20a8b8fa-4949-43f4-b00d-4dc88708feb5 req-30f175cc-c720-4c79-aa55-e702631f1153 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "5771f44f-b324-4d11-b452-a2c22e990c48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:21:16 np0005542249 nova_compute[254900]: 2025-12-02 11:21:16.282 254904 DEBUG oslo_concurrency.lockutils [req-20a8b8fa-4949-43f4-b00d-4dc88708feb5 req-30f175cc-c720-4c79-aa55-e702631f1153 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "5771f44f-b324-4d11-b452-a2c22e990c48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:21:16 np0005542249 nova_compute[254900]: 2025-12-02 11:21:16.282 254904 DEBUG nova.compute.manager [req-20a8b8fa-4949-43f4-b00d-4dc88708feb5 req-30f175cc-c720-4c79-aa55-e702631f1153 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] No waiting events found dispatching network-vif-plugged-f393e1c3-9bd9-4748-a528-b99dd193646c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  2 06:21:16 np0005542249 nova_compute[254900]: 2025-12-02 11:21:16.283 254904 WARNING nova.compute.manager [req-20a8b8fa-4949-43f4-b00d-4dc88708feb5 req-30f175cc-c720-4c79-aa55-e702631f1153 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Received unexpected event network-vif-plugged-f393e1c3-9bd9-4748-a528-b99dd193646c for instance with vm_state active and task_state None.
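The "No waiting events found" / "Received unexpected event" pair above is the pop-or-warn dispatch for external instance events: the compute manager keeps a per-instance map of expected events, an incoming Neutron notification pops the matching waiter, and a second delivery of the same event (as happens here, after the first `network-vif-plugged` was already consumed during spawn) finds nothing and only logs a warning. An illustrative sketch of that data structure (not nova's actual code; names are hypothetical):

```python
# Per-instance registry of expected external events. Nova guards this map
# with the "<uuid>-events" lock seen in the log; this sketch omits locking.
_waiting_events = {}  # instance_uuid -> {event_name: callback}

def prepare_for_event(instance_uuid, event_name, callback):
    """Register interest in an event before triggering the action (e.g. VIF plug)."""
    _waiting_events.setdefault(instance_uuid, {})[event_name] = callback

def pop_instance_event(instance_uuid, event_name):
    """Pop the waiter for an incoming event, or warn if nobody expected it."""
    waiter = _waiting_events.get(instance_uuid, {}).pop(event_name, None)
    if waiter is None:
        print(f"Received unexpected event {event_name} "
              f"for instance {instance_uuid}")
    return waiter

prepare_for_event("5771f44f", "network-vif-plugged-f393e1c3", lambda: None)
pop_instance_event("5771f44f", "network-vif-plugged-f393e1c3")  # consumed by spawn
pop_instance_event("5771f44f", "network-vif-plugged-f393e1c3")  # duplicate: warning
```

The warning is benign here: the instance is already `active` with no pending task, so the duplicate notification has nothing left to unblock.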
Dec  2 06:21:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:21:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Dec  2 06:21:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Dec  2 06:21:16 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Dec  2 06:21:16 np0005542249 nova_compute[254900]: 2025-12-02 11:21:16.759 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:21:17 np0005542249 nova_compute[254900]: 2025-12-02 11:21:17.007 254904 DEBUG oslo_concurrency.lockutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Acquiring lock "415ead6c-ffe0-4426-a145-1cb487cfa30f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:21:17 np0005542249 nova_compute[254900]: 2025-12-02 11:21:17.008 254904 DEBUG oslo_concurrency.lockutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lock "415ead6c-ffe0-4426-a145-1cb487cfa30f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:21:17 np0005542249 nova_compute[254900]: 2025-12-02 11:21:17.029 254904 DEBUG nova.compute.manager [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  2 06:21:17 np0005542249 nova_compute[254900]: 2025-12-02 11:21:17.123 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:21:17 np0005542249 NetworkManager[48987]: <info>  [1764674477.1247] manager: (patch-br-int-to-provnet-236defd8-cd9f-40fb-be9d-c3108d5fe981): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Dec  2 06:21:17 np0005542249 NetworkManager[48987]: <info>  [1764674477.1259] manager: (patch-provnet-236defd8-cd9f-40fb-be9d-c3108d5fe981-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Dec  2 06:21:17 np0005542249 nova_compute[254900]: 2025-12-02 11:21:17.133 254904 DEBUG oslo_concurrency.lockutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:21:17 np0005542249 nova_compute[254900]: 2025-12-02 11:21:17.134 254904 DEBUG oslo_concurrency.lockutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:21:17 np0005542249 nova_compute[254900]: 2025-12-02 11:21:17.144 254904 DEBUG nova.virt.hardware [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  2 06:21:17 np0005542249 nova_compute[254900]: 2025-12-02 11:21:17.144 254904 INFO nova.compute.claims [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Claim successful on node compute-0.ctlplane.example.com
Dec  2 06:21:17 np0005542249 nova_compute[254900]: 2025-12-02 11:21:17.303 254904 DEBUG oslo_concurrency.processutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 06:21:17 np0005542249 ovn_controller[153849]: 2025-12-02T11:21:17Z|00091|binding|INFO|Releasing lport 21f7ba63-7631-4701-921a-a830d0f08e6f from this chassis (sb_readonly=0)
Dec  2 06:21:17 np0005542249 ovn_controller[153849]: 2025-12-02T11:21:17Z|00092|binding|INFO|Releasing lport 21f7ba63-7631-4701-921a-a830d0f08e6f from this chassis (sb_readonly=0)
Dec  2 06:21:17 np0005542249 nova_compute[254900]: 2025-12-02 11:21:17.351 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:21:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:21:17 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2326110686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:21:17 np0005542249 nova_compute[254900]: 2025-12-02 11:21:17.790 254904 DEBUG oslo_concurrency.processutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:21:17 np0005542249 nova_compute[254900]: 2025-12-02 11:21:17.800 254904 DEBUG nova.compute.provider_tree [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  2 06:21:17 np0005542249 nova_compute[254900]: 2025-12-02 11:21:17.832 254904 DEBUG nova.scheduler.client.report [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
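The inventory dict logged above is what placement uses to size this host. Schedulable capacity per resource class is (total - reserved) * allocation_ratio, which is why an 8-core node can accept 32 vCPUs of claims. A small sketch of that arithmetic, using the exact values from the log line (the `capacity` helper is illustrative, not a placement API):

```python
# Inventory data as logged by nova.scheduler.client.report above
# (min_unit/max_unit/step_size omitted; they don't affect totals).
inventory = {
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

def capacity(inv):
    """Schedulable capacity per resource class: (total - reserved) * ratio."""
    return {rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
            for rc, v in inv.items()}

print(capacity(inventory))
# e.g. VCPU: (8 - 0) * 4.0 = 32.0 schedulable vCPUs on this 8-core host
```

The DISK_GB ratio below 1.0 (0.9) under-commits the Ceph-backed disk, leaving headroom below the raw 59 GiB reported by `ceph df`.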
Dec  2 06:21:17 np0005542249 nova_compute[254900]: 2025-12-02 11:21:17.855 254904 DEBUG oslo_concurrency.lockutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:21:17 np0005542249 nova_compute[254900]: 2025-12-02 11:21:17.856 254904 DEBUG nova.compute.manager [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  2 06:21:17 np0005542249 nova_compute[254900]: 2025-12-02 11:21:17.914 254904 DEBUG nova.compute.manager [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  2 06:21:17 np0005542249 nova_compute[254900]: 2025-12-02 11:21:17.914 254904 DEBUG nova.network.neutron [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  2 06:21:17 np0005542249 nova_compute[254900]: 2025-12-02 11:21:17.960 254904 INFO nova.virt.libvirt.driver [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  2 06:21:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1188: 321 pgs: 321 active+clean; 180 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 137 op/s
Dec  2 06:21:17 np0005542249 nova_compute[254900]: 2025-12-02 11:21:17.982 254904 DEBUG nova.compute.manager [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.007 254904 DEBUG nova.compute.manager [req-99fcde87-c89e-4dfc-a47e-a2b90fd4384b req-f6cd72b6-2de4-4674-a96b-341fd66b5906 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Received event network-changed-f393e1c3-9bd9-4748-a528-b99dd193646c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.007 254904 DEBUG nova.compute.manager [req-99fcde87-c89e-4dfc-a47e-a2b90fd4384b req-f6cd72b6-2de4-4674-a96b-341fd66b5906 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Refreshing instance network info cache due to event network-changed-f393e1c3-9bd9-4748-a528-b99dd193646c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.007 254904 DEBUG oslo_concurrency.lockutils [req-99fcde87-c89e-4dfc-a47e-a2b90fd4384b req-f6cd72b6-2de4-4674-a96b-341fd66b5906 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-5771f44f-b324-4d11-b452-a2c22e990c48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.008 254904 DEBUG oslo_concurrency.lockutils [req-99fcde87-c89e-4dfc-a47e-a2b90fd4384b req-f6cd72b6-2de4-4674-a96b-341fd66b5906 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-5771f44f-b324-4d11-b452-a2c22e990c48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.008 254904 DEBUG nova.network.neutron [req-99fcde87-c89e-4dfc-a47e-a2b90fd4384b req-f6cd72b6-2de4-4674-a96b-341fd66b5906 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Refreshing network info cache for port f393e1c3-9bd9-4748-a528-b99dd193646c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.043 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.105 254904 DEBUG nova.compute.manager [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.107 254904 DEBUG nova.virt.libvirt.driver [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.107 254904 INFO nova.virt.libvirt.driver [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Creating image(s)
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.145 254904 DEBUG nova.storage.rbd_utils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] rbd image 415ead6c-ffe0-4426-a145-1cb487cfa30f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.174 254904 DEBUG nova.storage.rbd_utils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] rbd image 415ead6c-ffe0-4426-a145-1cb487cfa30f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.204 254904 DEBUG nova.storage.rbd_utils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] rbd image 415ead6c-ffe0-4426-a145-1cb487cfa30f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.209 254904 DEBUG oslo_concurrency.processutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.237 254904 DEBUG nova.policy [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '33395809f6bd4db1bf1ab3a67fdbc5d5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '401c4eb4c3ea4ca886484161dcd637b6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.276 254904 DEBUG oslo_concurrency.processutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.277 254904 DEBUG oslo_concurrency.lockutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Acquiring lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.278 254904 DEBUG oslo_concurrency.lockutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.278 254904 DEBUG oslo_concurrency.lockutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.310 254904 DEBUG nova.storage.rbd_utils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] rbd image 415ead6c-ffe0-4426-a145-1cb487cfa30f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.319 254904 DEBUG oslo_concurrency.processutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 415ead6c-ffe0-4426-a145-1cb487cfa30f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:21:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Dec  2 06:21:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Dec  2 06:21:18 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.634 254904 DEBUG oslo_concurrency.processutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 415ead6c-ffe0-4426-a145-1cb487cfa30f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.726 254904 DEBUG nova.storage.rbd_utils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] resizing rbd image 415ead6c-ffe0-4426-a145-1cb487cfa30f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.847 254904 DEBUG nova.objects.instance [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lazy-loading 'migration_context' on Instance uuid 415ead6c-ffe0-4426-a145-1cb487cfa30f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.867 254904 DEBUG nova.virt.libvirt.driver [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.868 254904 DEBUG nova.virt.libvirt.driver [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Ensure instance console log exists: /var/lib/nova/instances/415ead6c-ffe0-4426-a145-1cb487cfa30f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.869 254904 DEBUG oslo_concurrency.lockutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.869 254904 DEBUG oslo_concurrency.lockutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:21:18 np0005542249 nova_compute[254900]: 2025-12-02 11:21:18.870 254904 DEBUG oslo_concurrency.lockutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:21:19 np0005542249 nova_compute[254900]: 2025-12-02 11:21:19.479 254904 DEBUG nova.network.neutron [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Successfully created port: 41345e4a-b8a8-4ed2-80a8-e69289b0f8fb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 06:21:19 np0005542249 nova_compute[254900]: 2025-12-02 11:21:19.558 254904 DEBUG nova.network.neutron [req-99fcde87-c89e-4dfc-a47e-a2b90fd4384b req-f6cd72b6-2de4-4674-a96b-341fd66b5906 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Updated VIF entry in instance network info cache for port f393e1c3-9bd9-4748-a528-b99dd193646c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:21:19 np0005542249 nova_compute[254900]: 2025-12-02 11:21:19.559 254904 DEBUG nova.network.neutron [req-99fcde87-c89e-4dfc-a47e-a2b90fd4384b req-f6cd72b6-2de4-4674-a96b-341fd66b5906 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Updating instance_info_cache with network_info: [{"id": "f393e1c3-9bd9-4748-a528-b99dd193646c", "address": "fa:16:3e:4e:4e:d3", "network": {"id": "df73b9ab-de8d-40fa-9bf0-aa773bb32d3a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-950743948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff78a31f26746918caf04706b12b741", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf393e1c3-9b", "ovs_interfaceid": "f393e1c3-9bd9-4748-a528-b99dd193646c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:21:19 np0005542249 nova_compute[254900]: 2025-12-02 11:21:19.578 254904 DEBUG oslo_concurrency.lockutils [req-99fcde87-c89e-4dfc-a47e-a2b90fd4384b req-f6cd72b6-2de4-4674-a96b-341fd66b5906 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-5771f44f-b324-4d11-b452-a2c22e990c48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:21:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:19.836 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:21:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:19.837 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:21:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:19.838 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:21:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1190: 321 pgs: 321 active+clean; 185 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 160 KiB/s wr, 209 op/s
Dec  2 06:21:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Dec  2 06:21:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Dec  2 06:21:20 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Dec  2 06:21:20 np0005542249 nova_compute[254900]: 2025-12-02 11:21:20.715 254904 DEBUG nova.network.neutron [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Successfully updated port: 41345e4a-b8a8-4ed2-80a8-e69289b0f8fb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 06:21:20 np0005542249 nova_compute[254900]: 2025-12-02 11:21:20.731 254904 DEBUG oslo_concurrency.lockutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Acquiring lock "refresh_cache-415ead6c-ffe0-4426-a145-1cb487cfa30f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:21:20 np0005542249 nova_compute[254900]: 2025-12-02 11:21:20.731 254904 DEBUG oslo_concurrency.lockutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Acquired lock "refresh_cache-415ead6c-ffe0-4426-a145-1cb487cfa30f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:21:20 np0005542249 nova_compute[254900]: 2025-12-02 11:21:20.732 254904 DEBUG nova.network.neutron [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 06:21:20 np0005542249 nova_compute[254900]: 2025-12-02 11:21:20.793 254904 DEBUG nova.compute.manager [req-c883d15b-f049-4c80-9c07-627843353cb2 req-7a7388e0-122f-428a-b13a-f38e22f8c78e 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Received event network-changed-41345e4a-b8a8-4ed2-80a8-e69289b0f8fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:21:20 np0005542249 nova_compute[254900]: 2025-12-02 11:21:20.793 254904 DEBUG nova.compute.manager [req-c883d15b-f049-4c80-9c07-627843353cb2 req-7a7388e0-122f-428a-b13a-f38e22f8c78e 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Refreshing instance network info cache due to event network-changed-41345e4a-b8a8-4ed2-80a8-e69289b0f8fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:21:20 np0005542249 nova_compute[254900]: 2025-12-02 11:21:20.793 254904 DEBUG oslo_concurrency.lockutils [req-c883d15b-f049-4c80-9c07-627843353cb2 req-7a7388e0-122f-428a-b13a-f38e22f8c78e 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-415ead6c-ffe0-4426-a145-1cb487cfa30f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:21:20 np0005542249 nova_compute[254900]: 2025-12-02 11:21:20.917 254904 DEBUG nova.network.neutron [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 06:21:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:21:20 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/294476010' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:21:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:21:20 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/294476010' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:21:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:21:21 np0005542249 nova_compute[254900]: 2025-12-02 11:21:21.685 254904 DEBUG nova.network.neutron [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Updating instance_info_cache with network_info: [{"id": "41345e4a-b8a8-4ed2-80a8-e69289b0f8fb", "address": "fa:16:3e:4c:b5:d0", "network": {"id": "d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1780654330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "401c4eb4c3ea4ca886484161dcd637b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41345e4a-b8", "ovs_interfaceid": "41345e4a-b8a8-4ed2-80a8-e69289b0f8fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:21:21 np0005542249 nova_compute[254900]: 2025-12-02 11:21:21.719 254904 DEBUG oslo_concurrency.lockutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Releasing lock "refresh_cache-415ead6c-ffe0-4426-a145-1cb487cfa30f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:21:21 np0005542249 nova_compute[254900]: 2025-12-02 11:21:21.720 254904 DEBUG nova.compute.manager [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Instance network_info: |[{"id": "41345e4a-b8a8-4ed2-80a8-e69289b0f8fb", "address": "fa:16:3e:4c:b5:d0", "network": {"id": "d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1780654330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "401c4eb4c3ea4ca886484161dcd637b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41345e4a-b8", "ovs_interfaceid": "41345e4a-b8a8-4ed2-80a8-e69289b0f8fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 06:21:21 np0005542249 nova_compute[254900]: 2025-12-02 11:21:21.721 254904 DEBUG oslo_concurrency.lockutils [req-c883d15b-f049-4c80-9c07-627843353cb2 req-7a7388e0-122f-428a-b13a-f38e22f8c78e 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-415ead6c-ffe0-4426-a145-1cb487cfa30f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:21:21 np0005542249 nova_compute[254900]: 2025-12-02 11:21:21.721 254904 DEBUG nova.network.neutron [req-c883d15b-f049-4c80-9c07-627843353cb2 req-7a7388e0-122f-428a-b13a-f38e22f8c78e 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Refreshing network info cache for port 41345e4a-b8a8-4ed2-80a8-e69289b0f8fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:21:21 np0005542249 nova_compute[254900]: 2025-12-02 11:21:21.727 254904 DEBUG nova.virt.libvirt.driver [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Start _get_guest_xml network_info=[{"id": "41345e4a-b8a8-4ed2-80a8-e69289b0f8fb", "address": "fa:16:3e:4c:b5:d0", "network": {"id": "d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1780654330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "401c4eb4c3ea4ca886484161dcd637b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41345e4a-b8", "ovs_interfaceid": "41345e4a-b8a8-4ed2-80a8-e69289b0f8fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'image_id': '5a40f66c-ab43-47dd-9880-e59f9fa2c60e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:21:21 np0005542249 nova_compute[254900]: 2025-12-02 11:21:21.742 254904 WARNING nova.virt.libvirt.driver [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:21:21 np0005542249 nova_compute[254900]: 2025-12-02 11:21:21.761 254904 DEBUG nova.virt.libvirt.host [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:21:21 np0005542249 nova_compute[254900]: 2025-12-02 11:21:21.762 254904 DEBUG nova.virt.libvirt.host [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:21:21 np0005542249 nova_compute[254900]: 2025-12-02 11:21:21.763 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:21 np0005542249 nova_compute[254900]: 2025-12-02 11:21:21.769 254904 DEBUG nova.virt.libvirt.host [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:21:21 np0005542249 nova_compute[254900]: 2025-12-02 11:21:21.771 254904 DEBUG nova.virt.libvirt.host [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:21:21 np0005542249 nova_compute[254900]: 2025-12-02 11:21:21.772 254904 DEBUG nova.virt.libvirt.driver [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:21:21 np0005542249 nova_compute[254900]: 2025-12-02 11:21:21.772 254904 DEBUG nova.virt.hardware [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:21:21 np0005542249 nova_compute[254900]: 2025-12-02 11:21:21.772 254904 DEBUG nova.virt.hardware [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:21:21 np0005542249 nova_compute[254900]: 2025-12-02 11:21:21.773 254904 DEBUG nova.virt.hardware [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:21:21 np0005542249 nova_compute[254900]: 2025-12-02 11:21:21.773 254904 DEBUG nova.virt.hardware [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:21:21 np0005542249 nova_compute[254900]: 2025-12-02 11:21:21.773 254904 DEBUG nova.virt.hardware [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:21:21 np0005542249 nova_compute[254900]: 2025-12-02 11:21:21.774 254904 DEBUG nova.virt.hardware [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:21:21 np0005542249 nova_compute[254900]: 2025-12-02 11:21:21.775 254904 DEBUG nova.virt.hardware [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:21:21 np0005542249 nova_compute[254900]: 2025-12-02 11:21:21.777 254904 DEBUG nova.virt.hardware [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:21:21 np0005542249 nova_compute[254900]: 2025-12-02 11:21:21.777 254904 DEBUG nova.virt.hardware [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:21:21 np0005542249 nova_compute[254900]: 2025-12-02 11:21:21.777 254904 DEBUG nova.virt.hardware [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:21:21 np0005542249 nova_compute[254900]: 2025-12-02 11:21:21.778 254904 DEBUG nova.virt.hardware [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:21:21 np0005542249 nova_compute[254900]: 2025-12-02 11:21:21.786 254904 DEBUG oslo_concurrency.processutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:21:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1192: 321 pgs: 321 active+clean; 208 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 249 op/s
Dec  2 06:21:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:21:22 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2388474644' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.265 254904 DEBUG oslo_concurrency.processutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.299 254904 DEBUG nova.storage.rbd_utils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] rbd image 415ead6c-ffe0-4426-a145-1cb487cfa30f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.305 254904 DEBUG oslo_concurrency.processutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:21:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:21:22 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3663707360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.772 254904 DEBUG oslo_concurrency.processutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.775 254904 DEBUG nova.virt.libvirt.vif [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:21:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-133415757',display_name='tempest-TestEncryptedCinderVolumes-server-133415757',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-133415757',id=9,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEXEYwVhOXVaKmb16qIGzsPPwQMmvRw3pYewNYOxWs1L+HL4RkXgCdhuRfSTGpysaJFWg3ygNTzcxEctMScVQQSoZ5a4dTSi84EU1UrsvSAbV2JnkgGl+JMtFLIFfJ1zOA==',key_name='tempest-keypair-1068411774',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='401c4eb4c3ea4ca886484161dcd637b6',ramdisk_id='',reservation_id='r-xxeybq0j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1842533919',owner_user_name='tempest-TestEncryptedCinderVolumes-1842533919-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:21:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='33395809f6bd4db1bf1ab3a67fdbc5d5',uuid=415ead6c-ffe0-4426-a145-1cb487cfa30f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41345e4a-b8a8-4ed2-80a8-e69289b0f8fb", "address": "fa:16:3e:4c:b5:d0", "network": {"id": "d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1780654330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "401c4eb4c3ea4ca886484161dcd637b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41345e4a-b8", "ovs_interfaceid": "41345e4a-b8a8-4ed2-80a8-e69289b0f8fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.776 254904 DEBUG nova.network.os_vif_util [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Converting VIF {"id": "41345e4a-b8a8-4ed2-80a8-e69289b0f8fb", "address": "fa:16:3e:4c:b5:d0", "network": {"id": "d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1780654330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "401c4eb4c3ea4ca886484161dcd637b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41345e4a-b8", "ovs_interfaceid": "41345e4a-b8a8-4ed2-80a8-e69289b0f8fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.777 254904 DEBUG nova.network.os_vif_util [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:b5:d0,bridge_name='br-int',has_traffic_filtering=True,id=41345e4a-b8a8-4ed2-80a8-e69289b0f8fb,network=Network(d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41345e4a-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.779 254904 DEBUG nova.objects.instance [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 415ead6c-ffe0-4426-a145-1cb487cfa30f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.798 254904 DEBUG nova.virt.libvirt.driver [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:21:22 np0005542249 nova_compute[254900]:  <uuid>415ead6c-ffe0-4426-a145-1cb487cfa30f</uuid>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:  <name>instance-00000009</name>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-133415757</nova:name>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:21:21</nova:creationTime>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:21:22 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:        <nova:user uuid="33395809f6bd4db1bf1ab3a67fdbc5d5">tempest-TestEncryptedCinderVolumes-1842533919-project-member</nova:user>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:        <nova:project uuid="401c4eb4c3ea4ca886484161dcd637b6">tempest-TestEncryptedCinderVolumes-1842533919</nova:project>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <nova:root type="image" uuid="5a40f66c-ab43-47dd-9880-e59f9fa2c60e"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:        <nova:port uuid="41345e4a-b8a8-4ed2-80a8-e69289b0f8fb">
Dec  2 06:21:22 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <entry name="serial">415ead6c-ffe0-4426-a145-1cb487cfa30f</entry>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <entry name="uuid">415ead6c-ffe0-4426-a145-1cb487cfa30f</entry>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/415ead6c-ffe0-4426-a145-1cb487cfa30f_disk">
Dec  2 06:21:22 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:21:22 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/415ead6c-ffe0-4426-a145-1cb487cfa30f_disk.config">
Dec  2 06:21:22 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:21:22 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:4c:b5:d0"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <target dev="tap41345e4a-b8"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/415ead6c-ffe0-4426-a145-1cb487cfa30f/console.log" append="off"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:21:22 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:21:22 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:21:22 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:21:22 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.800 254904 DEBUG nova.compute.manager [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Preparing to wait for external event network-vif-plugged-41345e4a-b8a8-4ed2-80a8-e69289b0f8fb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.801 254904 DEBUG oslo_concurrency.lockutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Acquiring lock "415ead6c-ffe0-4426-a145-1cb487cfa30f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.801 254904 DEBUG oslo_concurrency.lockutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lock "415ead6c-ffe0-4426-a145-1cb487cfa30f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.802 254904 DEBUG oslo_concurrency.lockutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lock "415ead6c-ffe0-4426-a145-1cb487cfa30f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.803 254904 DEBUG nova.virt.libvirt.vif [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:21:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-133415757',display_name='tempest-TestEncryptedCinderVolumes-server-133415757',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-133415757',id=9,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEXEYwVhOXVaKmb16qIGzsPPwQMmvRw3pYewNYOxWs1L+HL4RkXgCdhuRfSTGpysaJFWg3ygNTzcxEctMScVQQSoZ5a4dTSi84EU1UrsvSAbV2JnkgGl+JMtFLIFfJ1zOA==',key_name='tempest-keypair-1068411774',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='401c4eb4c3ea4ca886484161dcd637b6',ramdisk_id='',reservation_id='r-xxeybq0j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1842533919',owner_user_name='tempest-TestEncryptedCinderVolumes-1842533919-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:21:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='33395809f6bd4db1bf1ab3a67fdbc5d5',uuid=415ead6c-ffe0-4426-a145-1cb487cfa30f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41345e4a-b8a8-4ed2-80a8-e69289b0f8fb", "address": "fa:16:3e:4c:b5:d0", "network": {"id": "d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1780654330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "401c4eb4c3ea4ca886484161dcd637b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41345e4a-b8", "ovs_interfaceid": "41345e4a-b8a8-4ed2-80a8-e69289b0f8fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.803 254904 DEBUG nova.network.os_vif_util [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Converting VIF {"id": "41345e4a-b8a8-4ed2-80a8-e69289b0f8fb", "address": "fa:16:3e:4c:b5:d0", "network": {"id": "d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1780654330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "401c4eb4c3ea4ca886484161dcd637b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41345e4a-b8", "ovs_interfaceid": "41345e4a-b8a8-4ed2-80a8-e69289b0f8fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.804 254904 DEBUG nova.network.os_vif_util [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:b5:d0,bridge_name='br-int',has_traffic_filtering=True,id=41345e4a-b8a8-4ed2-80a8-e69289b0f8fb,network=Network(d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41345e4a-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.805 254904 DEBUG os_vif [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:b5:d0,bridge_name='br-int',has_traffic_filtering=True,id=41345e4a-b8a8-4ed2-80a8-e69289b0f8fb,network=Network(d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41345e4a-b8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.806 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.807 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.808 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.813 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.814 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41345e4a-b8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.815 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap41345e4a-b8, col_values=(('external_ids', {'iface-id': '41345e4a-b8a8-4ed2-80a8-e69289b0f8fb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4c:b5:d0', 'vm-uuid': '415ead6c-ffe0-4426-a145-1cb487cfa30f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.840 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:22 np0005542249 NetworkManager[48987]: <info>  [1764674482.8421] manager: (tap41345e4a-b8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.845 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.851 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.854 254904 INFO os_vif [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:b5:d0,bridge_name='br-int',has_traffic_filtering=True,id=41345e4a-b8a8-4ed2-80a8-e69289b0f8fb,network=Network(d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41345e4a-b8')#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.912 254904 DEBUG nova.virt.libvirt.driver [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.913 254904 DEBUG nova.virt.libvirt.driver [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.914 254904 DEBUG nova.virt.libvirt.driver [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] No VIF found with MAC fa:16:3e:4c:b5:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.915 254904 INFO nova.virt.libvirt.driver [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Using config drive#033[00m
Dec  2 06:21:22 np0005542249 nova_compute[254900]: 2025-12-02 11:21:22.946 254904 DEBUG nova.storage.rbd_utils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] rbd image 415ead6c-ffe0-4426-a145-1cb487cfa30f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:21:23 np0005542249 nova_compute[254900]: 2025-12-02 11:21:23.046 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:23 np0005542249 nova_compute[254900]: 2025-12-02 11:21:23.083 254904 DEBUG nova.network.neutron [req-c883d15b-f049-4c80-9c07-627843353cb2 req-7a7388e0-122f-428a-b13a-f38e22f8c78e 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Updated VIF entry in instance network info cache for port 41345e4a-b8a8-4ed2-80a8-e69289b0f8fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:21:23 np0005542249 nova_compute[254900]: 2025-12-02 11:21:23.084 254904 DEBUG nova.network.neutron [req-c883d15b-f049-4c80-9c07-627843353cb2 req-7a7388e0-122f-428a-b13a-f38e22f8c78e 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Updating instance_info_cache with network_info: [{"id": "41345e4a-b8a8-4ed2-80a8-e69289b0f8fb", "address": "fa:16:3e:4c:b5:d0", "network": {"id": "d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1780654330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "401c4eb4c3ea4ca886484161dcd637b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41345e4a-b8", "ovs_interfaceid": "41345e4a-b8a8-4ed2-80a8-e69289b0f8fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:21:23 np0005542249 nova_compute[254900]: 2025-12-02 11:21:23.099 254904 DEBUG oslo_concurrency.lockutils [req-c883d15b-f049-4c80-9c07-627843353cb2 req-7a7388e0-122f-428a-b13a-f38e22f8c78e 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-415ead6c-ffe0-4426-a145-1cb487cfa30f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:21:23 np0005542249 nova_compute[254900]: 2025-12-02 11:21:23.403 254904 INFO nova.virt.libvirt.driver [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Creating config drive at /var/lib/nova/instances/415ead6c-ffe0-4426-a145-1cb487cfa30f/disk.config#033[00m
Dec  2 06:21:23 np0005542249 nova_compute[254900]: 2025-12-02 11:21:23.413 254904 DEBUG oslo_concurrency.processutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/415ead6c-ffe0-4426-a145-1cb487cfa30f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0bmk2g2k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:21:23 np0005542249 nova_compute[254900]: 2025-12-02 11:21:23.560 254904 DEBUG oslo_concurrency.processutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/415ead6c-ffe0-4426-a145-1cb487cfa30f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0bmk2g2k" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:21:23 np0005542249 nova_compute[254900]: 2025-12-02 11:21:23.603 254904 DEBUG nova.storage.rbd_utils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] rbd image 415ead6c-ffe0-4426-a145-1cb487cfa30f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:21:23 np0005542249 nova_compute[254900]: 2025-12-02 11:21:23.610 254904 DEBUG oslo_concurrency.processutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/415ead6c-ffe0-4426-a145-1cb487cfa30f/disk.config 415ead6c-ffe0-4426-a145-1cb487cfa30f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:21:23 np0005542249 nova_compute[254900]: 2025-12-02 11:21:23.813 254904 DEBUG oslo_concurrency.processutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/415ead6c-ffe0-4426-a145-1cb487cfa30f/disk.config 415ead6c-ffe0-4426-a145-1cb487cfa30f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:21:23 np0005542249 nova_compute[254900]: 2025-12-02 11:21:23.814 254904 INFO nova.virt.libvirt.driver [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Deleting local config drive /var/lib/nova/instances/415ead6c-ffe0-4426-a145-1cb487cfa30f/disk.config because it was imported into RBD.#033[00m
Dec  2 06:21:23 np0005542249 NetworkManager[48987]: <info>  [1764674483.8718] manager: (tap41345e4a-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/61)
Dec  2 06:21:23 np0005542249 kernel: tap41345e4a-b8: entered promiscuous mode
Dec  2 06:21:23 np0005542249 ovn_controller[153849]: 2025-12-02T11:21:23Z|00093|binding|INFO|Claiming lport 41345e4a-b8a8-4ed2-80a8-e69289b0f8fb for this chassis.
Dec  2 06:21:23 np0005542249 ovn_controller[153849]: 2025-12-02T11:21:23Z|00094|binding|INFO|41345e4a-b8a8-4ed2-80a8-e69289b0f8fb: Claiming fa:16:3e:4c:b5:d0 10.100.0.8
Dec  2 06:21:23 np0005542249 nova_compute[254900]: 2025-12-02 11:21:23.879 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:23 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:23.885 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:b5:d0 10.100.0.8'], port_security=['fa:16:3e:4c:b5:d0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '415ead6c-ffe0-4426-a145-1cb487cfa30f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '401c4eb4c3ea4ca886484161dcd637b6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b4000e73-fb51-40ce-b6d7-d2ba9743ca07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df6d8bd2-f456-4c02-be9f-8241c1cd11ce, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=41345e4a-b8a8-4ed2-80a8-e69289b0f8fb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:21:23 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:23.887 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 41345e4a-b8a8-4ed2-80a8-e69289b0f8fb in datapath d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa bound to our chassis#033[00m
Dec  2 06:21:23 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:23.890 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa#033[00m
Dec  2 06:21:23 np0005542249 systemd-udevd[273874]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:21:23 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:23.909 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[44278b22-dbf9-4d0f-b58b-743fcdeeaf75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:23 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:23.910 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd5b1cd69-c1 in ovnmeta-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 06:21:23 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:23.913 262398 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd5b1cd69-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 06:21:23 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:23.914 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[0c3eed11-e5c4-41db-9fc4-e12f4556a7de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:23 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:23.916 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a5984f-4c9c-4258-af7a-00a5b1f01efd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:23 np0005542249 ovn_controller[153849]: 2025-12-02T11:21:23Z|00095|binding|INFO|Setting lport 41345e4a-b8a8-4ed2-80a8-e69289b0f8fb ovn-installed in OVS
Dec  2 06:21:23 np0005542249 ovn_controller[153849]: 2025-12-02T11:21:23Z|00096|binding|INFO|Setting lport 41345e4a-b8a8-4ed2-80a8-e69289b0f8fb up in Southbound
Dec  2 06:21:23 np0005542249 systemd-machined[216222]: New machine qemu-9-instance-00000009.
Dec  2 06:21:23 np0005542249 nova_compute[254900]: 2025-12-02 11:21:23.921 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:23 np0005542249 NetworkManager[48987]: <info>  [1764674483.9392] device (tap41345e4a-b8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:21:23 np0005542249 NetworkManager[48987]: <info>  [1764674483.9406] device (tap41345e4a-b8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:21:23 np0005542249 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Dec  2 06:21:23 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:23.942 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[20452933-0fe5-4ce8-b40a-cabe78c1fb64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1193: 321 pgs: 321 active+clean; 227 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.9 MiB/s wr, 250 op/s
Dec  2 06:21:23 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:23.982 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[50704f3e-cb65-4923-91c1-16fad058879b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:24.020 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[1a54ffa9-f900-43f8-b3f1-1d4053384d16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:24.027 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[30b1b80e-edb6-43e5-be58-d446decb41a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:24 np0005542249 NetworkManager[48987]: <info>  [1764674484.0294] manager: (tapd5b1cd69-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/62)
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:24.076 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[8525dd9a-48f3-4ebc-99db-fd8c64e7c8fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:24.079 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[c7dc4994-f86b-4708-a5a8-6d1fcc75300a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:24 np0005542249 NetworkManager[48987]: <info>  [1764674484.1036] device (tapd5b1cd69-c0): carrier: link connected
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:24.108 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[4eecda80-7305-489e-95cd-8da69ec8014f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:24.133 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[6345e573-f0c5-47fb-aee9-6841650e1995]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5b1cd69-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:16:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467692, 'reachable_time': 41624, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273907, 'error': None, 'target': 'ovnmeta-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:24.163 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[20f83ffc-b507-478a-a8ac-fce2588e3d3e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe78:16cb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467692, 'tstamp': 467692}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273908, 'error': None, 'target': 'ovnmeta-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:24.183 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[5a32197b-acf6-4f4f-a64c-32b4a2264f00]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5b1cd69-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:16:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467692, 'reachable_time': 41624, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273909, 'error': None, 'target': 'ovnmeta-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.234 254904 DEBUG nova.compute.manager [req-056bee9b-683f-4e77-882a-2adb3a490e6d req-a006cf2c-1190-444d-ae49-e79d4c5e4bcf 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Received event network-vif-plugged-41345e4a-b8a8-4ed2-80a8-e69289b0f8fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.236 254904 DEBUG oslo_concurrency.lockutils [req-056bee9b-683f-4e77-882a-2adb3a490e6d req-a006cf2c-1190-444d-ae49-e79d4c5e4bcf 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "415ead6c-ffe0-4426-a145-1cb487cfa30f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.240 254904 DEBUG oslo_concurrency.lockutils [req-056bee9b-683f-4e77-882a-2adb3a490e6d req-a006cf2c-1190-444d-ae49-e79d4c5e4bcf 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "415ead6c-ffe0-4426-a145-1cb487cfa30f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.241 254904 DEBUG oslo_concurrency.lockutils [req-056bee9b-683f-4e77-882a-2adb3a490e6d req-a006cf2c-1190-444d-ae49-e79d4c5e4bcf 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "415ead6c-ffe0-4426-a145-1cb487cfa30f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.241 254904 DEBUG nova.compute.manager [req-056bee9b-683f-4e77-882a-2adb3a490e6d req-a006cf2c-1190-444d-ae49-e79d4c5e4bcf 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Processing event network-vif-plugged-41345e4a-b8a8-4ed2-80a8-e69289b0f8fb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:24.250 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[59b79c71-3c5d-42d4-b7cd-ede761ca8551]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:24.332 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[2e3dced4-a158-4e9d-8550-8d7226c83557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:24.334 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5b1cd69-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:24.334 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:24.334 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5b1cd69-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.336 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:24 np0005542249 NetworkManager[48987]: <info>  [1764674484.3375] manager: (tapd5b1cd69-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Dec  2 06:21:24 np0005542249 kernel: tapd5b1cd69-c0: entered promiscuous mode
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:24.343 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd5b1cd69-c0, col_values=(('external_ids', {'iface-id': '0f800441-c122-44ad-a818-41a3e18ef470'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:21:24 np0005542249 ovn_controller[153849]: 2025-12-02T11:21:24Z|00097|binding|INFO|Releasing lport 0f800441-c122-44ad-a818-41a3e18ef470 from this chassis (sb_readonly=0)
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.345 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.369 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:24.371 163757 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:24.372 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[92fdacf7-7a79-4445-b7b3-650a3b436017]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:24.373 163757 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]: global
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]:    log         /dev/log local0 debug
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]:    log-tag     haproxy-metadata-proxy-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]:    user        root
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]:    group       root
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]:    maxconn     1024
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]:    pidfile     /var/lib/neutron/external/pids/d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa.pid.haproxy
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]:    daemon
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]: defaults
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]:    log global
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]:    mode http
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]:    option httplog
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]:    option dontlognull
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]:    option http-server-close
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]:    option forwardfor
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]:    retries                 3
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]:    timeout http-request    30s
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]:    timeout connect         30s
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]:    timeout client          32s
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]:    timeout server          32s
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]:    timeout http-keep-alive 30s
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]: listen listener
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]:    bind 169.254.169.254:80
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]:    http-request add-header X-OVN-Network-ID d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 06:21:24 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:24.373 163757 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa', 'env', 'PROCESS_TAG=haproxy-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.440 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674484.439264, 415ead6c-ffe0-4426-a145-1cb487cfa30f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.441 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] VM Started (Lifecycle Event)#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.445 254904 DEBUG nova.compute.manager [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.450 254904 DEBUG nova.virt.libvirt.driver [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.455 254904 INFO nova.virt.libvirt.driver [-] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Instance spawned successfully.#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.456 254904 DEBUG nova.virt.libvirt.driver [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.463 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.468 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.480 254904 DEBUG nova.virt.libvirt.driver [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.481 254904 DEBUG nova.virt.libvirt.driver [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.482 254904 DEBUG nova.virt.libvirt.driver [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.483 254904 DEBUG nova.virt.libvirt.driver [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.484 254904 DEBUG nova.virt.libvirt.driver [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.485 254904 DEBUG nova.virt.libvirt.driver [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.489 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.498 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674484.4395108, 415ead6c-ffe0-4426-a145-1cb487cfa30f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.501 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] VM Paused (Lifecycle Event)#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.529 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.540 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674484.4494705, 415ead6c-ffe0-4426-a145-1cb487cfa30f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.541 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] VM Resumed (Lifecycle Event)#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.549 254904 INFO nova.compute.manager [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Took 6.44 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.550 254904 DEBUG nova.compute.manager [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.587 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.596 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.623 254904 INFO nova.compute.manager [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Took 7.53 seconds to build instance.#033[00m
Dec  2 06:21:24 np0005542249 nova_compute[254900]: 2025-12-02 11:21:24.657 254904 DEBUG oslo_concurrency.lockutils [None req-1575ce30-e375-4396-a224-49bc1f3a53b4 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lock "415ead6c-ffe0-4426-a145-1cb487cfa30f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:21:24 np0005542249 podman[273983]: 2025-12-02 11:21:24.836111408 +0000 UTC m=+0.088602670 container create affd82d2e27ca8f17f1fa8604e76325c43d45e87a6303345d1603a4fe4b2ff92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:21:24 np0005542249 podman[273983]: 2025-12-02 11:21:24.781191714 +0000 UTC m=+0.033682996 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:21:24 np0005542249 systemd[1]: Started libpod-conmon-affd82d2e27ca8f17f1fa8604e76325c43d45e87a6303345d1603a4fe4b2ff92.scope.
Dec  2 06:21:24 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:21:24 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffd20b40bd8a416c1eaf800b6b4678cdbf6ba9cccc0990cb4367f684ac89b651/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 06:21:24 np0005542249 podman[273983]: 2025-12-02 11:21:24.960180337 +0000 UTC m=+0.212671579 container init affd82d2e27ca8f17f1fa8604e76325c43d45e87a6303345d1603a4fe4b2ff92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  2 06:21:24 np0005542249 podman[273983]: 2025-12-02 11:21:24.971049363 +0000 UTC m=+0.223540595 container start affd82d2e27ca8f17f1fa8604e76325c43d45e87a6303345d1603a4fe4b2ff92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  2 06:21:25 np0005542249 neutron-haproxy-ovnmeta-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa[273998]: [NOTICE]   (274002) : New worker (274004) forked
Dec  2 06:21:25 np0005542249 neutron-haproxy-ovnmeta-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa[273998]: [NOTICE]   (274002) : Loading success.
Dec  2 06:21:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1194: 321 pgs: 321 active+clean; 227 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.7 MiB/s wr, 145 op/s
Dec  2 06:21:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:21:26
Dec  2 06:21:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:21:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:21:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'images', 'vms', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', '.mgr', 'default.rgw.log']
Dec  2 06:21:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:21:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:21:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:21:26 np0005542249 nova_compute[254900]: 2025-12-02 11:21:26.355 254904 DEBUG nova.compute.manager [req-3569c347-e372-448f-a9ea-8df51db9f68e req-db558aab-69ca-41b5-883f-73a348e1cf36 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Received event network-vif-plugged-41345e4a-b8a8-4ed2-80a8-e69289b0f8fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:21:26 np0005542249 nova_compute[254900]: 2025-12-02 11:21:26.355 254904 DEBUG oslo_concurrency.lockutils [req-3569c347-e372-448f-a9ea-8df51db9f68e req-db558aab-69ca-41b5-883f-73a348e1cf36 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "415ead6c-ffe0-4426-a145-1cb487cfa30f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:21:26 np0005542249 nova_compute[254900]: 2025-12-02 11:21:26.356 254904 DEBUG oslo_concurrency.lockutils [req-3569c347-e372-448f-a9ea-8df51db9f68e req-db558aab-69ca-41b5-883f-73a348e1cf36 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "415ead6c-ffe0-4426-a145-1cb487cfa30f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:21:26 np0005542249 nova_compute[254900]: 2025-12-02 11:21:26.357 254904 DEBUG oslo_concurrency.lockutils [req-3569c347-e372-448f-a9ea-8df51db9f68e req-db558aab-69ca-41b5-883f-73a348e1cf36 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "415ead6c-ffe0-4426-a145-1cb487cfa30f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:21:26 np0005542249 nova_compute[254900]: 2025-12-02 11:21:26.358 254904 DEBUG nova.compute.manager [req-3569c347-e372-448f-a9ea-8df51db9f68e req-db558aab-69ca-41b5-883f-73a348e1cf36 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] No waiting events found dispatching network-vif-plugged-41345e4a-b8a8-4ed2-80a8-e69289b0f8fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:21:26 np0005542249 nova_compute[254900]: 2025-12-02 11:21:26.359 254904 WARNING nova.compute.manager [req-3569c347-e372-448f-a9ea-8df51db9f68e req-db558aab-69ca-41b5-883f-73a348e1cf36 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Received unexpected event network-vif-plugged-41345e4a-b8a8-4ed2-80a8-e69289b0f8fb for instance with vm_state active and task_state None.#033[00m
Dec  2 06:21:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:21:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Dec  2 06:21:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Dec  2 06:21:26 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Dec  2 06:21:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:21:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:21:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:21:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:21:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:21:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:21:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:21:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:21:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:21:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:21:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:21:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:21:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:21:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:21:26 np0005542249 ovn_controller[153849]: 2025-12-02T11:21:26Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4e:4e:d3 10.100.0.12
Dec  2 06:21:26 np0005542249 ovn_controller[153849]: 2025-12-02T11:21:26Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4e:4e:d3 10.100.0.12
Dec  2 06:21:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:21:27 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/178649735' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:21:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:21:27 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/178649735' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:21:27 np0005542249 nova_compute[254900]: 2025-12-02 11:21:27.473 254904 DEBUG nova.compute.manager [req-fad7977a-0ec9-423b-9769-11c3bdecd111 req-c240efc5-3d2e-4e6a-b0b8-b758d4d0caf4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Received event network-changed-41345e4a-b8a8-4ed2-80a8-e69289b0f8fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:21:27 np0005542249 nova_compute[254900]: 2025-12-02 11:21:27.474 254904 DEBUG nova.compute.manager [req-fad7977a-0ec9-423b-9769-11c3bdecd111 req-c240efc5-3d2e-4e6a-b0b8-b758d4d0caf4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Refreshing instance network info cache due to event network-changed-41345e4a-b8a8-4ed2-80a8-e69289b0f8fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:21:27 np0005542249 nova_compute[254900]: 2025-12-02 11:21:27.474 254904 DEBUG oslo_concurrency.lockutils [req-fad7977a-0ec9-423b-9769-11c3bdecd111 req-c240efc5-3d2e-4e6a-b0b8-b758d4d0caf4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-415ead6c-ffe0-4426-a145-1cb487cfa30f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:21:27 np0005542249 nova_compute[254900]: 2025-12-02 11:21:27.475 254904 DEBUG oslo_concurrency.lockutils [req-fad7977a-0ec9-423b-9769-11c3bdecd111 req-c240efc5-3d2e-4e6a-b0b8-b758d4d0caf4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-415ead6c-ffe0-4426-a145-1cb487cfa30f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:21:27 np0005542249 nova_compute[254900]: 2025-12-02 11:21:27.475 254904 DEBUG nova.network.neutron [req-fad7977a-0ec9-423b-9769-11c3bdecd111 req-c240efc5-3d2e-4e6a-b0b8-b758d4d0caf4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Refreshing network info cache for port 41345e4a-b8a8-4ed2-80a8-e69289b0f8fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:21:27 np0005542249 nova_compute[254900]: 2025-12-02 11:21:27.842 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1196: 321 pgs: 321 active+clean; 253 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.0 MiB/s wr, 277 op/s
Dec  2 06:21:28 np0005542249 nova_compute[254900]: 2025-12-02 11:21:28.048 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:28 np0005542249 nova_compute[254900]: 2025-12-02 11:21:28.856 254904 DEBUG nova.network.neutron [req-fad7977a-0ec9-423b-9769-11c3bdecd111 req-c240efc5-3d2e-4e6a-b0b8-b758d4d0caf4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Updated VIF entry in instance network info cache for port 41345e4a-b8a8-4ed2-80a8-e69289b0f8fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:21:28 np0005542249 nova_compute[254900]: 2025-12-02 11:21:28.857 254904 DEBUG nova.network.neutron [req-fad7977a-0ec9-423b-9769-11c3bdecd111 req-c240efc5-3d2e-4e6a-b0b8-b758d4d0caf4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Updating instance_info_cache with network_info: [{"id": "41345e4a-b8a8-4ed2-80a8-e69289b0f8fb", "address": "fa:16:3e:4c:b5:d0", "network": {"id": "d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1780654330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "401c4eb4c3ea4ca886484161dcd637b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41345e4a-b8", "ovs_interfaceid": "41345e4a-b8a8-4ed2-80a8-e69289b0f8fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:21:28 np0005542249 nova_compute[254900]: 2025-12-02 11:21:28.878 254904 DEBUG oslo_concurrency.lockutils [req-fad7977a-0ec9-423b-9769-11c3bdecd111 req-c240efc5-3d2e-4e6a-b0b8-b758d4d0caf4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-415ead6c-ffe0-4426-a145-1cb487cfa30f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:21:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1197: 321 pgs: 321 active+clean; 260 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.9 MiB/s wr, 257 op/s
Dec  2 06:21:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:21:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1198: 321 pgs: 321 active+clean; 260 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.5 MiB/s wr, 221 op/s
Dec  2 06:21:32 np0005542249 podman[274013]: 2025-12-02 11:21:32.543912663 +0000 UTC m=+0.106340805 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:21:32 np0005542249 nova_compute[254900]: 2025-12-02 11:21:32.846 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:33 np0005542249 nova_compute[254900]: 2025-12-02 11:21:33.048 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:33 np0005542249 nova_compute[254900]: 2025-12-02 11:21:33.939 254904 DEBUG oslo_concurrency.lockutils [None req-dc753381-fc4e-4605-b94b-3b3c5e089087 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Acquiring lock "5771f44f-b324-4d11-b452-a2c22e990c48" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:21:33 np0005542249 nova_compute[254900]: 2025-12-02 11:21:33.940 254904 DEBUG oslo_concurrency.lockutils [None req-dc753381-fc4e-4605-b94b-3b3c5e089087 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "5771f44f-b324-4d11-b452-a2c22e990c48" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:21:33 np0005542249 nova_compute[254900]: 2025-12-02 11:21:33.956 254904 DEBUG nova.objects.instance [None req-dc753381-fc4e-4605-b94b-3b3c5e089087 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lazy-loading 'flavor' on Instance uuid 5771f44f-b324-4d11-b452-a2c22e990c48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:21:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1199: 321 pgs: 321 active+clean; 260 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 182 op/s
Dec  2 06:21:33 np0005542249 nova_compute[254900]: 2025-12-02 11:21:33.976 254904 INFO nova.virt.libvirt.driver [None req-dc753381-fc4e-4605-b94b-3b3c5e089087 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Ignoring supplied device name: /dev/vdb#033[00m
Dec  2 06:21:33 np0005542249 nova_compute[254900]: 2025-12-02 11:21:33.992 254904 DEBUG oslo_concurrency.lockutils [None req-dc753381-fc4e-4605-b94b-3b3c5e089087 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "5771f44f-b324-4d11-b452-a2c22e990c48" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:21:34 np0005542249 nova_compute[254900]: 2025-12-02 11:21:34.212 254904 DEBUG oslo_concurrency.lockutils [None req-dc753381-fc4e-4605-b94b-3b3c5e089087 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Acquiring lock "5771f44f-b324-4d11-b452-a2c22e990c48" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:21:34 np0005542249 nova_compute[254900]: 2025-12-02 11:21:34.213 254904 DEBUG oslo_concurrency.lockutils [None req-dc753381-fc4e-4605-b94b-3b3c5e089087 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "5771f44f-b324-4d11-b452-a2c22e990c48" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:21:34 np0005542249 nova_compute[254900]: 2025-12-02 11:21:34.214 254904 INFO nova.compute.manager [None req-dc753381-fc4e-4605-b94b-3b3c5e089087 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Attaching volume 7daea6dc-a881-41a7-8b48-8e2c11598a6e to /dev/vdb#033[00m
Dec  2 06:21:34 np0005542249 nova_compute[254900]: 2025-12-02 11:21:34.360 254904 DEBUG os_brick.utils [None req-dc753381-fc4e-4605-b94b-3b3c5e089087 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:21:34 np0005542249 nova_compute[254900]: 2025-12-02 11:21:34.361 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:21:34 np0005542249 nova_compute[254900]: 2025-12-02 11:21:34.384 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:21:34 np0005542249 nova_compute[254900]: 2025-12-02 11:21:34.384 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[e7f12ad1-5cd9-4b42-9748-3a11e917af1c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:34 np0005542249 nova_compute[254900]: 2025-12-02 11:21:34.386 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:21:34 np0005542249 nova_compute[254900]: 2025-12-02 11:21:34.401 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:21:34 np0005542249 nova_compute[254900]: 2025-12-02 11:21:34.401 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e6f4ce-4801-4e44-a971-63338ef83563]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:34 np0005542249 nova_compute[254900]: 2025-12-02 11:21:34.403 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:21:34 np0005542249 nova_compute[254900]: 2025-12-02 11:21:34.417 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:21:34 np0005542249 nova_compute[254900]: 2025-12-02 11:21:34.417 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[37421b21-6ffd-4b2b-9e77-fd4c07eca546]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:34 np0005542249 nova_compute[254900]: 2025-12-02 11:21:34.419 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[0dd31a69-fdfe-4868-8545-552ee9222c16]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:34 np0005542249 nova_compute[254900]: 2025-12-02 11:21:34.420 254904 DEBUG oslo_concurrency.processutils [None req-dc753381-fc4e-4605-b94b-3b3c5e089087 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:21:34 np0005542249 nova_compute[254900]: 2025-12-02 11:21:34.446 254904 DEBUG oslo_concurrency.processutils [None req-dc753381-fc4e-4605-b94b-3b3c5e089087 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:21:34 np0005542249 nova_compute[254900]: 2025-12-02 11:21:34.451 254904 DEBUG os_brick.initiator.connectors.lightos [None req-dc753381-fc4e-4605-b94b-3b3c5e089087 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:21:34 np0005542249 nova_compute[254900]: 2025-12-02 11:21:34.452 254904 DEBUG os_brick.initiator.connectors.lightos [None req-dc753381-fc4e-4605-b94b-3b3c5e089087 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:21:34 np0005542249 nova_compute[254900]: 2025-12-02 11:21:34.452 254904 DEBUG os_brick.initiator.connectors.lightos [None req-dc753381-fc4e-4605-b94b-3b3c5e089087 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:21:34 np0005542249 nova_compute[254900]: 2025-12-02 11:21:34.453 254904 DEBUG os_brick.utils [None req-dc753381-fc4e-4605-b94b-3b3c5e089087 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] <== get_connector_properties: return (92ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:21:34 np0005542249 nova_compute[254900]: 2025-12-02 11:21:34.454 254904 DEBUG nova.virt.block_device [None req-dc753381-fc4e-4605-b94b-3b3c5e089087 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Updating existing volume attachment record: eb500d95-67ac-482d-80d1-24cc277f71b2 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:21:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:21:35 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1600145777' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:21:35 np0005542249 nova_compute[254900]: 2025-12-02 11:21:35.165 254904 DEBUG nova.objects.instance [None req-dc753381-fc4e-4605-b94b-3b3c5e089087 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lazy-loading 'flavor' on Instance uuid 5771f44f-b324-4d11-b452-a2c22e990c48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:21:35 np0005542249 nova_compute[254900]: 2025-12-02 11:21:35.198 254904 DEBUG nova.virt.libvirt.driver [None req-dc753381-fc4e-4605-b94b-3b3c5e089087 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Attempting to attach volume 7daea6dc-a881-41a7-8b48-8e2c11598a6e with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Dec  2 06:21:35 np0005542249 nova_compute[254900]: 2025-12-02 11:21:35.202 254904 DEBUG nova.virt.libvirt.guest [None req-dc753381-fc4e-4605-b94b-3b3c5e089087 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] attach device xml: <disk type="network" device="disk">
Dec  2 06:21:35 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:21:35 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-7daea6dc-a881-41a7-8b48-8e2c11598a6e">
Dec  2 06:21:35 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:21:35 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:21:35 np0005542249 nova_compute[254900]:  <auth username="openstack">
Dec  2 06:21:35 np0005542249 nova_compute[254900]:    <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:21:35 np0005542249 nova_compute[254900]:  </auth>
Dec  2 06:21:35 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:21:35 np0005542249 nova_compute[254900]:  <serial>7daea6dc-a881-41a7-8b48-8e2c11598a6e</serial>
Dec  2 06:21:35 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:21:35 np0005542249 nova_compute[254900]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  2 06:21:35 np0005542249 nova_compute[254900]: 2025-12-02 11:21:35.364 254904 DEBUG nova.virt.libvirt.driver [None req-dc753381-fc4e-4605-b94b-3b3c5e089087 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:21:35 np0005542249 nova_compute[254900]: 2025-12-02 11:21:35.365 254904 DEBUG nova.virt.libvirt.driver [None req-dc753381-fc4e-4605-b94b-3b3c5e089087 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:21:35 np0005542249 nova_compute[254900]: 2025-12-02 11:21:35.366 254904 DEBUG nova.virt.libvirt.driver [None req-dc753381-fc4e-4605-b94b-3b3c5e089087 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:21:35 np0005542249 nova_compute[254900]: 2025-12-02 11:21:35.366 254904 DEBUG nova.virt.libvirt.driver [None req-dc753381-fc4e-4605-b94b-3b3c5e089087 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] No VIF found with MAC fa:16:3e:4e:4e:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:21:35 np0005542249 nova_compute[254900]: 2025-12-02 11:21:35.591 254904 DEBUG oslo_concurrency.lockutils [None req-dc753381-fc4e-4605-b94b-3b3c5e089087 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "5771f44f-b324-4d11-b452-a2c22e990c48" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.377s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:21:35 np0005542249 nova_compute[254900]: 2025-12-02 11:21:35.721 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:21:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1200: 321 pgs: 321 active+clean; 260 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 180 op/s
Dec  2 06:21:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:21:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:21:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:21:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001105905999706974 of space, bias 1.0, pg target 0.3317717999120922 quantized to 32 (current 32)
Dec  2 06:21:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:21:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0006968905670052745 of space, bias 1.0, pg target 0.20906717010158235 quantized to 32 (current 32)
Dec  2 06:21:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:21:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  2 06:21:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:21:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  2 06:21:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:21:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:21:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:21:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:21:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:21:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:21:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:21:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:21:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:21:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:21:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:21:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:21:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:21:36 np0005542249 nova_compute[254900]: 2025-12-02 11:21:36.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:21:36 np0005542249 nova_compute[254900]: 2025-12-02 11:21:36.382 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 06:21:36 np0005542249 nova_compute[254900]: 2025-12-02 11:21:36.383 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 06:21:36 np0005542249 nova_compute[254900]: 2025-12-02 11:21:36.684 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "refresh_cache-5771f44f-b324-4d11-b452-a2c22e990c48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:21:36 np0005542249 nova_compute[254900]: 2025-12-02 11:21:36.685 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquired lock "refresh_cache-5771f44f-b324-4d11-b452-a2c22e990c48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:21:36 np0005542249 nova_compute[254900]: 2025-12-02 11:21:36.686 254904 DEBUG nova.network.neutron [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 06:21:36 np0005542249 nova_compute[254900]: 2025-12-02 11:21:36.686 254904 DEBUG nova.objects.instance [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5771f44f-b324-4d11-b452-a2c22e990c48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:21:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Dec  2 06:21:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Dec  2 06:21:37 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Dec  2 06:21:37 np0005542249 nova_compute[254900]: 2025-12-02 11:21:37.849 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1202: 321 pgs: 321 active+clean; 260 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 685 KiB/s wr, 35 op/s
Dec  2 06:21:38 np0005542249 nova_compute[254900]: 2025-12-02 11:21:38.051 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:38 np0005542249 nova_compute[254900]: 2025-12-02 11:21:38.582 254904 DEBUG nova.network.neutron [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Updating instance_info_cache with network_info: [{"id": "f393e1c3-9bd9-4748-a528-b99dd193646c", "address": "fa:16:3e:4e:4e:d3", "network": {"id": "df73b9ab-de8d-40fa-9bf0-aa773bb32d3a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-950743948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff78a31f26746918caf04706b12b741", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf393e1c3-9b", "ovs_interfaceid": "f393e1c3-9bd9-4748-a528-b99dd193646c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:21:38 np0005542249 nova_compute[254900]: 2025-12-02 11:21:38.597 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Releasing lock "refresh_cache-5771f44f-b324-4d11-b452-a2c22e990c48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:21:38 np0005542249 nova_compute[254900]: 2025-12-02 11:21:38.597 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 06:21:38 np0005542249 nova_compute[254900]: 2025-12-02 11:21:38.598 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:21:38 np0005542249 nova_compute[254900]: 2025-12-02 11:21:38.599 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:21:38 np0005542249 nova_compute[254900]: 2025-12-02 11:21:38.599 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:21:38 np0005542249 nova_compute[254900]: 2025-12-02 11:21:38.599 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 06:21:38 np0005542249 nova_compute[254900]: 2025-12-02 11:21:38.830 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Dec  2 06:21:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Dec  2 06:21:39 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Dec  2 06:21:39 np0005542249 nova_compute[254900]: 2025-12-02 11:21:39.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:21:39 np0005542249 nova_compute[254900]: 2025-12-02 11:21:39.383 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:21:39 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1204: 321 pgs: 321 active+clean; 265 MiB data, 362 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 644 KiB/s wr, 18 op/s
Dec  2 06:21:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Dec  2 06:21:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Dec  2 06:21:40 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Dec  2 06:21:40 np0005542249 nova_compute[254900]: 2025-12-02 11:21:40.377 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:21:40 np0005542249 nova_compute[254900]: 2025-12-02 11:21:40.403 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:21:40 np0005542249 ovn_controller[153849]: 2025-12-02T11:21:40Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4c:b5:d0 10.100.0.8
Dec  2 06:21:40 np0005542249 ovn_controller[153849]: 2025-12-02T11:21:40Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4c:b5:d0 10.100.0.8
Dec  2 06:21:41 np0005542249 podman[274062]: 2025-12-02 11:21:41.002892985 +0000 UTC m=+0.080355022 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec  2 06:21:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:21:41 np0005542249 nova_compute[254900]: 2025-12-02 11:21:41.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:21:41 np0005542249 nova_compute[254900]: 2025-12-02 11:21:41.414 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:21:41 np0005542249 nova_compute[254900]: 2025-12-02 11:21:41.415 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:21:41 np0005542249 nova_compute[254900]: 2025-12-02 11:21:41.415 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:21:41 np0005542249 nova_compute[254900]: 2025-12-02 11:21:41.415 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:21:41 np0005542249 nova_compute[254900]: 2025-12-02 11:21:41.415 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:21:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:21:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2548900219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:21:41 np0005542249 nova_compute[254900]: 2025-12-02 11:21:41.871 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:21:41 np0005542249 nova_compute[254900]: 2025-12-02 11:21:41.961 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:21:41 np0005542249 nova_compute[254900]: 2025-12-02 11:21:41.962 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:21:41 np0005542249 nova_compute[254900]: 2025-12-02 11:21:41.962 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:21:41 np0005542249 nova_compute[254900]: 2025-12-02 11:21:41.967 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:21:41 np0005542249 nova_compute[254900]: 2025-12-02 11:21:41.968 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:21:41 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1206: 321 pgs: 321 active+clean; 276 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 453 KiB/s rd, 2.8 MiB/s wr, 107 op/s
Dec  2 06:21:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Dec  2 06:21:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Dec  2 06:21:42 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Dec  2 06:21:42 np0005542249 nova_compute[254900]: 2025-12-02 11:21:42.174 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:21:42 np0005542249 nova_compute[254900]: 2025-12-02 11:21:42.175 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4242MB free_disk=59.917030334472656GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:21:42 np0005542249 nova_compute[254900]: 2025-12-02 11:21:42.175 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:21:42 np0005542249 nova_compute[254900]: 2025-12-02 11:21:42.176 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:21:42 np0005542249 nova_compute[254900]: 2025-12-02 11:21:42.292 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Instance 5771f44f-b324-4d11-b452-a2c22e990c48 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 06:21:42 np0005542249 nova_compute[254900]: 2025-12-02 11:21:42.292 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Instance 415ead6c-ffe0-4426-a145-1cb487cfa30f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 06:21:42 np0005542249 nova_compute[254900]: 2025-12-02 11:21:42.293 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:21:42 np0005542249 nova_compute[254900]: 2025-12-02 11:21:42.293 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:21:42 np0005542249 nova_compute[254900]: 2025-12-02 11:21:42.313 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Refreshing inventories for resource provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  2 06:21:42 np0005542249 nova_compute[254900]: 2025-12-02 11:21:42.333 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Updating ProviderTree inventory for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  2 06:21:42 np0005542249 nova_compute[254900]: 2025-12-02 11:21:42.334 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Updating inventory in ProviderTree for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  2 06:21:42 np0005542249 nova_compute[254900]: 2025-12-02 11:21:42.357 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Refreshing aggregate associations for resource provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  2 06:21:42 np0005542249 nova_compute[254900]: 2025-12-02 11:21:42.387 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Refreshing trait associations for resource provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062, traits: HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SVM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_MMX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,HW_CPU_X86_CLMUL,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE41,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  2 06:21:42 np0005542249 nova_compute[254900]: 2025-12-02 11:21:42.485 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:21:42 np0005542249 nova_compute[254900]: 2025-12-02 11:21:42.893 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:21:42 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1939372416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:21:42 np0005542249 nova_compute[254900]: 2025-12-02 11:21:42.949 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:21:42 np0005542249 nova_compute[254900]: 2025-12-02 11:21:42.955 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:21:42 np0005542249 nova_compute[254900]: 2025-12-02 11:21:42.978 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:21:43 np0005542249 nova_compute[254900]: 2025-12-02 11:21:43.010 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 06:21:43 np0005542249 nova_compute[254900]: 2025-12-02 11:21:43.010 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:21:43 np0005542249 nova_compute[254900]: 2025-12-02 11:21:43.053 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Dec  2 06:21:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Dec  2 06:21:43 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Dec  2 06:21:43 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1209: 321 pgs: 321 active+clean; 288 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 859 KiB/s rd, 4.2 MiB/s wr, 229 op/s
Dec  2 06:21:43 np0005542249 nova_compute[254900]: 2025-12-02 11:21:43.995 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:44 np0005542249 nova_compute[254900]: 2025-12-02 11:21:44.010 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:21:44 np0005542249 podman[274133]: 2025-12-02 11:21:44.011851856 +0000 UTC m=+0.070254486 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec  2 06:21:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Dec  2 06:21:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Dec  2 06:21:44 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Dec  2 06:21:45 np0005542249 nova_compute[254900]: 2025-12-02 11:21:45.597 254904 DEBUG oslo_concurrency.lockutils [None req-cc3dd862-1dce-48df-babc-ce8497c3911a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Acquiring lock "5771f44f-b324-4d11-b452-a2c22e990c48" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:21:45 np0005542249 nova_compute[254900]: 2025-12-02 11:21:45.598 254904 DEBUG oslo_concurrency.lockutils [None req-cc3dd862-1dce-48df-babc-ce8497c3911a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "5771f44f-b324-4d11-b452-a2c22e990c48" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:21:45 np0005542249 nova_compute[254900]: 2025-12-02 11:21:45.613 254904 INFO nova.compute.manager [None req-cc3dd862-1dce-48df-babc-ce8497c3911a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Detaching volume 7daea6dc-a881-41a7-8b48-8e2c11598a6e#033[00m
Dec  2 06:21:45 np0005542249 nova_compute[254900]: 2025-12-02 11:21:45.791 254904 DEBUG oslo_concurrency.lockutils [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Acquiring lock "5771f44f-b324-4d11-b452-a2c22e990c48" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:21:45 np0005542249 nova_compute[254900]: 2025-12-02 11:21:45.810 254904 INFO nova.virt.block_device [None req-cc3dd862-1dce-48df-babc-ce8497c3911a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Attempting to driver detach volume 7daea6dc-a881-41a7-8b48-8e2c11598a6e from mountpoint /dev/vdb#033[00m
Dec  2 06:21:45 np0005542249 nova_compute[254900]: 2025-12-02 11:21:45.823 254904 DEBUG nova.virt.libvirt.driver [None req-cc3dd862-1dce-48df-babc-ce8497c3911a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Attempting to detach device vdb from instance 5771f44f-b324-4d11-b452-a2c22e990c48 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  2 06:21:45 np0005542249 nova_compute[254900]: 2025-12-02 11:21:45.825 254904 DEBUG nova.virt.libvirt.guest [None req-cc3dd862-1dce-48df-babc-ce8497c3911a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] detach device xml: <disk type="network" device="disk">
Dec  2 06:21:45 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:21:45 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-7daea6dc-a881-41a7-8b48-8e2c11598a6e">
Dec  2 06:21:45 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:21:45 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:21:45 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:21:45 np0005542249 nova_compute[254900]:  <serial>7daea6dc-a881-41a7-8b48-8e2c11598a6e</serial>
Dec  2 06:21:45 np0005542249 nova_compute[254900]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  2 06:21:45 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:21:45 np0005542249 nova_compute[254900]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  2 06:21:45 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1211: 321 pgs: 321 active+clean; 293 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 719 KiB/s rd, 3.5 MiB/s wr, 196 op/s
Dec  2 06:21:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:21:46 np0005542249 nova_compute[254900]: 2025-12-02 11:21:46.841 254904 INFO nova.virt.libvirt.driver [None req-cc3dd862-1dce-48df-babc-ce8497c3911a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Successfully detached device vdb from instance 5771f44f-b324-4d11-b452-a2c22e990c48 from the persistent domain config.#033[00m
Dec  2 06:21:46 np0005542249 nova_compute[254900]: 2025-12-02 11:21:46.842 254904 DEBUG nova.virt.libvirt.driver [None req-cc3dd862-1dce-48df-babc-ce8497c3911a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 5771f44f-b324-4d11-b452-a2c22e990c48 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec  2 06:21:46 np0005542249 nova_compute[254900]: 2025-12-02 11:21:46.843 254904 DEBUG nova.virt.libvirt.guest [None req-cc3dd862-1dce-48df-babc-ce8497c3911a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] detach device xml: <disk type="network" device="disk">
Dec  2 06:21:46 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:21:46 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-7daea6dc-a881-41a7-8b48-8e2c11598a6e">
Dec  2 06:21:46 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:21:46 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:21:46 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:21:46 np0005542249 nova_compute[254900]:  <serial>7daea6dc-a881-41a7-8b48-8e2c11598a6e</serial>
Dec  2 06:21:46 np0005542249 nova_compute[254900]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  2 06:21:46 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:21:46 np0005542249 nova_compute[254900]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  2 06:21:46 np0005542249 nova_compute[254900]: 2025-12-02 11:21:46.967 254904 DEBUG nova.virt.libvirt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Received event <DeviceRemovedEvent: 1764674506.9670842, 5771f44f-b324-4d11-b452-a2c22e990c48 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec  2 06:21:46 np0005542249 nova_compute[254900]: 2025-12-02 11:21:46.970 254904 DEBUG nova.virt.libvirt.driver [None req-cc3dd862-1dce-48df-babc-ce8497c3911a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 5771f44f-b324-4d11-b452-a2c22e990c48 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec  2 06:21:46 np0005542249 nova_compute[254900]: 2025-12-02 11:21:46.973 254904 INFO nova.virt.libvirt.driver [None req-cc3dd862-1dce-48df-babc-ce8497c3911a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Successfully detached device vdb from instance 5771f44f-b324-4d11-b452-a2c22e990c48 from the live domain config.#033[00m
Dec  2 06:21:47 np0005542249 nova_compute[254900]: 2025-12-02 11:21:47.211 254904 DEBUG nova.objects.instance [None req-cc3dd862-1dce-48df-babc-ce8497c3911a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lazy-loading 'flavor' on Instance uuid 5771f44f-b324-4d11-b452-a2c22e990c48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:21:47 np0005542249 nova_compute[254900]: 2025-12-02 11:21:47.260 254904 DEBUG oslo_concurrency.lockutils [None req-cc3dd862-1dce-48df-babc-ce8497c3911a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "5771f44f-b324-4d11-b452-a2c22e990c48" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:21:47 np0005542249 nova_compute[254900]: 2025-12-02 11:21:47.261 254904 DEBUG oslo_concurrency.lockutils [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "5771f44f-b324-4d11-b452-a2c22e990c48" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 1.470s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:21:47 np0005542249 nova_compute[254900]: 2025-12-02 11:21:47.262 254904 DEBUG oslo_concurrency.lockutils [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Acquiring lock "5771f44f-b324-4d11-b452-a2c22e990c48-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:21:47 np0005542249 nova_compute[254900]: 2025-12-02 11:21:47.262 254904 DEBUG oslo_concurrency.lockutils [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "5771f44f-b324-4d11-b452-a2c22e990c48-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:21:47 np0005542249 nova_compute[254900]: 2025-12-02 11:21:47.262 254904 DEBUG oslo_concurrency.lockutils [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "5771f44f-b324-4d11-b452-a2c22e990c48-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:21:47 np0005542249 nova_compute[254900]: 2025-12-02 11:21:47.264 254904 INFO nova.compute.manager [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Terminating instance#033[00m
Dec  2 06:21:47 np0005542249 nova_compute[254900]: 2025-12-02 11:21:47.266 254904 DEBUG nova.compute.manager [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:21:47 np0005542249 kernel: tapf393e1c3-9b (unregistering): left promiscuous mode
Dec  2 06:21:47 np0005542249 NetworkManager[48987]: <info>  [1764674507.3287] device (tapf393e1c3-9b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:21:47 np0005542249 ovn_controller[153849]: 2025-12-02T11:21:47Z|00098|binding|INFO|Releasing lport f393e1c3-9bd9-4748-a528-b99dd193646c from this chassis (sb_readonly=0)
Dec  2 06:21:47 np0005542249 ovn_controller[153849]: 2025-12-02T11:21:47Z|00099|binding|INFO|Setting lport f393e1c3-9bd9-4748-a528-b99dd193646c down in Southbound
Dec  2 06:21:47 np0005542249 ovn_controller[153849]: 2025-12-02T11:21:47Z|00100|binding|INFO|Removing iface tapf393e1c3-9b ovn-installed in OVS
Dec  2 06:21:47 np0005542249 nova_compute[254900]: 2025-12-02 11:21:47.345 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:47 np0005542249 nova_compute[254900]: 2025-12-02 11:21:47.349 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:47 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:47.355 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:4e:d3 10.100.0.12'], port_security=['fa:16:3e:4e:4e:d3 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5771f44f-b324-4d11-b452-a2c22e990c48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fff78a31f26746918caf04706b12b741', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd6ee0e59-cb51-4b63-b81c-fc532195e47d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.228'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff54e9e6-62e9-4528-92f5-a3bb97a08852, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=f393e1c3-9bd9-4748-a528-b99dd193646c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:21:47 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:47.357 163757 INFO neutron.agent.ovn.metadata.agent [-] Port f393e1c3-9bd9-4748-a528-b99dd193646c in datapath df73b9ab-de8d-40fa-9bf0-aa773bb32d3a unbound from our chassis#033[00m
Dec  2 06:21:47 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:47.358 163757 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network df73b9ab-de8d-40fa-9bf0-aa773bb32d3a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 06:21:47 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:47.360 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[9abcb15a-44f7-4cbe-80db-91dee11da7db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:47 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:47.360 163757 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a namespace which is not needed anymore#033[00m
Dec  2 06:21:47 np0005542249 nova_compute[254900]: 2025-12-02 11:21:47.381 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:47 np0005542249 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Dec  2 06:21:47 np0005542249 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 14.612s CPU time.
Dec  2 06:21:47 np0005542249 systemd-machined[216222]: Machine qemu-8-instance-00000008 terminated.
Dec  2 06:21:47 np0005542249 nova_compute[254900]: 2025-12-02 11:21:47.508 254904 INFO nova.virt.libvirt.driver [-] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Instance destroyed successfully.#033[00m
Dec  2 06:21:47 np0005542249 nova_compute[254900]: 2025-12-02 11:21:47.509 254904 DEBUG nova.objects.instance [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lazy-loading 'resources' on Instance uuid 5771f44f-b324-4d11-b452-a2c22e990c48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:21:47 np0005542249 nova_compute[254900]: 2025-12-02 11:21:47.525 254904 DEBUG nova.virt.libvirt.vif [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-465048395',display_name='tempest-VolumesSnapshotTestJSON-instance-465048395',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-465048395',id=8,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPU7qPrCI8kbBPSW9jQCNKEm8UHqJFIzgbHmvMHzii2gZdSdpoJnxyFr3zMkvJqvqJLI4PHcvcTY2SVNDguBmTrW9hzDpT7tGfwq75CHGE67RRM5E2fn/JKC+AdldivSGQ==',key_name='tempest-keypair-5305996',keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:21:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fff78a31f26746918caf04706b12b741',ramdisk_id='',reservation_id='r-un4zyblu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesSnapshotTestJSON-1610940554',owner_user_name='tempest-VolumesSnapshotTestJSON-1610940554-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:21:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e0796090ff07418b99397a7f13f11633',uuid=5771f44f-b324-4d11-b452-a2c22e990c48,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f393e1c3-9bd9-4748-a528-b99dd193646c", "address": "fa:16:3e:4e:4e:d3", "network": {"id": "df73b9ab-de8d-40fa-9bf0-aa773bb32d3a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-950743948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff78a31f26746918caf04706b12b741", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf393e1c3-9b", "ovs_interfaceid": "f393e1c3-9bd9-4748-a528-b99dd193646c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:21:47 np0005542249 nova_compute[254900]: 2025-12-02 11:21:47.525 254904 DEBUG nova.network.os_vif_util [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Converting VIF {"id": "f393e1c3-9bd9-4748-a528-b99dd193646c", "address": "fa:16:3e:4e:4e:d3", "network": {"id": "df73b9ab-de8d-40fa-9bf0-aa773bb32d3a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-950743948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff78a31f26746918caf04706b12b741", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf393e1c3-9b", "ovs_interfaceid": "f393e1c3-9bd9-4748-a528-b99dd193646c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:21:47 np0005542249 nova_compute[254900]: 2025-12-02 11:21:47.526 254904 DEBUG nova.network.os_vif_util [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4e:4e:d3,bridge_name='br-int',has_traffic_filtering=True,id=f393e1c3-9bd9-4748-a528-b99dd193646c,network=Network(df73b9ab-de8d-40fa-9bf0-aa773bb32d3a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf393e1c3-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:21:47 np0005542249 nova_compute[254900]: 2025-12-02 11:21:47.526 254904 DEBUG os_vif [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4e:4e:d3,bridge_name='br-int',has_traffic_filtering=True,id=f393e1c3-9bd9-4748-a528-b99dd193646c,network=Network(df73b9ab-de8d-40fa-9bf0-aa773bb32d3a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf393e1c3-9b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:21:47 np0005542249 nova_compute[254900]: 2025-12-02 11:21:47.528 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:47 np0005542249 nova_compute[254900]: 2025-12-02 11:21:47.528 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf393e1c3-9b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:21:47 np0005542249 nova_compute[254900]: 2025-12-02 11:21:47.596 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:47 np0005542249 nova_compute[254900]: 2025-12-02 11:21:47.600 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:21:47 np0005542249 nova_compute[254900]: 2025-12-02 11:21:47.604 254904 INFO os_vif [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4e:4e:d3,bridge_name='br-int',has_traffic_filtering=True,id=f393e1c3-9bd9-4748-a528-b99dd193646c,network=Network(df73b9ab-de8d-40fa-9bf0-aa773bb32d3a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf393e1c3-9b')#033[00m
Dec  2 06:21:47 np0005542249 neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a[273533]: [NOTICE]   (273537) : haproxy version is 2.8.14-c23fe91
Dec  2 06:21:47 np0005542249 neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a[273533]: [NOTICE]   (273537) : path to executable is /usr/sbin/haproxy
Dec  2 06:21:47 np0005542249 neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a[273533]: [WARNING]  (273537) : Exiting Master process...
Dec  2 06:21:47 np0005542249 neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a[273533]: [ALERT]    (273537) : Current worker (273539) exited with code 143 (Terminated)
Dec  2 06:21:47 np0005542249 neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a[273533]: [WARNING]  (273537) : All workers exited. Exiting... (0)
Dec  2 06:21:47 np0005542249 systemd[1]: libpod-3bad0a1e998dfbafd8d84da9451c4fe26d5d8ef8f5cce960b79de7607c8474bd.scope: Deactivated successfully.
Dec  2 06:21:47 np0005542249 podman[274182]: 2025-12-02 11:21:47.632998854 +0000 UTC m=+0.120532079 container died 3bad0a1e998dfbafd8d84da9451c4fe26d5d8ef8f5cce960b79de7607c8474bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Dec  2 06:21:47 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3bad0a1e998dfbafd8d84da9451c4fe26d5d8ef8f5cce960b79de7607c8474bd-userdata-shm.mount: Deactivated successfully.
Dec  2 06:21:47 np0005542249 systemd[1]: var-lib-containers-storage-overlay-a143d86d91350ae77ac171b9921569a963a18a6c1bf26fb20ac437bd8292d4c5-merged.mount: Deactivated successfully.
Dec  2 06:21:47 np0005542249 podman[274182]: 2025-12-02 11:21:47.674267428 +0000 UTC m=+0.161800623 container cleanup 3bad0a1e998dfbafd8d84da9451c4fe26d5d8ef8f5cce960b79de7607c8474bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:21:47 np0005542249 systemd[1]: libpod-conmon-3bad0a1e998dfbafd8d84da9451c4fe26d5d8ef8f5cce960b79de7607c8474bd.scope: Deactivated successfully.
Dec  2 06:21:47 np0005542249 nova_compute[254900]: 2025-12-02 11:21:47.770 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:47 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1212: 321 pgs: 321 active+clean; 293 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 299 KiB/s rd, 1.5 MiB/s wr, 140 op/s
Dec  2 06:21:48 np0005542249 nova_compute[254900]: 2025-12-02 11:21:48.055 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:48 np0005542249 podman[274238]: 2025-12-02 11:21:48.096752679 +0000 UTC m=+0.399407375 container remove 3bad0a1e998dfbafd8d84da9451c4fe26d5d8ef8f5cce960b79de7607c8474bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:21:48 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:48.104 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[eed2e485-7865-4325-a43d-b25b83106ef5]: (4, ('Tue Dec  2 11:21:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a (3bad0a1e998dfbafd8d84da9451c4fe26d5d8ef8f5cce960b79de7607c8474bd)\n3bad0a1e998dfbafd8d84da9451c4fe26d5d8ef8f5cce960b79de7607c8474bd\nTue Dec  2 11:21:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a (3bad0a1e998dfbafd8d84da9451c4fe26d5d8ef8f5cce960b79de7607c8474bd)\n3bad0a1e998dfbafd8d84da9451c4fe26d5d8ef8f5cce960b79de7607c8474bd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:48 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:48.106 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[1f474604-108f-4704-9df5-589861120722]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:48 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:48.108 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf73b9ab-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:21:48 np0005542249 nova_compute[254900]: 2025-12-02 11:21:48.111 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:48 np0005542249 kernel: tapdf73b9ab-d0: left promiscuous mode
Dec  2 06:21:48 np0005542249 nova_compute[254900]: 2025-12-02 11:21:48.136 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:48 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:48.140 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[b105f188-a514-4667-87c4-25debce4f1ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:48 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:48.156 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee77484-b627-4240-9762-0037e5bff7a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:48 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:48.158 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[49743ec1-ae63-415b-b7e5-3d60d36e34b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:48 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:48.176 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[1aea84f6-4051-4120-ad8e-79c41f707bbb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466570, 'reachable_time': 43585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274253, 'error': None, 'target': 'ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:48 np0005542249 systemd[1]: run-netns-ovnmeta\x2ddf73b9ab\x2dde8d\x2d40fa\x2d9bf0\x2daa773bb32d3a.mount: Deactivated successfully.
Dec  2 06:21:48 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:48.179 164036 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 06:21:48 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:48.179 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[1c5f5f1d-ed23-4a31-a7fc-6195d70971bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:21:48 np0005542249 nova_compute[254900]: 2025-12-02 11:21:48.335 254904 INFO nova.virt.libvirt.driver [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Deleting instance files /var/lib/nova/instances/5771f44f-b324-4d11-b452-a2c22e990c48_del#033[00m
Dec  2 06:21:48 np0005542249 nova_compute[254900]: 2025-12-02 11:21:48.335 254904 INFO nova.virt.libvirt.driver [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Deletion of /var/lib/nova/instances/5771f44f-b324-4d11-b452-a2c22e990c48_del complete#033[00m
Dec  2 06:21:48 np0005542249 nova_compute[254900]: 2025-12-02 11:21:48.409 254904 INFO nova.compute.manager [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Took 1.14 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:21:48 np0005542249 nova_compute[254900]: 2025-12-02 11:21:48.410 254904 DEBUG oslo.service.loopingcall [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:21:48 np0005542249 nova_compute[254900]: 2025-12-02 11:21:48.410 254904 DEBUG nova.compute.manager [-] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:21:48 np0005542249 nova_compute[254900]: 2025-12-02 11:21:48.410 254904 DEBUG nova.network.neutron [-] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:21:49 np0005542249 nova_compute[254900]: 2025-12-02 11:21:49.117 254904 DEBUG nova.network.neutron [-] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:21:49 np0005542249 nova_compute[254900]: 2025-12-02 11:21:49.151 254904 INFO nova.compute.manager [-] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Took 0.74 seconds to deallocate network for instance.#033[00m
Dec  2 06:21:49 np0005542249 nova_compute[254900]: 2025-12-02 11:21:49.243 254904 DEBUG nova.compute.manager [req-e7c358e1-75ea-4b63-8cd3-5d79c3eee00c req-369878d4-e8ac-4bcd-a93f-7ff51bb4c62a 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Received event network-vif-deleted-f393e1c3-9bd9-4748-a528-b99dd193646c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:21:49 np0005542249 nova_compute[254900]: 2025-12-02 11:21:49.286 254904 WARNING nova.volume.cinder [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Attachment eb500d95-67ac-482d-80d1-24cc277f71b2 does not exist. Ignoring.: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = eb500d95-67ac-482d-80d1-24cc277f71b2. (HTTP 404) (Request-ID: req-579e8d8f-8ed4-40dd-b91f-1057322df807)#033[00m
Dec  2 06:21:49 np0005542249 nova_compute[254900]: 2025-12-02 11:21:49.287 254904 INFO nova.compute.manager [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Took 0.14 seconds to detach 1 volumes for instance.#033[00m
Dec  2 06:21:49 np0005542249 nova_compute[254900]: 2025-12-02 11:21:49.334 254904 DEBUG oslo_concurrency.lockutils [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:21:49 np0005542249 nova_compute[254900]: 2025-12-02 11:21:49.335 254904 DEBUG oslo_concurrency.lockutils [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:21:49 np0005542249 nova_compute[254900]: 2025-12-02 11:21:49.409 254904 DEBUG oslo_concurrency.processutils [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:21:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:21:49 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1646739754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:21:49 np0005542249 nova_compute[254900]: 2025-12-02 11:21:49.851 254904 DEBUG oslo_concurrency.processutils [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:21:49 np0005542249 nova_compute[254900]: 2025-12-02 11:21:49.859 254904 DEBUG nova.compute.provider_tree [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:21:49 np0005542249 nova_compute[254900]: 2025-12-02 11:21:49.891 254904 DEBUG nova.scheduler.client.report [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:21:49 np0005542249 nova_compute[254900]: 2025-12-02 11:21:49.919 254904 DEBUG oslo_concurrency.lockutils [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:21:49 np0005542249 nova_compute[254900]: 2025-12-02 11:21:49.955 254904 INFO nova.scheduler.client.report [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Deleted allocations for instance 5771f44f-b324-4d11-b452-a2c22e990c48#033[00m
Dec  2 06:21:49 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1213: 321 pgs: 321 active+clean; 270 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 229 KiB/s rd, 1.2 MiB/s wr, 109 op/s
Dec  2 06:21:50 np0005542249 nova_compute[254900]: 2025-12-02 11:21:50.033 254904 DEBUG oslo_concurrency.lockutils [None req-2709a620-0b4d-4ac2-82fc-4e08972a926a e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "5771f44f-b324-4d11-b452-a2c22e990c48" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:21:50 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:50.076 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:23:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:25:d0:a1:75:ed'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:21:50 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:50.078 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 06:21:50 np0005542249 nova_compute[254900]: 2025-12-02 11:21:50.115 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:21:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1307886451' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:21:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:21:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1307886451' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:21:51.851396) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674511851428, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1314, "num_deletes": 268, "total_data_size": 1710807, "memory_usage": 1742848, "flush_reason": "Manual Compaction"}
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674511862786, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1656025, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23717, "largest_seqno": 25030, "table_properties": {"data_size": 1649745, "index_size": 3483, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13749, "raw_average_key_size": 20, "raw_value_size": 1636808, "raw_average_value_size": 2392, "num_data_blocks": 154, "num_entries": 684, "num_filter_entries": 684, "num_deletions": 268, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764674438, "oldest_key_time": 1764674438, "file_creation_time": 1764674511, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 11427 microseconds, and 4963 cpu microseconds.
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:21:51.862823) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1656025 bytes OK
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:21:51.862839) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:21:51.864214) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:21:51.864227) EVENT_LOG_v1 {"time_micros": 1764674511864222, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:21:51.864242) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1704593, prev total WAL file size 1704593, number of live WAL files 2.
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:21:51.864868) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1617KB)], [53(8922KB)]
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674511864925, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10793083, "oldest_snapshot_seqno": -1}
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5238 keys, 10698355 bytes, temperature: kUnknown
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674511931255, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 10698355, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10657107, "index_size": 27058, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13125, "raw_key_size": 129951, "raw_average_key_size": 24, "raw_value_size": 10556473, "raw_average_value_size": 2015, "num_data_blocks": 1123, "num_entries": 5238, "num_filter_entries": 5238, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672515, "oldest_key_time": 0, "file_creation_time": 1764674511, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:21:51.931500) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 10698355 bytes
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:21:51.932715) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.5 rd, 161.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 8.7 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(13.0) write-amplify(6.5) OK, records in: 5779, records dropped: 541 output_compression: NoCompression
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:21:51.932735) EVENT_LOG_v1 {"time_micros": 1764674511932724, "job": 28, "event": "compaction_finished", "compaction_time_micros": 66402, "compaction_time_cpu_micros": 21774, "output_level": 6, "num_output_files": 1, "total_output_size": 10698355, "num_input_records": 5779, "num_output_records": 5238, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674511933384, "job": 28, "event": "table_file_deletion", "file_number": 55}
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674511935288, "job": 28, "event": "table_file_deletion", "file_number": 53}
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:21:51.864772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:21:51.935394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:21:51.935399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:21:51.935401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:21:51.935403) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:21:51 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:21:51.935405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:21:51 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1215: 321 pgs: 321 active+clean; 235 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 27 KiB/s wr, 39 op/s
Dec  2 06:21:52 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:21:52.080 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:21:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:21:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1703140398' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:21:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:21:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1703140398' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:21:52 np0005542249 nova_compute[254900]: 2025-12-02 11:21:52.596 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:53 np0005542249 nova_compute[254900]: 2025-12-02 11:21:53.057 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Dec  2 06:21:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Dec  2 06:21:53 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Dec  2 06:21:53 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1217: 321 pgs: 321 active+clean; 206 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 24 KiB/s wr, 86 op/s
Dec  2 06:21:54 np0005542249 nova_compute[254900]: 2025-12-02 11:21:54.377 254904 DEBUG oslo_concurrency.lockutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Acquiring lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:21:54 np0005542249 nova_compute[254900]: 2025-12-02 11:21:54.378 254904 DEBUG oslo_concurrency.lockutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:21:54 np0005542249 nova_compute[254900]: 2025-12-02 11:21:54.413 254904 DEBUG nova.compute.manager [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 06:21:54 np0005542249 nova_compute[254900]: 2025-12-02 11:21:54.480 254904 DEBUG oslo_concurrency.lockutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:21:54 np0005542249 nova_compute[254900]: 2025-12-02 11:21:54.480 254904 DEBUG oslo_concurrency.lockutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:21:54 np0005542249 nova_compute[254900]: 2025-12-02 11:21:54.486 254904 DEBUG nova.virt.hardware [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 06:21:54 np0005542249 nova_compute[254900]: 2025-12-02 11:21:54.487 254904 INFO nova.compute.claims [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 06:21:54 np0005542249 nova_compute[254900]: 2025-12-02 11:21:54.596 254904 DEBUG oslo_concurrency.processutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:21:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:21:55 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2229595983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:21:55 np0005542249 nova_compute[254900]: 2025-12-02 11:21:55.069 254904 DEBUG oslo_concurrency.processutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:21:55 np0005542249 nova_compute[254900]: 2025-12-02 11:21:55.079 254904 DEBUG nova.compute.provider_tree [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:21:55 np0005542249 nova_compute[254900]: 2025-12-02 11:21:55.098 254904 DEBUG nova.scheduler.client.report [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:21:55 np0005542249 nova_compute[254900]: 2025-12-02 11:21:55.125 254904 DEBUG oslo_concurrency.lockutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:21:55 np0005542249 nova_compute[254900]: 2025-12-02 11:21:55.126 254904 DEBUG nova.compute.manager [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 06:21:55 np0005542249 nova_compute[254900]: 2025-12-02 11:21:55.171 254904 DEBUG nova.compute.manager [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 06:21:55 np0005542249 nova_compute[254900]: 2025-12-02 11:21:55.172 254904 DEBUG nova.network.neutron [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 06:21:55 np0005542249 nova_compute[254900]: 2025-12-02 11:21:55.190 254904 INFO nova.virt.libvirt.driver [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 06:21:55 np0005542249 nova_compute[254900]: 2025-12-02 11:21:55.204 254904 DEBUG nova.compute.manager [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 06:21:55 np0005542249 nova_compute[254900]: 2025-12-02 11:21:55.331 254904 DEBUG nova.compute.manager [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 06:21:55 np0005542249 nova_compute[254900]: 2025-12-02 11:21:55.333 254904 DEBUG nova.virt.libvirt.driver [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 06:21:55 np0005542249 nova_compute[254900]: 2025-12-02 11:21:55.334 254904 INFO nova.virt.libvirt.driver [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Creating image(s)#033[00m
Dec  2 06:21:55 np0005542249 nova_compute[254900]: 2025-12-02 11:21:55.368 254904 DEBUG nova.storage.rbd_utils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] rbd image b2e17f69-4b2a-4abd-b738-61cd5813d48e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:21:55 np0005542249 nova_compute[254900]: 2025-12-02 11:21:55.405 254904 DEBUG nova.storage.rbd_utils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] rbd image b2e17f69-4b2a-4abd-b738-61cd5813d48e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:21:55 np0005542249 nova_compute[254900]: 2025-12-02 11:21:55.442 254904 DEBUG nova.storage.rbd_utils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] rbd image b2e17f69-4b2a-4abd-b738-61cd5813d48e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:21:55 np0005542249 nova_compute[254900]: 2025-12-02 11:21:55.448 254904 DEBUG oslo_concurrency.processutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:21:55 np0005542249 nova_compute[254900]: 2025-12-02 11:21:55.565 254904 DEBUG oslo_concurrency.processutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 --force-share --output=json" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:21:55 np0005542249 nova_compute[254900]: 2025-12-02 11:21:55.566 254904 DEBUG oslo_concurrency.lockutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Acquiring lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:21:55 np0005542249 nova_compute[254900]: 2025-12-02 11:21:55.567 254904 DEBUG oslo_concurrency.lockutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:21:55 np0005542249 nova_compute[254900]: 2025-12-02 11:21:55.568 254904 DEBUG oslo_concurrency.lockutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:21:55 np0005542249 nova_compute[254900]: 2025-12-02 11:21:55.603 254904 DEBUG nova.storage.rbd_utils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] rbd image b2e17f69-4b2a-4abd-b738-61cd5813d48e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:21:55 np0005542249 nova_compute[254900]: 2025-12-02 11:21:55.609 254904 DEBUG oslo_concurrency.processutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 b2e17f69-4b2a-4abd-b738-61cd5813d48e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:21:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Dec  2 06:21:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Dec  2 06:21:55 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Dec  2 06:21:55 np0005542249 nova_compute[254900]: 2025-12-02 11:21:55.953 254904 DEBUG oslo_concurrency.processutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 b2e17f69-4b2a-4abd-b738-61cd5813d48e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:21:55 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1219: 321 pgs: 321 active+clean; 188 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 26 KiB/s wr, 94 op/s
Dec  2 06:21:56 np0005542249 nova_compute[254900]: 2025-12-02 11:21:56.013 254904 DEBUG nova.storage.rbd_utils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] resizing rbd image b2e17f69-4b2a-4abd-b738-61cd5813d48e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec  2 06:21:56 np0005542249 nova_compute[254900]: 2025-12-02 11:21:56.146 254904 DEBUG nova.policy [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8dfcddc04ea44d4085721856fb3f3d12', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd0fe4a9242c84683be1c02df04c2dbf3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  2 06:21:56 np0005542249 nova_compute[254900]: 2025-12-02 11:21:56.156 254904 DEBUG nova.objects.instance [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lazy-loading 'migration_context' on Instance uuid b2e17f69-4b2a-4abd-b738-61cd5813d48e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  2 06:21:56 np0005542249 nova_compute[254900]: 2025-12-02 11:21:56.177 254904 DEBUG nova.virt.libvirt.driver [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  2 06:21:56 np0005542249 nova_compute[254900]: 2025-12-02 11:21:56.178 254904 DEBUG nova.virt.libvirt.driver [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Ensure instance console log exists: /var/lib/nova/instances/b2e17f69-4b2a-4abd-b738-61cd5813d48e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  2 06:21:56 np0005542249 nova_compute[254900]: 2025-12-02 11:21:56.179 254904 DEBUG oslo_concurrency.lockutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:21:56 np0005542249 nova_compute[254900]: 2025-12-02 11:21:56.181 254904 DEBUG oslo_concurrency.lockutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:21:56 np0005542249 nova_compute[254900]: 2025-12-02 11:21:56.182 254904 DEBUG oslo_concurrency.lockutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:21:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:21:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:21:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:21:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:21:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:21:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:21:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:21:56 np0005542249 nova_compute[254900]: 2025-12-02 11:21:56.844 254904 DEBUG nova.network.neutron [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Successfully created port: 37bb6586-c8c5-4107-9775-10964531e11e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  2 06:21:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Dec  2 06:21:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Dec  2 06:21:56 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Dec  2 06:21:57 np0005542249 nova_compute[254900]: 2025-12-02 11:21:57.625 254904 DEBUG nova.network.neutron [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Successfully updated port: 37bb6586-c8c5-4107-9775-10964531e11e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  2 06:21:57 np0005542249 nova_compute[254900]: 2025-12-02 11:21:57.628 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:21:57 np0005542249 nova_compute[254900]: 2025-12-02 11:21:57.647 254904 DEBUG oslo_concurrency.lockutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Acquiring lock "refresh_cache-b2e17f69-4b2a-4abd-b738-61cd5813d48e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  2 06:21:57 np0005542249 nova_compute[254900]: 2025-12-02 11:21:57.647 254904 DEBUG oslo_concurrency.lockutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Acquired lock "refresh_cache-b2e17f69-4b2a-4abd-b738-61cd5813d48e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  2 06:21:57 np0005542249 nova_compute[254900]: 2025-12-02 11:21:57.648 254904 DEBUG nova.network.neutron [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  2 06:21:57 np0005542249 nova_compute[254900]: 2025-12-02 11:21:57.701 254904 DEBUG nova.compute.manager [req-bf15d966-f9c0-4ebd-bd03-5f87536fd8bb req-211a28a8-d233-43cb-a888-0c60bc7706ab 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Received event network-changed-37bb6586-c8c5-4107-9775-10964531e11e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  2 06:21:57 np0005542249 nova_compute[254900]: 2025-12-02 11:21:57.701 254904 DEBUG nova.compute.manager [req-bf15d966-f9c0-4ebd-bd03-5f87536fd8bb req-211a28a8-d233-43cb-a888-0c60bc7706ab 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Refreshing instance network info cache due to event network-changed-37bb6586-c8c5-4107-9775-10964531e11e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  2 06:21:57 np0005542249 nova_compute[254900]: 2025-12-02 11:21:57.701 254904 DEBUG oslo_concurrency.lockutils [req-bf15d966-f9c0-4ebd-bd03-5f87536fd8bb req-211a28a8-d233-43cb-a888-0c60bc7706ab 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-b2e17f69-4b2a-4abd-b738-61cd5813d48e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  2 06:21:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec  2 06:21:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  2 06:21:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:21:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:21:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:21:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:21:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:21:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:21:57 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 42a86387-4886-44bf-9791-e51f09b28d50 does not exist
Dec  2 06:21:57 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 9d73a3be-3593-4748-bda3-176a2dd70a18 does not exist
Dec  2 06:21:57 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 38537dbd-8779-45a9-bb6e-8df77f250df7 does not exist
Dec  2 06:21:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:21:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:21:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:21:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:21:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:21:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:21:57 np0005542249 nova_compute[254900]: 2025-12-02 11:21:57.798 254904 DEBUG nova.network.neutron [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  2 06:21:57 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  2 06:21:57 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:21:57 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:21:57 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:21:57 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1221: 321 pgs: 321 active+clean; 210 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 3.5 MiB/s wr, 141 op/s
Dec  2 06:21:58 np0005542249 nova_compute[254900]: 2025-12-02 11:21:58.062 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:21:58 np0005542249 podman[274737]: 2025-12-02 11:21:58.425884632 +0000 UTC m=+0.050379214 container create b378906612ebbd72c4f936c56bccf303196415ae53f4dfa5f87ce7e86396836a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:21:58 np0005542249 systemd[1]: Started libpod-conmon-b378906612ebbd72c4f936c56bccf303196415ae53f4dfa5f87ce7e86396836a.scope.
Dec  2 06:21:58 np0005542249 podman[274737]: 2025-12-02 11:21:58.403599897 +0000 UTC m=+0.028094509 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:21:58 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:21:58 np0005542249 podman[274737]: 2025-12-02 11:21:58.547514819 +0000 UTC m=+0.172009481 container init b378906612ebbd72c4f936c56bccf303196415ae53f4dfa5f87ce7e86396836a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dhawan, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  2 06:21:58 np0005542249 podman[274737]: 2025-12-02 11:21:58.558140537 +0000 UTC m=+0.182635139 container start b378906612ebbd72c4f936c56bccf303196415ae53f4dfa5f87ce7e86396836a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dhawan, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  2 06:21:58 np0005542249 podman[274737]: 2025-12-02 11:21:58.561978298 +0000 UTC m=+0.186472960 container attach b378906612ebbd72c4f936c56bccf303196415ae53f4dfa5f87ce7e86396836a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  2 06:21:58 np0005542249 infallible_dhawan[274754]: 167 167
Dec  2 06:21:58 np0005542249 systemd[1]: libpod-b378906612ebbd72c4f936c56bccf303196415ae53f4dfa5f87ce7e86396836a.scope: Deactivated successfully.
Dec  2 06:21:58 np0005542249 conmon[274754]: conmon b378906612ebbd72c4f9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b378906612ebbd72c4f936c56bccf303196415ae53f4dfa5f87ce7e86396836a.scope/container/memory.events
Dec  2 06:21:58 np0005542249 podman[274737]: 2025-12-02 11:21:58.568456209 +0000 UTC m=+0.192950851 container died b378906612ebbd72c4f936c56bccf303196415ae53f4dfa5f87ce7e86396836a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dhawan, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Dec  2 06:21:58 np0005542249 systemd[1]: var-lib-containers-storage-overlay-4e74b3ca7ef7b20e36eba6bde555dd9a9879329afb1fb5ada6c17991d555957c-merged.mount: Deactivated successfully.
Dec  2 06:21:58 np0005542249 podman[274737]: 2025-12-02 11:21:58.620673241 +0000 UTC m=+0.245167823 container remove b378906612ebbd72c4f936c56bccf303196415ae53f4dfa5f87ce7e86396836a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  2 06:21:58 np0005542249 systemd[1]: libpod-conmon-b378906612ebbd72c4f936c56bccf303196415ae53f4dfa5f87ce7e86396836a.scope: Deactivated successfully.
Dec  2 06:21:58 np0005542249 nova_compute[254900]: 2025-12-02 11:21:58.719 254904 DEBUG nova.network.neutron [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Updating instance_info_cache with network_info: [{"id": "37bb6586-c8c5-4107-9775-10964531e11e", "address": "fa:16:3e:d8:49:e3", "network": {"id": "29518419-c819-4471-8719-e72e25a6d1be", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-51492735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0fe4a9242c84683be1c02df04c2dbf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37bb6586-c8", "ovs_interfaceid": "37bb6586-c8c5-4107-9775-10964531e11e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  2 06:21:58 np0005542249 nova_compute[254900]: 2025-12-02 11:21:58.735 254904 DEBUG oslo_concurrency.lockutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Releasing lock "refresh_cache-b2e17f69-4b2a-4abd-b738-61cd5813d48e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  2 06:21:58 np0005542249 nova_compute[254900]: 2025-12-02 11:21:58.735 254904 DEBUG nova.compute.manager [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Instance network_info: |[{"id": "37bb6586-c8c5-4107-9775-10964531e11e", "address": "fa:16:3e:d8:49:e3", "network": {"id": "29518419-c819-4471-8719-e72e25a6d1be", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-51492735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0fe4a9242c84683be1c02df04c2dbf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37bb6586-c8", "ovs_interfaceid": "37bb6586-c8c5-4107-9775-10964531e11e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec  2 06:21:58 np0005542249 nova_compute[254900]: 2025-12-02 11:21:58.736 254904 DEBUG oslo_concurrency.lockutils [req-bf15d966-f9c0-4ebd-bd03-5f87536fd8bb req-211a28a8-d233-43cb-a888-0c60bc7706ab 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-b2e17f69-4b2a-4abd-b738-61cd5813d48e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  2 06:21:58 np0005542249 nova_compute[254900]: 2025-12-02 11:21:58.736 254904 DEBUG nova.network.neutron [req-bf15d966-f9c0-4ebd-bd03-5f87536fd8bb req-211a28a8-d233-43cb-a888-0c60bc7706ab 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Refreshing network info cache for port 37bb6586-c8c5-4107-9775-10964531e11e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  2 06:21:58 np0005542249 nova_compute[254900]: 2025-12-02 11:21:58.738 254904 DEBUG nova.virt.libvirt.driver [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Start _get_guest_xml network_info=[{"id": "37bb6586-c8c5-4107-9775-10964531e11e", "address": "fa:16:3e:d8:49:e3", "network": {"id": "29518419-c819-4471-8719-e72e25a6d1be", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-51492735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0fe4a9242c84683be1c02df04c2dbf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37bb6586-c8", "ovs_interfaceid": "37bb6586-c8c5-4107-9775-10964531e11e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'image_id': '5a40f66c-ab43-47dd-9880-e59f9fa2c60e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec  2 06:21:58 np0005542249 nova_compute[254900]: 2025-12-02 11:21:58.744 254904 WARNING nova.virt.libvirt.driver [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  2 06:21:58 np0005542249 nova_compute[254900]: 2025-12-02 11:21:58.749 254904 DEBUG nova.virt.libvirt.host [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec  2 06:21:58 np0005542249 nova_compute[254900]: 2025-12-02 11:21:58.750 254904 DEBUG nova.virt.libvirt.host [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec  2 06:21:58 np0005542249 nova_compute[254900]: 2025-12-02 11:21:58.757 254904 DEBUG nova.virt.libvirt.host [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec  2 06:21:58 np0005542249 nova_compute[254900]: 2025-12-02 11:21:58.758 254904 DEBUG nova.virt.libvirt.host [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec  2 06:21:58 np0005542249 nova_compute[254900]: 2025-12-02 11:21:58.759 254904 DEBUG nova.virt.libvirt.driver [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec  2 06:21:58 np0005542249 nova_compute[254900]: 2025-12-02 11:21:58.759 254904 DEBUG nova.virt.hardware [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec  2 06:21:58 np0005542249 nova_compute[254900]: 2025-12-02 11:21:58.760 254904 DEBUG nova.virt.hardware [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec  2 06:21:58 np0005542249 nova_compute[254900]: 2025-12-02 11:21:58.760 254904 DEBUG nova.virt.hardware [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec  2 06:21:58 np0005542249 nova_compute[254900]: 2025-12-02 11:21:58.760 254904 DEBUG nova.virt.hardware [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec  2 06:21:58 np0005542249 nova_compute[254900]: 2025-12-02 11:21:58.761 254904 DEBUG nova.virt.hardware [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec  2 06:21:58 np0005542249 nova_compute[254900]: 2025-12-02 11:21:58.761 254904 DEBUG nova.virt.hardware [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec  2 06:21:58 np0005542249 nova_compute[254900]: 2025-12-02 11:21:58.761 254904 DEBUG nova.virt.hardware [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec  2 06:21:58 np0005542249 nova_compute[254900]: 2025-12-02 11:21:58.761 254904 DEBUG nova.virt.hardware [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec  2 06:21:58 np0005542249 nova_compute[254900]: 2025-12-02 11:21:58.762 254904 DEBUG nova.virt.hardware [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec  2 06:21:58 np0005542249 nova_compute[254900]: 2025-12-02 11:21:58.762 254904 DEBUG nova.virt.hardware [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec  2 06:21:58 np0005542249 nova_compute[254900]: 2025-12-02 11:21:58.762 254904 DEBUG nova.virt.hardware [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec  2 06:21:58 np0005542249 nova_compute[254900]: 2025-12-02 11:21:58.766 254904 DEBUG oslo_concurrency.processutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:21:58 np0005542249 podman[274779]: 2025-12-02 11:21:58.851877575 +0000 UTC m=+0.050272232 container create 9bf00a4c400d6e349b5869c46e1c86b38d36927402df37475fa88bac75ce2e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:21:58 np0005542249 systemd[1]: Started libpod-conmon-9bf00a4c400d6e349b5869c46e1c86b38d36927402df37475fa88bac75ce2e67.scope.
Dec  2 06:21:58 np0005542249 podman[274779]: 2025-12-02 11:21:58.830649368 +0000 UTC m=+0.029044045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:21:58 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:21:58 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd47233a3bf8682da28c417cfe70b4a2728f770800742331c35594b384281f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:21:58 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd47233a3bf8682da28c417cfe70b4a2728f770800742331c35594b384281f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:21:58 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd47233a3bf8682da28c417cfe70b4a2728f770800742331c35594b384281f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:21:58 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd47233a3bf8682da28c417cfe70b4a2728f770800742331c35594b384281f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:21:58 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd47233a3bf8682da28c417cfe70b4a2728f770800742331c35594b384281f5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:21:58 np0005542249 podman[274779]: 2025-12-02 11:21:58.948618587 +0000 UTC m=+0.147013274 container init 9bf00a4c400d6e349b5869c46e1c86b38d36927402df37475fa88bac75ce2e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:21:58 np0005542249 podman[274779]: 2025-12-02 11:21:58.959791911 +0000 UTC m=+0.158186568 container start 9bf00a4c400d6e349b5869c46e1c86b38d36927402df37475fa88bac75ce2e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  2 06:21:58 np0005542249 podman[274779]: 2025-12-02 11:21:58.962644766 +0000 UTC m=+0.161039423 container attach 9bf00a4c400d6e349b5869c46e1c86b38d36927402df37475fa88bac75ce2e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  2 06:21:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:21:59 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2673346701' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.196 254904 DEBUG oslo_concurrency.processutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.232 254904 DEBUG nova.storage.rbd_utils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] rbd image b2e17f69-4b2a-4abd-b738-61cd5813d48e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.239 254904 DEBUG oslo_concurrency.processutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:21:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:21:59 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1220970942' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.664 254904 DEBUG oslo_concurrency.processutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.665 254904 DEBUG nova.virt.libvirt.vif [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:21:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-1483529601',display_name='tempest-VolumesExtendAttachedTest-instance-1483529601',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-1483529601',id=10,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCnOZUzLN6BwPEo2cXeff79zs4c1KWXCWq8pmV3Yw2NDx+MKi80mVUlDZ0dl8tKqAxFlSUkXWRk36c9C/Kg4Ld+Rv8uR01bm1saW6nRqne/ZVSRaX5WsvDoDbjF8u71n0A==',key_name='tempest-keypair-1334584430',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d0fe4a9242c84683be1c02df04c2dbf3',ramdisk_id='',reservation_id='r-gir9z6nx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesExtendAttachedTest-1434628380',owner_user_name='tempest-VolumesExtendAttachedTest-1434628380-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:21:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8dfcddc04ea44d4085721856fb3f3d12',uuid=b2e17f69-4b2a-4abd-b738-61cd5813d48e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "37bb6586-c8c5-4107-9775-10964531e11e", "address": "fa:16:3e:d8:49:e3", "network": {"id": "29518419-c819-4471-8719-e72e25a6d1be", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-51492735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0fe4a9242c84683be1c02df04c2dbf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37bb6586-c8", "ovs_interfaceid": "37bb6586-c8c5-4107-9775-10964531e11e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.666 254904 DEBUG nova.network.os_vif_util [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Converting VIF {"id": "37bb6586-c8c5-4107-9775-10964531e11e", "address": "fa:16:3e:d8:49:e3", "network": {"id": "29518419-c819-4471-8719-e72e25a6d1be", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-51492735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0fe4a9242c84683be1c02df04c2dbf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37bb6586-c8", "ovs_interfaceid": "37bb6586-c8c5-4107-9775-10964531e11e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.667 254904 DEBUG nova.network.os_vif_util [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:49:e3,bridge_name='br-int',has_traffic_filtering=True,id=37bb6586-c8c5-4107-9775-10964531e11e,network=Network(29518419-c819-4471-8719-e72e25a6d1be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37bb6586-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.668 254904 DEBUG nova.objects.instance [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lazy-loading 'pci_devices' on Instance uuid b2e17f69-4b2a-4abd-b738-61cd5813d48e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.684 254904 DEBUG nova.virt.libvirt.driver [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:21:59 np0005542249 nova_compute[254900]:  <uuid>b2e17f69-4b2a-4abd-b738-61cd5813d48e</uuid>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:  <name>instance-0000000a</name>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <nova:name>tempest-VolumesExtendAttachedTest-instance-1483529601</nova:name>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:21:58</nova:creationTime>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:21:59 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:        <nova:user uuid="8dfcddc04ea44d4085721856fb3f3d12">tempest-VolumesExtendAttachedTest-1434628380-project-member</nova:user>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:        <nova:project uuid="d0fe4a9242c84683be1c02df04c2dbf3">tempest-VolumesExtendAttachedTest-1434628380</nova:project>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <nova:root type="image" uuid="5a40f66c-ab43-47dd-9880-e59f9fa2c60e"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:        <nova:port uuid="37bb6586-c8c5-4107-9775-10964531e11e">
Dec  2 06:21:59 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <entry name="serial">b2e17f69-4b2a-4abd-b738-61cd5813d48e</entry>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <entry name="uuid">b2e17f69-4b2a-4abd-b738-61cd5813d48e</entry>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/b2e17f69-4b2a-4abd-b738-61cd5813d48e_disk">
Dec  2 06:21:59 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:21:59 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/b2e17f69-4b2a-4abd-b738-61cd5813d48e_disk.config">
Dec  2 06:21:59 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:21:59 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:d8:49:e3"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <target dev="tap37bb6586-c8"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/b2e17f69-4b2a-4abd-b738-61cd5813d48e/console.log" append="off"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:21:59 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:21:59 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:21:59 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:21:59 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.684 254904 DEBUG nova.compute.manager [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Preparing to wait for external event network-vif-plugged-37bb6586-c8c5-4107-9775-10964531e11e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.684 254904 DEBUG oslo_concurrency.lockutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Acquiring lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.685 254904 DEBUG oslo_concurrency.lockutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.685 254904 DEBUG oslo_concurrency.lockutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.685 254904 DEBUG nova.virt.libvirt.vif [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:21:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-1483529601',display_name='tempest-VolumesExtendAttachedTest-instance-1483529601',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-1483529601',id=10,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCnOZUzLN6BwPEo2cXeff79zs4c1KWXCWq8pmV3Yw2NDx+MKi80mVUlDZ0dl8tKqAxFlSUkXWRk36c9C/Kg4Ld+Rv8uR01bm1saW6nRqne/ZVSRaX5WsvDoDbjF8u71n0A==',key_name='tempest-keypair-1334584430',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d0fe4a9242c84683be1c02df04c2dbf3',ramdisk_id='',reservation_id='r-gir9z6nx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesExtendAttachedTest-1434628380',owner_user_name='tempest-VolumesExtendAttachedTest-1434628380-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:21:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8dfcddc04ea44d4085721856fb3f3d12',uuid=b2e17f69-4b2a-4abd-b738-61cd5813d48e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "37bb6586-c8c5-4107-9775-10964531e11e", "address": "fa:16:3e:d8:49:e3", "network": {"id": "29518419-c819-4471-8719-e72e25a6d1be", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-51492735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0fe4a9242c84683be1c02df04c2dbf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37bb6586-c8", "ovs_interfaceid": "37bb6586-c8c5-4107-9775-10964531e11e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.686 254904 DEBUG nova.network.os_vif_util [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Converting VIF {"id": "37bb6586-c8c5-4107-9775-10964531e11e", "address": "fa:16:3e:d8:49:e3", "network": {"id": "29518419-c819-4471-8719-e72e25a6d1be", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-51492735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0fe4a9242c84683be1c02df04c2dbf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37bb6586-c8", "ovs_interfaceid": "37bb6586-c8c5-4107-9775-10964531e11e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.686 254904 DEBUG nova.network.os_vif_util [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:49:e3,bridge_name='br-int',has_traffic_filtering=True,id=37bb6586-c8c5-4107-9775-10964531e11e,network=Network(29518419-c819-4471-8719-e72e25a6d1be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37bb6586-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.686 254904 DEBUG os_vif [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:49:e3,bridge_name='br-int',has_traffic_filtering=True,id=37bb6586-c8c5-4107-9775-10964531e11e,network=Network(29518419-c819-4471-8719-e72e25a6d1be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37bb6586-c8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.687 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.687 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.687 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.690 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.691 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap37bb6586-c8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.691 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap37bb6586-c8, col_values=(('external_ids', {'iface-id': '37bb6586-c8c5-4107-9775-10964531e11e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d8:49:e3', 'vm-uuid': 'b2e17f69-4b2a-4abd-b738-61cd5813d48e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:21:59 np0005542249 NetworkManager[48987]: <info>  [1764674519.7229] manager: (tap37bb6586-c8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.721 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.726 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.731 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.732 254904 INFO os_vif [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:49:e3,bridge_name='br-int',has_traffic_filtering=True,id=37bb6586-c8c5-4107-9775-10964531e11e,network=Network(29518419-c819-4471-8719-e72e25a6d1be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37bb6586-c8')#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.789 254904 DEBUG nova.virt.libvirt.driver [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.791 254904 DEBUG nova.virt.libvirt.driver [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.791 254904 DEBUG nova.virt.libvirt.driver [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] No VIF found with MAC fa:16:3e:d8:49:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.792 254904 INFO nova.virt.libvirt.driver [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Using config drive#033[00m
Dec  2 06:21:59 np0005542249 nova_compute[254900]: 2025-12-02 11:21:59.814 254904 DEBUG nova.storage.rbd_utils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] rbd image b2e17f69-4b2a-4abd-b738-61cd5813d48e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:21:59 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1222: 321 pgs: 321 active+clean; 214 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 3.5 MiB/s wr, 101 op/s
Dec  2 06:22:00 np0005542249 xenodochial_almeida[274805]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:22:00 np0005542249 xenodochial_almeida[274805]: --> relative data size: 1.0
Dec  2 06:22:00 np0005542249 xenodochial_almeida[274805]: --> All data devices are unavailable
Dec  2 06:22:00 np0005542249 systemd[1]: libpod-9bf00a4c400d6e349b5869c46e1c86b38d36927402df37475fa88bac75ce2e67.scope: Deactivated successfully.
Dec  2 06:22:00 np0005542249 systemd[1]: libpod-9bf00a4c400d6e349b5869c46e1c86b38d36927402df37475fa88bac75ce2e67.scope: Consumed 1.032s CPU time.
Dec  2 06:22:00 np0005542249 podman[274906]: 2025-12-02 11:22:00.137104785 +0000 UTC m=+0.030162943 container died 9bf00a4c400d6e349b5869c46e1c86b38d36927402df37475fa88bac75ce2e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:22:00 np0005542249 systemd[1]: var-lib-containers-storage-overlay-2bd47233a3bf8682da28c417cfe70b4a2728f770800742331c35594b384281f5-merged.mount: Deactivated successfully.
Dec  2 06:22:00 np0005542249 podman[274906]: 2025-12-02 11:22:00.190798596 +0000 UTC m=+0.083856744 container remove 9bf00a4c400d6e349b5869c46e1c86b38d36927402df37475fa88bac75ce2e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:22:00 np0005542249 systemd[1]: libpod-conmon-9bf00a4c400d6e349b5869c46e1c86b38d36927402df37475fa88bac75ce2e67.scope: Deactivated successfully.
Dec  2 06:22:00 np0005542249 nova_compute[254900]: 2025-12-02 11:22:00.455 254904 INFO nova.virt.libvirt.driver [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Creating config drive at /var/lib/nova/instances/b2e17f69-4b2a-4abd-b738-61cd5813d48e/disk.config#033[00m
Dec  2 06:22:00 np0005542249 nova_compute[254900]: 2025-12-02 11:22:00.460 254904 DEBUG oslo_concurrency.processutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b2e17f69-4b2a-4abd-b738-61cd5813d48e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8z8uveor execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:22:00 np0005542249 nova_compute[254900]: 2025-12-02 11:22:00.479 254904 DEBUG nova.network.neutron [req-bf15d966-f9c0-4ebd-bd03-5f87536fd8bb req-211a28a8-d233-43cb-a888-0c60bc7706ab 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Updated VIF entry in instance network info cache for port 37bb6586-c8c5-4107-9775-10964531e11e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:22:00 np0005542249 nova_compute[254900]: 2025-12-02 11:22:00.480 254904 DEBUG nova.network.neutron [req-bf15d966-f9c0-4ebd-bd03-5f87536fd8bb req-211a28a8-d233-43cb-a888-0c60bc7706ab 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Updating instance_info_cache with network_info: [{"id": "37bb6586-c8c5-4107-9775-10964531e11e", "address": "fa:16:3e:d8:49:e3", "network": {"id": "29518419-c819-4471-8719-e72e25a6d1be", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-51492735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0fe4a9242c84683be1c02df04c2dbf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37bb6586-c8", "ovs_interfaceid": "37bb6586-c8c5-4107-9775-10964531e11e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:22:00 np0005542249 nova_compute[254900]: 2025-12-02 11:22:00.496 254904 DEBUG oslo_concurrency.lockutils [req-bf15d966-f9c0-4ebd-bd03-5f87536fd8bb req-211a28a8-d233-43cb-a888-0c60bc7706ab 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-b2e17f69-4b2a-4abd-b738-61cd5813d48e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:22:00 np0005542249 nova_compute[254900]: 2025-12-02 11:22:00.586 254904 DEBUG oslo_concurrency.processutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b2e17f69-4b2a-4abd-b738-61cd5813d48e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8z8uveor" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:22:00 np0005542249 nova_compute[254900]: 2025-12-02 11:22:00.613 254904 DEBUG nova.storage.rbd_utils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] rbd image b2e17f69-4b2a-4abd-b738-61cd5813d48e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:22:00 np0005542249 nova_compute[254900]: 2025-12-02 11:22:00.618 254904 DEBUG oslo_concurrency.processutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b2e17f69-4b2a-4abd-b738-61cd5813d48e/disk.config b2e17f69-4b2a-4abd-b738-61cd5813d48e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:22:00 np0005542249 nova_compute[254900]: 2025-12-02 11:22:00.752 254904 DEBUG oslo_concurrency.lockutils [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Acquiring lock "415ead6c-ffe0-4426-a145-1cb487cfa30f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:00 np0005542249 nova_compute[254900]: 2025-12-02 11:22:00.752 254904 DEBUG oslo_concurrency.lockutils [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lock "415ead6c-ffe0-4426-a145-1cb487cfa30f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:00 np0005542249 nova_compute[254900]: 2025-12-02 11:22:00.770 254904 DEBUG nova.objects.instance [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lazy-loading 'flavor' on Instance uuid 415ead6c-ffe0-4426-a145-1cb487cfa30f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:22:00 np0005542249 nova_compute[254900]: 2025-12-02 11:22:00.789 254904 DEBUG oslo_concurrency.processutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b2e17f69-4b2a-4abd-b738-61cd5813d48e/disk.config b2e17f69-4b2a-4abd-b738-61cd5813d48e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:22:00 np0005542249 nova_compute[254900]: 2025-12-02 11:22:00.792 254904 INFO nova.virt.libvirt.driver [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Deleting local config drive /var/lib/nova/instances/b2e17f69-4b2a-4abd-b738-61cd5813d48e/disk.config because it was imported into RBD.#033[00m
Dec  2 06:22:00 np0005542249 nova_compute[254900]: 2025-12-02 11:22:00.813 254904 DEBUG oslo_concurrency.lockutils [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lock "415ead6c-ffe0-4426-a145-1cb487cfa30f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:00 np0005542249 kernel: tap37bb6586-c8: entered promiscuous mode
Dec  2 06:22:00 np0005542249 NetworkManager[48987]: <info>  [1764674520.8965] manager: (tap37bb6586-c8): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Dec  2 06:22:00 np0005542249 nova_compute[254900]: 2025-12-02 11:22:00.899 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:00 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:00Z|00101|binding|INFO|Claiming lport 37bb6586-c8c5-4107-9775-10964531e11e for this chassis.
Dec  2 06:22:00 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:00Z|00102|binding|INFO|37bb6586-c8c5-4107-9775-10964531e11e: Claiming fa:16:3e:d8:49:e3 10.100.0.9
Dec  2 06:22:00 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:00.906 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:49:e3 10.100.0.9'], port_security=['fa:16:3e:d8:49:e3 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b2e17f69-4b2a-4abd-b738-61cd5813d48e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-29518419-c819-4471-8719-e72e25a6d1be', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0fe4a9242c84683be1c02df04c2dbf3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4740505d-999d-42bd-bfbf-9331ea52ddcd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cab70206-79f0-4509-94be-3054675bd95c, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=37bb6586-c8c5-4107-9775-10964531e11e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:22:00 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:00.907 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 37bb6586-c8c5-4107-9775-10964531e11e in datapath 29518419-c819-4471-8719-e72e25a6d1be bound to our chassis#033[00m
Dec  2 06:22:00 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:00.909 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 29518419-c819-4471-8719-e72e25a6d1be#033[00m
Dec  2 06:22:00 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:00.929 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[1ecd0ffe-9c6d-4293-89c4-e05b4a0cf0f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:00 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:00.930 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap29518419-c1 in ovnmeta-29518419-c819-4471-8719-e72e25a6d1be namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 06:22:00 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:00Z|00103|binding|INFO|Setting lport 37bb6586-c8c5-4107-9775-10964531e11e ovn-installed in OVS
Dec  2 06:22:00 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:00Z|00104|binding|INFO|Setting lport 37bb6586-c8c5-4107-9775-10964531e11e up in Southbound
Dec  2 06:22:00 np0005542249 nova_compute[254900]: 2025-12-02 11:22:00.932 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:00 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:00.934 262398 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap29518419-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 06:22:00 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:00.934 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[a0189778-da8c-45a7-a0af-e48f1019feb0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:00 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:00.935 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c8fd2a-9567-4753-ad40-7e494c618228]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:00 np0005542249 systemd-udevd[275125]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:22:00 np0005542249 podman[275102]: 2025-12-02 11:22:00.941911382 +0000 UTC m=+0.113199395 container create d8f218d7189fdbb7bf7e5d36f2359654df4f9e4cd34184e71103a9548ffe1b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:22:00 np0005542249 systemd-machined[216222]: New machine qemu-10-instance-0000000a.
Dec  2 06:22:00 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:00.947 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[df71beec-6de8-48a1-8a73-dd20f72aa4af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:00 np0005542249 NetworkManager[48987]: <info>  [1764674520.9508] device (tap37bb6586-c8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:22:00 np0005542249 NetworkManager[48987]: <info>  [1764674520.9519] device (tap37bb6586-c8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:22:00 np0005542249 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Dec  2 06:22:00 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:00.965 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[383af818-153b-4bda-9518-8bfadc10f0ba]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:00 np0005542249 systemd[1]: Started libpod-conmon-d8f218d7189fdbb7bf7e5d36f2359654df4f9e4cd34184e71103a9548ffe1b92.scope.
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:00.998 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e750ed-382a-4375-adb8-5903831c99c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:01 np0005542249 NetworkManager[48987]: <info>  [1764674521.0059] manager: (tap29518419-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/66)
Dec  2 06:22:01 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:22:01 np0005542249 podman[275102]: 2025-12-02 11:22:00.910446015 +0000 UTC m=+0.081734058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:22:01 np0005542249 systemd-udevd[275128]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:01.008 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[ecf33576-3512-4094-a678-33535687aa21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.025 254904 DEBUG oslo_concurrency.lockutils [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Acquiring lock "415ead6c-ffe0-4426-a145-1cb487cfa30f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.026 254904 DEBUG oslo_concurrency.lockutils [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lock "415ead6c-ffe0-4426-a145-1cb487cfa30f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.026 254904 INFO nova.compute.manager [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Attaching volume 18882187-066e-4edc-aa65-e42faf84995b to /dev/vdb#033[00m
Dec  2 06:22:01 np0005542249 podman[275102]: 2025-12-02 11:22:01.040439061 +0000 UTC m=+0.211727104 container init d8f218d7189fdbb7bf7e5d36f2359654df4f9e4cd34184e71103a9548ffe1b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haslett, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:01.042 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[75190ba9-5983-4f7d-9a84-fe3869d477f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:01.046 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd89a49-b3e7-4cc6-a67b-08e2c584f5d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:01 np0005542249 podman[275102]: 2025-12-02 11:22:01.051545383 +0000 UTC m=+0.222833396 container start d8f218d7189fdbb7bf7e5d36f2359654df4f9e4cd34184e71103a9548ffe1b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haslett, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:22:01 np0005542249 podman[275102]: 2025-12-02 11:22:01.055650111 +0000 UTC m=+0.226938114 container attach d8f218d7189fdbb7bf7e5d36f2359654df4f9e4cd34184e71103a9548ffe1b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 06:22:01 np0005542249 great_haslett[275137]: 167 167
Dec  2 06:22:01 np0005542249 systemd[1]: libpod-d8f218d7189fdbb7bf7e5d36f2359654df4f9e4cd34184e71103a9548ffe1b92.scope: Deactivated successfully.
Dec  2 06:22:01 np0005542249 podman[275102]: 2025-12-02 11:22:01.060371165 +0000 UTC m=+0.231659178 container died d8f218d7189fdbb7bf7e5d36f2359654df4f9e4cd34184e71103a9548ffe1b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haslett, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:22:01 np0005542249 NetworkManager[48987]: <info>  [1764674521.0749] device (tap29518419-c0): carrier: link connected
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:01.083 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[362296de-3a4f-44f2-ac0e-bd331e8257ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:01 np0005542249 systemd[1]: var-lib-containers-storage-overlay-ed48d7c3e4e8f94ece53a697141ec89548ea0e898b073027ec681595fa51af8c-merged.mount: Deactivated successfully.
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:01.106 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[22618693-357f-4953-8657-e73a44da3d28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap29518419-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:f1:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471389, 'reachable_time': 30325, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275174, 'error': None, 'target': 'ovnmeta-29518419-c819-4471-8719-e72e25a6d1be', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:01 np0005542249 podman[275102]: 2025-12-02 11:22:01.10890055 +0000 UTC m=+0.280188563 container remove d8f218d7189fdbb7bf7e5d36f2359654df4f9e4cd34184e71103a9548ffe1b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  2 06:22:01 np0005542249 systemd[1]: libpod-conmon-d8f218d7189fdbb7bf7e5d36f2359654df4f9e4cd34184e71103a9548ffe1b92.scope: Deactivated successfully.
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:01.124 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[7975c565-054f-4d03-bbb8-0a40cff855e3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed5:f11f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471389, 'tstamp': 471389}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275179, 'error': None, 'target': 'ovnmeta-29518419-c819-4471-8719-e72e25a6d1be', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:01.141 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[3a227827-2fda-45b3-b259-dee88c3f4dfb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap29518419-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:f1:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471389, 'reachable_time': 30325, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275182, 'error': None, 'target': 'ovnmeta-29518419-c819-4471-8719-e72e25a6d1be', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.147 254904 DEBUG nova.compute.manager [req-66f15418-07d1-4265-81b2-47b9831c8370 req-502a1023-e022-40a1-a630-08dbb11bbb5e 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Received event network-vif-plugged-37bb6586-c8c5-4107-9775-10964531e11e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.147 254904 DEBUG oslo_concurrency.lockutils [req-66f15418-07d1-4265-81b2-47b9831c8370 req-502a1023-e022-40a1-a630-08dbb11bbb5e 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.147 254904 DEBUG oslo_concurrency.lockutils [req-66f15418-07d1-4265-81b2-47b9831c8370 req-502a1023-e022-40a1-a630-08dbb11bbb5e 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.148 254904 DEBUG oslo_concurrency.lockutils [req-66f15418-07d1-4265-81b2-47b9831c8370 req-502a1023-e022-40a1-a630-08dbb11bbb5e 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.148 254904 DEBUG nova.compute.manager [req-66f15418-07d1-4265-81b2-47b9831c8370 req-502a1023-e022-40a1-a630-08dbb11bbb5e 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Processing event network-vif-plugged-37bb6586-c8c5-4107-9775-10964531e11e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:01.178 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[acf5a700-20ee-4db8-bbfb-eacc758f7761]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.194 254904 DEBUG os_brick.utils [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.195 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.215 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.215 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[0d2627a5-4d7a-43b8-b393-b7001a1ad330]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.217 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.230 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.231 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[44cba5a2-95d9-49b6-9c52-e0cc59380b6d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.235 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.248 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.249 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[74f75f69-e25e-43cd-a185-05849bdbd8f6]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.254 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[5afb1edb-a79b-4c5c-bbc4-68bb7168fc13]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.255 254904 DEBUG oslo_concurrency.processutils [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:01.271 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[742aebcd-6c3c-4d1a-8e8c-9da1f738d2c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:01.273 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29518419-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:01.274 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:01.274 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29518419-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:22:01 np0005542249 kernel: tap29518419-c0: entered promiscuous mode
Dec  2 06:22:01 np0005542249 NetworkManager[48987]: <info>  [1764674521.2775] manager: (tap29518419-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:01.283 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap29518419-c0, col_values=(('external_ids', {'iface-id': '32301a91-103a-44ac-b580-0ae264006e72'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:22:01 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:01Z|00105|binding|INFO|Releasing lport 32301a91-103a-44ac-b580-0ae264006e72 from this chassis (sb_readonly=0)
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:01.289 163757 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/29518419-c819-4471-8719-e72e25a6d1be.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/29518419-c819-4471-8719-e72e25a6d1be.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.290 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:01.293 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[3df4166a-f591-4e85-9d74-57dc040f4a3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:01.294 163757 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]: global
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]:    log         /dev/log local0 debug
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]:    log-tag     haproxy-metadata-proxy-29518419-c819-4471-8719-e72e25a6d1be
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]:    user        root
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]:    group       root
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]:    maxconn     1024
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]:    pidfile     /var/lib/neutron/external/pids/29518419-c819-4471-8719-e72e25a6d1be.pid.haproxy
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]:    daemon
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]: defaults
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]:    log global
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]:    mode http
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]:    option httplog
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]:    option dontlognull
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]:    option http-server-close
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]:    option forwardfor
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]:    retries                 3
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]:    timeout http-request    30s
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]:    timeout connect         30s
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]:    timeout client          32s
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]:    timeout server          32s
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]:    timeout http-keep-alive 30s
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]: listen listener
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]:    bind 169.254.169.254:80
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]:    http-request add-header X-OVN-Network-ID 29518419-c819-4471-8719-e72e25a6d1be
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 06:22:01 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:01.295 163757 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-29518419-c819-4471-8719-e72e25a6d1be', 'env', 'PROCESS_TAG=haproxy-29518419-c819-4471-8719-e72e25a6d1be', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/29518419-c819-4471-8719-e72e25a6d1be.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.297 254904 DEBUG oslo_concurrency.processutils [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] CMD "nvme version" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.300 254904 DEBUG os_brick.initiator.connectors.lightos [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.301 254904 DEBUG os_brick.initiator.connectors.lightos [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.301 254904 DEBUG os_brick.initiator.connectors.lightos [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.301 254904 DEBUG os_brick.utils [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] <== get_connector_properties: return (107ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.302 254904 DEBUG nova.virt.block_device [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Updating existing volume attachment record: ef25d768-cc2e-425a-ab6e-93c78cac1c67 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.309 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:01 np0005542249 podman[275200]: 2025-12-02 11:22:01.354554045 +0000 UTC m=+0.068633545 container create f40e7125aa93315c1073322ecfceba6452062ce4a3cb26fe2cc047728cea0b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:22:01 np0005542249 systemd[1]: Started libpod-conmon-f40e7125aa93315c1073322ecfceba6452062ce4a3cb26fe2cc047728cea0b37.scope.
Dec  2 06:22:01 np0005542249 podman[275200]: 2025-12-02 11:22:01.330390469 +0000 UTC m=+0.044469949 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:22:01 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:22:01 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf9ff8217343c724f894ebd19d32c09408265dd320be36fad2f29cbbbbb3f64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:22:01 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf9ff8217343c724f894ebd19d32c09408265dd320be36fad2f29cbbbbb3f64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:22:01 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf9ff8217343c724f894ebd19d32c09408265dd320be36fad2f29cbbbbb3f64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:22:01 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf9ff8217343c724f894ebd19d32c09408265dd320be36fad2f29cbbbbb3f64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:22:01 np0005542249 podman[275200]: 2025-12-02 11:22:01.468591951 +0000 UTC m=+0.182671431 container init f40e7125aa93315c1073322ecfceba6452062ce4a3cb26fe2cc047728cea0b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_easley, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  2 06:22:01 np0005542249 podman[275200]: 2025-12-02 11:22:01.478412769 +0000 UTC m=+0.192492239 container start f40e7125aa93315c1073322ecfceba6452062ce4a3cb26fe2cc047728cea0b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_easley, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  2 06:22:01 np0005542249 podman[275200]: 2025-12-02 11:22:01.482311031 +0000 UTC m=+0.196390511 container attach f40e7125aa93315c1073322ecfceba6452062ce4a3cb26fe2cc047728cea0b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.648 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674521.6483097, b2e17f69-4b2a-4abd-b738-61cd5813d48e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.649 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] VM Started (Lifecycle Event)#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.651 254904 DEBUG nova.compute.manager [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.656 254904 DEBUG nova.virt.libvirt.driver [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.660 254904 INFO nova.virt.libvirt.driver [-] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Instance spawned successfully.#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.661 254904 DEBUG nova.virt.libvirt.driver [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.674 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.679 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.684 254904 DEBUG nova.virt.libvirt.driver [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.684 254904 DEBUG nova.virt.libvirt.driver [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.685 254904 DEBUG nova.virt.libvirt.driver [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.685 254904 DEBUG nova.virt.libvirt.driver [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.685 254904 DEBUG nova.virt.libvirt.driver [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.686 254904 DEBUG nova.virt.libvirt.driver [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.698 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.699 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674521.6485333, b2e17f69-4b2a-4abd-b738-61cd5813d48e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.699 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] VM Paused (Lifecycle Event)#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.724 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.728 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674521.654767, b2e17f69-4b2a-4abd-b738-61cd5813d48e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.728 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] VM Resumed (Lifecycle Event)#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.761 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.764 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:22:01 np0005542249 podman[275289]: 2025-12-02 11:22:01.77533503 +0000 UTC m=+0.058108997 container create 8ffff86f6a81c33623b60c66ee322ec0d93ec5aec01ff5110ee233753fca0558 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29518419-c819-4471-8719-e72e25a6d1be, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.782 254904 INFO nova.compute.manager [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Took 6.45 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.782 254904 DEBUG nova.compute.manager [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.785 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:22:01 np0005542249 systemd[1]: Started libpod-conmon-8ffff86f6a81c33623b60c66ee322ec0d93ec5aec01ff5110ee233753fca0558.scope.
Dec  2 06:22:01 np0005542249 podman[275289]: 2025-12-02 11:22:01.744601263 +0000 UTC m=+0.027375260 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:22:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:22:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Dec  2 06:22:01 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:22:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.847 254904 INFO nova.compute.manager [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Took 7.38 seconds to build instance.#033[00m
Dec  2 06:22:01 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53e412ef9797afc1c6807d089bcae206eaf169095d7a460f9bab4d82117d7987/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 06:22:01 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Dec  2 06:22:01 np0005542249 nova_compute[254900]: 2025-12-02 11:22:01.864 254904 DEBUG oslo_concurrency.lockutils [None req-fa9e3975-6c43-4ae3-8de9-84915ad07136 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:01 np0005542249 podman[275289]: 2025-12-02 11:22:01.868766246 +0000 UTC m=+0.151540233 container init 8ffff86f6a81c33623b60c66ee322ec0d93ec5aec01ff5110ee233753fca0558 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29518419-c819-4471-8719-e72e25a6d1be, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:22:01 np0005542249 podman[275289]: 2025-12-02 11:22:01.87579501 +0000 UTC m=+0.158568977 container start 8ffff86f6a81c33623b60c66ee322ec0d93ec5aec01ff5110ee233753fca0558 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29518419-c819-4471-8719-e72e25a6d1be, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:22:01 np0005542249 neutron-haproxy-ovnmeta-29518419-c819-4471-8719-e72e25a6d1be[275305]: [NOTICE]   (275309) : New worker (275311) forked
Dec  2 06:22:01 np0005542249 neutron-haproxy-ovnmeta-29518419-c819-4471-8719-e72e25a6d1be[275305]: [NOTICE]   (275309) : Loading success.
Dec  2 06:22:01 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1224: 321 pgs: 321 active+clean; 214 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 3.5 MiB/s wr, 98 op/s
Dec  2 06:22:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:22:02 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4254934054' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:22:02 np0005542249 admiring_easley[275220]: {
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:    "0": [
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:        {
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "devices": [
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "/dev/loop3"
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            ],
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "lv_name": "ceph_lv0",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "lv_size": "21470642176",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "name": "ceph_lv0",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "tags": {
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.cluster_name": "ceph",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.crush_device_class": "",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.encrypted": "0",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.osd_id": "0",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.type": "block",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.vdo": "0"
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            },
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "type": "block",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "vg_name": "ceph_vg0"
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:        }
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:    ],
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:    "1": [
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:        {
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "devices": [
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "/dev/loop4"
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            ],
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "lv_name": "ceph_lv1",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "lv_size": "21470642176",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "name": "ceph_lv1",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "tags": {
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.cluster_name": "ceph",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.crush_device_class": "",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.encrypted": "0",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.osd_id": "1",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.type": "block",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.vdo": "0"
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            },
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "type": "block",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "vg_name": "ceph_vg1"
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:        }
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:    ],
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:    "2": [
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:        {
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "devices": [
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "/dev/loop5"
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            ],
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "lv_name": "ceph_lv2",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "lv_size": "21470642176",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "name": "ceph_lv2",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "tags": {
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.cluster_name": "ceph",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.crush_device_class": "",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.encrypted": "0",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.osd_id": "2",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.type": "block",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:                "ceph.vdo": "0"
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            },
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "type": "block",
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:            "vg_name": "ceph_vg2"
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:        }
Dec  2 06:22:02 np0005542249 admiring_easley[275220]:    ]
Dec  2 06:22:02 np0005542249 admiring_easley[275220]: }
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.318 254904 DEBUG os_brick.encryptors [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Using volume encryption metadata '{'encryption_key_id': 'd0beed2f-b4db-42ac-a101-5103997b1cd7', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-18882187-066e-4edc-aa65-e42faf84995b', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '18882187-066e-4edc-aa65-e42faf84995b', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '415ead6c-ffe0-4426-a145-1cb487cfa30f', 'attached_at': '', 'detached_at': '', 'volume_id': '18882187-066e-4edc-aa65-e42faf84995b', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.333 254904 DEBUG barbicanclient.client [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.354 254904 DEBUG barbicanclient.v1.secrets [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/d0beed2f-b4db-42ac-a101-5103997b1cd7 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.355 254904 INFO barbicanclient.base [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Calculated Secrets uuid ref: secrets/d0beed2f-b4db-42ac-a101-5103997b1cd7#033[00m
Dec  2 06:22:02 np0005542249 systemd[1]: libpod-f40e7125aa93315c1073322ecfceba6452062ce4a3cb26fe2cc047728cea0b37.scope: Deactivated successfully.
Dec  2 06:22:02 np0005542249 podman[275200]: 2025-12-02 11:22:02.36104323 +0000 UTC m=+1.075122780 container died f40e7125aa93315c1073322ecfceba6452062ce4a3cb26fe2cc047728cea0b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_easley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.387 254904 DEBUG barbicanclient.client [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.388 254904 INFO barbicanclient.base [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Calculated Secrets uuid ref: secrets/d0beed2f-b4db-42ac-a101-5103997b1cd7#033[00m
Dec  2 06:22:02 np0005542249 systemd[1]: var-lib-containers-storage-overlay-edf9ff8217343c724f894ebd19d32c09408265dd320be36fad2f29cbbbbb3f64-merged.mount: Deactivated successfully.
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.409 254904 DEBUG barbicanclient.client [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.410 254904 INFO barbicanclient.base [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Calculated Secrets uuid ref: secrets/d0beed2f-b4db-42ac-a101-5103997b1cd7#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.430 254904 DEBUG barbicanclient.client [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.431 254904 INFO barbicanclient.base [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Calculated Secrets uuid ref: secrets/d0beed2f-b4db-42ac-a101-5103997b1cd7#033[00m
Dec  2 06:22:02 np0005542249 podman[275200]: 2025-12-02 11:22:02.436683258 +0000 UTC m=+1.150762718 container remove f40e7125aa93315c1073322ecfceba6452062ce4a3cb26fe2cc047728cea0b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  2 06:22:02 np0005542249 systemd[1]: libpod-conmon-f40e7125aa93315c1073322ecfceba6452062ce4a3cb26fe2cc047728cea0b37.scope: Deactivated successfully.
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.469 254904 DEBUG barbicanclient.client [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.470 254904 INFO barbicanclient.base [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Calculated Secrets uuid ref: secrets/d0beed2f-b4db-42ac-a101-5103997b1cd7#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.494 254904 DEBUG barbicanclient.client [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.495 254904 INFO barbicanclient.base [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Calculated Secrets uuid ref: secrets/d0beed2f-b4db-42ac-a101-5103997b1cd7#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.506 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764674507.5048826, 5771f44f-b324-4d11-b452-a2c22e990c48 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.507 254904 INFO nova.compute.manager [-] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] VM Stopped (Lifecycle Event)#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.549 254904 DEBUG barbicanclient.client [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.549 254904 INFO barbicanclient.base [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Calculated Secrets uuid ref: secrets/d0beed2f-b4db-42ac-a101-5103997b1cd7#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.563 254904 DEBUG nova.compute.manager [None req-474d5463-7693-4a68-882d-3b5c5da03b98 - - - - - -] [instance: 5771f44f-b324-4d11-b452-a2c22e990c48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.584 254904 DEBUG barbicanclient.client [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.584 254904 INFO barbicanclient.base [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Calculated Secrets uuid ref: secrets/d0beed2f-b4db-42ac-a101-5103997b1cd7#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.608 254904 DEBUG barbicanclient.client [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.608 254904 INFO barbicanclient.base [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Calculated Secrets uuid ref: secrets/d0beed2f-b4db-42ac-a101-5103997b1cd7#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.629 254904 DEBUG barbicanclient.client [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.630 254904 INFO barbicanclient.base [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Calculated Secrets uuid ref: secrets/d0beed2f-b4db-42ac-a101-5103997b1cd7#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.660 254904 DEBUG barbicanclient.client [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.660 254904 INFO barbicanclient.base [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Calculated Secrets uuid ref: secrets/d0beed2f-b4db-42ac-a101-5103997b1cd7#033[00m
Dec  2 06:22:02 np0005542249 podman[275360]: 2025-12-02 11:22:02.686322757 +0000 UTC m=+0.080678541 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd)
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.698 254904 DEBUG barbicanclient.client [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.698 254904 INFO barbicanclient.base [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Calculated Secrets uuid ref: secrets/d0beed2f-b4db-42ac-a101-5103997b1cd7#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.730 254904 DEBUG barbicanclient.client [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.731 254904 INFO barbicanclient.base [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Calculated Secrets uuid ref: secrets/d0beed2f-b4db-42ac-a101-5103997b1cd7#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.755 254904 DEBUG barbicanclient.client [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.756 254904 INFO barbicanclient.base [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Calculated Secrets uuid ref: secrets/d0beed2f-b4db-42ac-a101-5103997b1cd7#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.776 254904 DEBUG barbicanclient.client [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.777 254904 INFO barbicanclient.base [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Calculated Secrets uuid ref: secrets/d0beed2f-b4db-42ac-a101-5103997b1cd7#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.800 254904 DEBUG barbicanclient.client [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.801 254904 DEBUG nova.virt.libvirt.host [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Secret XML: <secret ephemeral="no" private="no">
Dec  2 06:22:02 np0005542249 nova_compute[254900]:  <usage type="volume">
Dec  2 06:22:02 np0005542249 nova_compute[254900]:    <volume>18882187-066e-4edc-aa65-e42faf84995b</volume>
Dec  2 06:22:02 np0005542249 nova_compute[254900]:  </usage>
Dec  2 06:22:02 np0005542249 nova_compute[254900]: </secret>
Dec  2 06:22:02 np0005542249 nova_compute[254900]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.812 254904 DEBUG nova.objects.instance [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lazy-loading 'flavor' on Instance uuid 415ead6c-ffe0-4426-a145-1cb487cfa30f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.839 254904 DEBUG nova.virt.libvirt.driver [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Attempting to attach volume 18882187-066e-4edc-aa65-e42faf84995b with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Dec  2 06:22:02 np0005542249 nova_compute[254900]: 2025-12-02 11:22:02.843 254904 DEBUG nova.virt.libvirt.guest [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] attach device xml: <disk type="network" device="disk">
Dec  2 06:22:02 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:22:02 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-18882187-066e-4edc-aa65-e42faf84995b">
Dec  2 06:22:02 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:22:02 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:22:02 np0005542249 nova_compute[254900]:  <auth username="openstack">
Dec  2 06:22:02 np0005542249 nova_compute[254900]:    <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:22:02 np0005542249 nova_compute[254900]:  </auth>
Dec  2 06:22:02 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:22:02 np0005542249 nova_compute[254900]:  <serial>18882187-066e-4edc-aa65-e42faf84995b</serial>
Dec  2 06:22:02 np0005542249 nova_compute[254900]:  <encryption format="luks">
Dec  2 06:22:02 np0005542249 nova_compute[254900]:    <secret type="passphrase" uuid="aafa6098-67c8-4c6a-9cf7-05b84c93b291"/>
Dec  2 06:22:02 np0005542249 nova_compute[254900]:  </encryption>
Dec  2 06:22:02 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:22:02 np0005542249 nova_compute[254900]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  2 06:22:03 np0005542249 nova_compute[254900]: 2025-12-02 11:22:03.068 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:03 np0005542249 podman[275517]: 2025-12-02 11:22:03.220791591 +0000 UTC m=+0.047295684 container create deb2ec61274224008dd2869ada44cb736cff6a3f2312a68a28eb5b5d4b45e51f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  2 06:22:03 np0005542249 nova_compute[254900]: 2025-12-02 11:22:03.249 254904 DEBUG nova.compute.manager [req-42a90fd5-2050-482a-81c9-3c2430144ee2 req-d13414b0-4130-43c9-b4fa-73cbef0e97c0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Received event network-vif-plugged-37bb6586-c8c5-4107-9775-10964531e11e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:22:03 np0005542249 nova_compute[254900]: 2025-12-02 11:22:03.250 254904 DEBUG oslo_concurrency.lockutils [req-42a90fd5-2050-482a-81c9-3c2430144ee2 req-d13414b0-4130-43c9-b4fa-73cbef0e97c0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:03 np0005542249 nova_compute[254900]: 2025-12-02 11:22:03.250 254904 DEBUG oslo_concurrency.lockutils [req-42a90fd5-2050-482a-81c9-3c2430144ee2 req-d13414b0-4130-43c9-b4fa-73cbef0e97c0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:03 np0005542249 nova_compute[254900]: 2025-12-02 11:22:03.251 254904 DEBUG oslo_concurrency.lockutils [req-42a90fd5-2050-482a-81c9-3c2430144ee2 req-d13414b0-4130-43c9-b4fa-73cbef0e97c0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:03 np0005542249 nova_compute[254900]: 2025-12-02 11:22:03.251 254904 DEBUG nova.compute.manager [req-42a90fd5-2050-482a-81c9-3c2430144ee2 req-d13414b0-4130-43c9-b4fa-73cbef0e97c0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] No waiting events found dispatching network-vif-plugged-37bb6586-c8c5-4107-9775-10964531e11e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:22:03 np0005542249 nova_compute[254900]: 2025-12-02 11:22:03.252 254904 WARNING nova.compute.manager [req-42a90fd5-2050-482a-81c9-3c2430144ee2 req-d13414b0-4130-43c9-b4fa-73cbef0e97c0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Received unexpected event network-vif-plugged-37bb6586-c8c5-4107-9775-10964531e11e for instance with vm_state active and task_state None.#033[00m
Dec  2 06:22:03 np0005542249 systemd[1]: Started libpod-conmon-deb2ec61274224008dd2869ada44cb736cff6a3f2312a68a28eb5b5d4b45e51f.scope.
Dec  2 06:22:03 np0005542249 podman[275517]: 2025-12-02 11:22:03.197789056 +0000 UTC m=+0.024293139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:22:03 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:22:03 np0005542249 podman[275517]: 2025-12-02 11:22:03.314513623 +0000 UTC m=+0.141017686 container init deb2ec61274224008dd2869ada44cb736cff6a3f2312a68a28eb5b5d4b45e51f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jepsen, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  2 06:22:03 np0005542249 podman[275517]: 2025-12-02 11:22:03.322599355 +0000 UTC m=+0.149103458 container start deb2ec61274224008dd2869ada44cb736cff6a3f2312a68a28eb5b5d4b45e51f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  2 06:22:03 np0005542249 podman[275517]: 2025-12-02 11:22:03.326845097 +0000 UTC m=+0.153349180 container attach deb2ec61274224008dd2869ada44cb736cff6a3f2312a68a28eb5b5d4b45e51f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jepsen, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:22:03 np0005542249 frosty_jepsen[275534]: 167 167
Dec  2 06:22:03 np0005542249 systemd[1]: libpod-deb2ec61274224008dd2869ada44cb736cff6a3f2312a68a28eb5b5d4b45e51f.scope: Deactivated successfully.
Dec  2 06:22:03 np0005542249 conmon[275534]: conmon deb2ec61274224008dd2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-deb2ec61274224008dd2869ada44cb736cff6a3f2312a68a28eb5b5d4b45e51f.scope/container/memory.events
Dec  2 06:22:03 np0005542249 podman[275517]: 2025-12-02 11:22:03.330981286 +0000 UTC m=+0.157485379 container died deb2ec61274224008dd2869ada44cb736cff6a3f2312a68a28eb5b5d4b45e51f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jepsen, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:22:03 np0005542249 systemd[1]: var-lib-containers-storage-overlay-c4d704d2aeb684a50b01d63735701120c141df80e6e237cf4ea8ab96173bb144-merged.mount: Deactivated successfully.
Dec  2 06:22:03 np0005542249 podman[275517]: 2025-12-02 11:22:03.371177902 +0000 UTC m=+0.197681975 container remove deb2ec61274224008dd2869ada44cb736cff6a3f2312a68a28eb5b5d4b45e51f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jepsen, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  2 06:22:03 np0005542249 systemd[1]: libpod-conmon-deb2ec61274224008dd2869ada44cb736cff6a3f2312a68a28eb5b5d4b45e51f.scope: Deactivated successfully.
Dec  2 06:22:03 np0005542249 nova_compute[254900]: 2025-12-02 11:22:03.498 254904 DEBUG nova.compute.manager [req-49f2d846-ba49-41e9-b598-337f2f0e955e req-01c95455-7937-4f03-a476-ff649e4f2145 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Received event network-changed-37bb6586-c8c5-4107-9775-10964531e11e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:22:03 np0005542249 nova_compute[254900]: 2025-12-02 11:22:03.499 254904 DEBUG nova.compute.manager [req-49f2d846-ba49-41e9-b598-337f2f0e955e req-01c95455-7937-4f03-a476-ff649e4f2145 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Refreshing instance network info cache due to event network-changed-37bb6586-c8c5-4107-9775-10964531e11e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:22:03 np0005542249 nova_compute[254900]: 2025-12-02 11:22:03.500 254904 DEBUG oslo_concurrency.lockutils [req-49f2d846-ba49-41e9-b598-337f2f0e955e req-01c95455-7937-4f03-a476-ff649e4f2145 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-b2e17f69-4b2a-4abd-b738-61cd5813d48e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:22:03 np0005542249 nova_compute[254900]: 2025-12-02 11:22:03.501 254904 DEBUG oslo_concurrency.lockutils [req-49f2d846-ba49-41e9-b598-337f2f0e955e req-01c95455-7937-4f03-a476-ff649e4f2145 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-b2e17f69-4b2a-4abd-b738-61cd5813d48e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:22:03 np0005542249 nova_compute[254900]: 2025-12-02 11:22:03.501 254904 DEBUG nova.network.neutron [req-49f2d846-ba49-41e9-b598-337f2f0e955e req-01c95455-7937-4f03-a476-ff649e4f2145 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Refreshing network info cache for port 37bb6586-c8c5-4107-9775-10964531e11e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:22:03 np0005542249 nova_compute[254900]: 2025-12-02 11:22:03.537 254904 DEBUG oslo_concurrency.lockutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Acquiring lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:03 np0005542249 nova_compute[254900]: 2025-12-02 11:22:03.538 254904 DEBUG oslo_concurrency.lockutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:03 np0005542249 nova_compute[254900]: 2025-12-02 11:22:03.555 254904 DEBUG nova.compute.manager [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 06:22:03 np0005542249 podman[275560]: 2025-12-02 11:22:03.582977717 +0000 UTC m=+0.065549934 container create d0dbc16f8ef45f49cad7e16330c59bcbbe9bd6fd5fdb61afaf0938ae39949822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bell, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Dec  2 06:22:03 np0005542249 podman[275560]: 2025-12-02 11:22:03.559984963 +0000 UTC m=+0.042557220 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:22:03 np0005542249 nova_compute[254900]: 2025-12-02 11:22:03.667 254904 DEBUG oslo_concurrency.lockutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:03 np0005542249 nova_compute[254900]: 2025-12-02 11:22:03.668 254904 DEBUG oslo_concurrency.lockutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:03 np0005542249 systemd[1]: Started libpod-conmon-d0dbc16f8ef45f49cad7e16330c59bcbbe9bd6fd5fdb61afaf0938ae39949822.scope.
Dec  2 06:22:03 np0005542249 nova_compute[254900]: 2025-12-02 11:22:03.684 254904 DEBUG nova.virt.hardware [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 06:22:03 np0005542249 nova_compute[254900]: 2025-12-02 11:22:03.685 254904 INFO nova.compute.claims [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 06:22:03 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:22:03 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d644f00854615cc421e2ccdb43eb297cbd7df751914c12fcbcdee10d7d82c8d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:22:03 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d644f00854615cc421e2ccdb43eb297cbd7df751914c12fcbcdee10d7d82c8d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:22:03 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d644f00854615cc421e2ccdb43eb297cbd7df751914c12fcbcdee10d7d82c8d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:22:03 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d644f00854615cc421e2ccdb43eb297cbd7df751914c12fcbcdee10d7d82c8d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:22:03 np0005542249 podman[275560]: 2025-12-02 11:22:03.729922598 +0000 UTC m=+0.212494865 container init d0dbc16f8ef45f49cad7e16330c59bcbbe9bd6fd5fdb61afaf0938ae39949822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bell, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:22:03 np0005542249 podman[275560]: 2025-12-02 11:22:03.741335998 +0000 UTC m=+0.223908225 container start d0dbc16f8ef45f49cad7e16330c59bcbbe9bd6fd5fdb61afaf0938ae39949822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  2 06:22:03 np0005542249 podman[275560]: 2025-12-02 11:22:03.745438066 +0000 UTC m=+0.228010293 container attach d0dbc16f8ef45f49cad7e16330c59bcbbe9bd6fd5fdb61afaf0938ae39949822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  2 06:22:03 np0005542249 nova_compute[254900]: 2025-12-02 11:22:03.887 254904 DEBUG oslo_concurrency.processutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 06:22:03 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1225: 321 pgs: 321 active+clean; 214 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 577 KiB/s rd, 2.7 MiB/s wr, 121 op/s
Dec  2 06:22:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:22:04 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1545554694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:22:04 np0005542249 nova_compute[254900]: 2025-12-02 11:22:04.392 254904 DEBUG oslo_concurrency.processutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:22:04 np0005542249 nova_compute[254900]: 2025-12-02 11:22:04.402 254904 DEBUG nova.compute.provider_tree [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  2 06:22:04 np0005542249 nova_compute[254900]: 2025-12-02 11:22:04.437 254904 DEBUG nova.scheduler.client.report [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  2 06:22:04 np0005542249 nova_compute[254900]: 2025-12-02 11:22:04.472 254904 DEBUG oslo_concurrency.lockutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:22:04 np0005542249 nova_compute[254900]: 2025-12-02 11:22:04.473 254904 DEBUG nova.compute.manager [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  2 06:22:04 np0005542249 nova_compute[254900]: 2025-12-02 11:22:04.543 254904 DEBUG nova.compute.manager [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  2 06:22:04 np0005542249 nova_compute[254900]: 2025-12-02 11:22:04.544 254904 DEBUG nova.network.neutron [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  2 06:22:04 np0005542249 nova_compute[254900]: 2025-12-02 11:22:04.575 254904 INFO nova.virt.libvirt.driver [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  2 06:22:04 np0005542249 nova_compute[254900]: 2025-12-02 11:22:04.606 254904 DEBUG nova.compute.manager [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  2 06:22:04 np0005542249 nova_compute[254900]: 2025-12-02 11:22:04.719 254904 DEBUG nova.compute.manager [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  2 06:22:04 np0005542249 nova_compute[254900]: 2025-12-02 11:22:04.721 254904 DEBUG nova.virt.libvirt.driver [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  2 06:22:04 np0005542249 nova_compute[254900]: 2025-12-02 11:22:04.723 254904 INFO nova.virt.libvirt.driver [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Creating image(s)
Dec  2 06:22:04 np0005542249 nova_compute[254900]: 2025-12-02 11:22:04.765 254904 DEBUG nova.storage.rbd_utils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] rbd image d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  2 06:22:04 np0005542249 nifty_bell[275576]: {
Dec  2 06:22:04 np0005542249 nifty_bell[275576]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:22:04 np0005542249 nifty_bell[275576]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:22:04 np0005542249 nifty_bell[275576]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:22:04 np0005542249 nifty_bell[275576]:        "osd_id": 0,
Dec  2 06:22:04 np0005542249 nifty_bell[275576]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:22:04 np0005542249 nifty_bell[275576]:        "type": "bluestore"
Dec  2 06:22:04 np0005542249 nifty_bell[275576]:    },
Dec  2 06:22:04 np0005542249 nifty_bell[275576]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:22:04 np0005542249 nifty_bell[275576]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:22:04 np0005542249 nifty_bell[275576]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:22:04 np0005542249 nifty_bell[275576]:        "osd_id": 2,
Dec  2 06:22:04 np0005542249 nifty_bell[275576]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:22:04 np0005542249 nifty_bell[275576]:        "type": "bluestore"
Dec  2 06:22:04 np0005542249 nifty_bell[275576]:    },
Dec  2 06:22:04 np0005542249 nifty_bell[275576]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:22:04 np0005542249 nifty_bell[275576]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:22:04 np0005542249 nifty_bell[275576]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:22:04 np0005542249 nifty_bell[275576]:        "osd_id": 1,
Dec  2 06:22:04 np0005542249 nifty_bell[275576]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:22:04 np0005542249 nifty_bell[275576]:        "type": "bluestore"
Dec  2 06:22:04 np0005542249 nifty_bell[275576]:    }
Dec  2 06:22:04 np0005542249 nifty_bell[275576]: }
Dec  2 06:22:04 np0005542249 nova_compute[254900]: 2025-12-02 11:22:04.803 254904 DEBUG nova.storage.rbd_utils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] rbd image d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  2 06:22:04 np0005542249 nova_compute[254900]: 2025-12-02 11:22:04.842 254904 DEBUG nova.storage.rbd_utils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] rbd image d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  2 06:22:04 np0005542249 nova_compute[254900]: 2025-12-02 11:22:04.847 254904 DEBUG oslo_concurrency.processutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 06:22:04 np0005542249 systemd[1]: libpod-d0dbc16f8ef45f49cad7e16330c59bcbbe9bd6fd5fdb61afaf0938ae39949822.scope: Deactivated successfully.
Dec  2 06:22:04 np0005542249 systemd[1]: libpod-d0dbc16f8ef45f49cad7e16330c59bcbbe9bd6fd5fdb61afaf0938ae39949822.scope: Consumed 1.111s CPU time.
Dec  2 06:22:04 np0005542249 podman[275560]: 2025-12-02 11:22:04.85610278 +0000 UTC m=+1.338674987 container died d0dbc16f8ef45f49cad7e16330c59bcbbe9bd6fd5fdb61afaf0938ae39949822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:22:04 np0005542249 nova_compute[254900]: 2025-12-02 11:22:04.882 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:22:04 np0005542249 systemd[1]: var-lib-containers-storage-overlay-d644f00854615cc421e2ccdb43eb297cbd7df751914c12fcbcdee10d7d82c8d8-merged.mount: Deactivated successfully.
Dec  2 06:22:04 np0005542249 podman[275560]: 2025-12-02 11:22:04.920786299 +0000 UTC m=+1.403358506 container remove d0dbc16f8ef45f49cad7e16330c59bcbbe9bd6fd5fdb61afaf0938ae39949822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  2 06:22:04 np0005542249 nova_compute[254900]: 2025-12-02 11:22:04.955 254904 DEBUG oslo_concurrency.processutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:22:04 np0005542249 nova_compute[254900]: 2025-12-02 11:22:04.956 254904 DEBUG oslo_concurrency.lockutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Acquiring lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:22:04 np0005542249 nova_compute[254900]: 2025-12-02 11:22:04.957 254904 DEBUG oslo_concurrency.lockutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:22:04 np0005542249 nova_compute[254900]: 2025-12-02 11:22:04.958 254904 DEBUG oslo_concurrency.lockutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:22:04 np0005542249 systemd[1]: libpod-conmon-d0dbc16f8ef45f49cad7e16330c59bcbbe9bd6fd5fdb61afaf0938ae39949822.scope: Deactivated successfully.
Dec  2 06:22:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:22:04 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:22:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:22:04 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:22:04 np0005542249 nova_compute[254900]: 2025-12-02 11:22:04.995 254904 DEBUG nova.storage.rbd_utils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] rbd image d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  2 06:22:04 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev c02acae2-8a59-403a-9ef6-fb958ab931c0 does not exist
Dec  2 06:22:04 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 49fbc317-3452-49f6-bbe5-80eb16592dc3 does not exist
Dec  2 06:22:05 np0005542249 nova_compute[254900]: 2025-12-02 11:22:05.005 254904 DEBUG oslo_concurrency.processutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 06:22:05 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:22:05 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:22:05 np0005542249 nova_compute[254900]: 2025-12-02 11:22:05.234 254904 DEBUG nova.policy [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e0796090ff07418b99397a7f13f11633', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fff78a31f26746918caf04706b12b741', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  2 06:22:05 np0005542249 nova_compute[254900]: 2025-12-02 11:22:05.378 254904 DEBUG oslo_concurrency.processutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.373s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:22:05 np0005542249 nova_compute[254900]: 2025-12-02 11:22:05.448 254904 DEBUG nova.virt.libvirt.driver [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  2 06:22:05 np0005542249 nova_compute[254900]: 2025-12-02 11:22:05.449 254904 DEBUG nova.virt.libvirt.driver [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  2 06:22:05 np0005542249 nova_compute[254900]: 2025-12-02 11:22:05.449 254904 DEBUG nova.virt.libvirt.driver [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  2 06:22:05 np0005542249 nova_compute[254900]: 2025-12-02 11:22:05.449 254904 DEBUG nova.virt.libvirt.driver [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] No VIF found with MAC fa:16:3e:4c:b5:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec  2 06:22:05 np0005542249 nova_compute[254900]: 2025-12-02 11:22:05.460 254904 DEBUG nova.storage.rbd_utils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] resizing rbd image d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec  2 06:22:05 np0005542249 nova_compute[254900]: 2025-12-02 11:22:05.564 254904 DEBUG nova.objects.instance [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lazy-loading 'migration_context' on Instance uuid d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  2 06:22:05 np0005542249 nova_compute[254900]: 2025-12-02 11:22:05.577 254904 DEBUG nova.virt.libvirt.driver [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  2 06:22:05 np0005542249 nova_compute[254900]: 2025-12-02 11:22:05.577 254904 DEBUG nova.virt.libvirt.driver [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Ensure instance console log exists: /var/lib/nova/instances/d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  2 06:22:05 np0005542249 nova_compute[254900]: 2025-12-02 11:22:05.578 254904 DEBUG oslo_concurrency.lockutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:22:05 np0005542249 nova_compute[254900]: 2025-12-02 11:22:05.578 254904 DEBUG oslo_concurrency.lockutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:22:05 np0005542249 nova_compute[254900]: 2025-12-02 11:22:05.579 254904 DEBUG oslo_concurrency.lockutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:22:05 np0005542249 nova_compute[254900]: 2025-12-02 11:22:05.628 254904 DEBUG oslo_concurrency.lockutils [None req-0305f891-ad75-4362-900c-a4ea5ca090b7 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lock "415ead6c-ffe0-4426-a145-1cb487cfa30f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:22:05 np0005542249 nova_compute[254900]: 2025-12-02 11:22:05.951 254904 DEBUG nova.network.neutron [req-49f2d846-ba49-41e9-b598-337f2f0e955e req-01c95455-7937-4f03-a476-ff649e4f2145 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Updated VIF entry in instance network info cache for port 37bb6586-c8c5-4107-9775-10964531e11e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  2 06:22:05 np0005542249 nova_compute[254900]: 2025-12-02 11:22:05.952 254904 DEBUG nova.network.neutron [req-49f2d846-ba49-41e9-b598-337f2f0e955e req-01c95455-7937-4f03-a476-ff649e4f2145 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Updating instance_info_cache with network_info: [{"id": "37bb6586-c8c5-4107-9775-10964531e11e", "address": "fa:16:3e:d8:49:e3", "network": {"id": "29518419-c819-4471-8719-e72e25a6d1be", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-51492735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0fe4a9242c84683be1c02df04c2dbf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37bb6586-c8", "ovs_interfaceid": "37bb6586-c8c5-4107-9775-10964531e11e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  2 06:22:05 np0005542249 nova_compute[254900]: 2025-12-02 11:22:05.975 254904 DEBUG oslo_concurrency.lockutils [req-49f2d846-ba49-41e9-b598-337f2f0e955e req-01c95455-7937-4f03-a476-ff649e4f2145 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-b2e17f69-4b2a-4abd-b738-61cd5813d48e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  2 06:22:05 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1226: 321 pgs: 321 active+clean; 214 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 538 KiB/s rd, 1.3 MiB/s wr, 118 op/s
Dec  2 06:22:06 np0005542249 nova_compute[254900]: 2025-12-02 11:22:06.147 254904 DEBUG nova.network.neutron [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Successfully created port: 61896fe1-ac59-4781-93dd-2cf18f84208e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  2 06:22:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:22:06 np0005542249 nova_compute[254900]: 2025-12-02 11:22:06.969 254904 DEBUG nova.network.neutron [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Successfully updated port: 61896fe1-ac59-4781-93dd-2cf18f84208e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  2 06:22:06 np0005542249 nova_compute[254900]: 2025-12-02 11:22:06.984 254904 DEBUG oslo_concurrency.lockutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Acquiring lock "refresh_cache-d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  2 06:22:06 np0005542249 nova_compute[254900]: 2025-12-02 11:22:06.985 254904 DEBUG oslo_concurrency.lockutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Acquired lock "refresh_cache-d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  2 06:22:06 np0005542249 nova_compute[254900]: 2025-12-02 11:22:06.985 254904 DEBUG nova.network.neutron [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.044 254904 DEBUG nova.compute.manager [req-27550bb5-ec79-4be9-8985-2fc2bf2fd26c req-caa1656a-1744-49b9-803c-91c16561c1b1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Received event network-changed-61896fe1-ac59-4781-93dd-2cf18f84208e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.045 254904 DEBUG nova.compute.manager [req-27550bb5-ec79-4be9-8985-2fc2bf2fd26c req-caa1656a-1744-49b9-803c-91c16561c1b1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Refreshing instance network info cache due to event network-changed-61896fe1-ac59-4781-93dd-2cf18f84208e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.045 254904 DEBUG oslo_concurrency.lockutils [req-27550bb5-ec79-4be9-8985-2fc2bf2fd26c req-caa1656a-1744-49b9-803c-91c16561c1b1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.119 254904 DEBUG nova.network.neutron [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.144 254904 DEBUG oslo_concurrency.lockutils [None req-101543b8-8f53-42ea-9248-200e2f9498b3 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Acquiring lock "415ead6c-ffe0-4426-a145-1cb487cfa30f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.145 254904 DEBUG oslo_concurrency.lockutils [None req-101543b8-8f53-42ea-9248-200e2f9498b3 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lock "415ead6c-ffe0-4426-a145-1cb487cfa30f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.164 254904 INFO nova.compute.manager [None req-101543b8-8f53-42ea-9248-200e2f9498b3 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Detaching volume 18882187-066e-4edc-aa65-e42faf84995b#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.286 254904 INFO nova.virt.block_device [None req-101543b8-8f53-42ea-9248-200e2f9498b3 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Attempting to driver detach volume 18882187-066e-4edc-aa65-e42faf84995b from mountpoint /dev/vdb#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.401 254904 DEBUG os_brick.encryptors [None req-101543b8-8f53-42ea-9248-200e2f9498b3 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Using volume encryption metadata '{'encryption_key_id': 'd0beed2f-b4db-42ac-a101-5103997b1cd7', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-18882187-066e-4edc-aa65-e42faf84995b', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '18882187-066e-4edc-aa65-e42faf84995b', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '415ead6c-ffe0-4426-a145-1cb487cfa30f', 'attached_at': '', 'detached_at': '', 'volume_id': '18882187-066e-4edc-aa65-e42faf84995b', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.413 254904 DEBUG nova.virt.libvirt.driver [None req-101543b8-8f53-42ea-9248-200e2f9498b3 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Attempting to detach device vdb from instance 415ead6c-ffe0-4426-a145-1cb487cfa30f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.414 254904 DEBUG nova.virt.libvirt.guest [None req-101543b8-8f53-42ea-9248-200e2f9498b3 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] detach device xml: <disk type="network" device="disk">
Dec  2 06:22:07 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:22:07 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-18882187-066e-4edc-aa65-e42faf84995b">
Dec  2 06:22:07 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:22:07 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:22:07 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:22:07 np0005542249 nova_compute[254900]:  <serial>18882187-066e-4edc-aa65-e42faf84995b</serial>
Dec  2 06:22:07 np0005542249 nova_compute[254900]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  2 06:22:07 np0005542249 nova_compute[254900]:  <encryption format="luks">
Dec  2 06:22:07 np0005542249 nova_compute[254900]:    <secret type="passphrase" uuid="aafa6098-67c8-4c6a-9cf7-05b84c93b291"/>
Dec  2 06:22:07 np0005542249 nova_compute[254900]:  </encryption>
Dec  2 06:22:07 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:22:07 np0005542249 nova_compute[254900]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.433 254904 INFO nova.virt.libvirt.driver [None req-101543b8-8f53-42ea-9248-200e2f9498b3 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Successfully detached device vdb from instance 415ead6c-ffe0-4426-a145-1cb487cfa30f from the persistent domain config.#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.434 254904 DEBUG nova.virt.libvirt.driver [None req-101543b8-8f53-42ea-9248-200e2f9498b3 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 415ead6c-ffe0-4426-a145-1cb487cfa30f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.435 254904 DEBUG nova.virt.libvirt.guest [None req-101543b8-8f53-42ea-9248-200e2f9498b3 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] detach device xml: <disk type="network" device="disk">
Dec  2 06:22:07 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:22:07 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-18882187-066e-4edc-aa65-e42faf84995b">
Dec  2 06:22:07 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:22:07 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:22:07 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:22:07 np0005542249 nova_compute[254900]:  <serial>18882187-066e-4edc-aa65-e42faf84995b</serial>
Dec  2 06:22:07 np0005542249 nova_compute[254900]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  2 06:22:07 np0005542249 nova_compute[254900]:  <encryption format="luks">
Dec  2 06:22:07 np0005542249 nova_compute[254900]:    <secret type="passphrase" uuid="aafa6098-67c8-4c6a-9cf7-05b84c93b291"/>
Dec  2 06:22:07 np0005542249 nova_compute[254900]:  </encryption>
Dec  2 06:22:07 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:22:07 np0005542249 nova_compute[254900]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.496 254904 DEBUG nova.virt.libvirt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Received event <DeviceRemovedEvent: 1764674527.4965806, 415ead6c-ffe0-4426-a145-1cb487cfa30f => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.498 254904 DEBUG nova.virt.libvirt.driver [None req-101543b8-8f53-42ea-9248-200e2f9498b3 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 415ead6c-ffe0-4426-a145-1cb487cfa30f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.500 254904 INFO nova.virt.libvirt.driver [None req-101543b8-8f53-42ea-9248-200e2f9498b3 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Successfully detached device vdb from instance 415ead6c-ffe0-4426-a145-1cb487cfa30f from the live domain config.#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.683 254904 DEBUG nova.objects.instance [None req-101543b8-8f53-42ea-9248-200e2f9498b3 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lazy-loading 'flavor' on Instance uuid 415ead6c-ffe0-4426-a145-1cb487cfa30f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.731 254904 DEBUG oslo_concurrency.lockutils [None req-101543b8-8f53-42ea-9248-200e2f9498b3 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lock "415ead6c-ffe0-4426-a145-1cb487cfa30f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.779 254904 DEBUG nova.network.neutron [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Updating instance_info_cache with network_info: [{"id": "61896fe1-ac59-4781-93dd-2cf18f84208e", "address": "fa:16:3e:9c:35:f4", "network": {"id": "df73b9ab-de8d-40fa-9bf0-aa773bb32d3a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-950743948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff78a31f26746918caf04706b12b741", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61896fe1-ac", "ovs_interfaceid": "61896fe1-ac59-4781-93dd-2cf18f84208e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.802 254904 DEBUG oslo_concurrency.lockutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Releasing lock "refresh_cache-d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.803 254904 DEBUG nova.compute.manager [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Instance network_info: |[{"id": "61896fe1-ac59-4781-93dd-2cf18f84208e", "address": "fa:16:3e:9c:35:f4", "network": {"id": "df73b9ab-de8d-40fa-9bf0-aa773bb32d3a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-950743948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff78a31f26746918caf04706b12b741", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61896fe1-ac", "ovs_interfaceid": "61896fe1-ac59-4781-93dd-2cf18f84208e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.803 254904 DEBUG oslo_concurrency.lockutils [req-27550bb5-ec79-4be9-8985-2fc2bf2fd26c req-caa1656a-1744-49b9-803c-91c16561c1b1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.804 254904 DEBUG nova.network.neutron [req-27550bb5-ec79-4be9-8985-2fc2bf2fd26c req-caa1656a-1744-49b9-803c-91c16561c1b1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Refreshing network info cache for port 61896fe1-ac59-4781-93dd-2cf18f84208e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.807 254904 DEBUG nova.virt.libvirt.driver [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Start _get_guest_xml network_info=[{"id": "61896fe1-ac59-4781-93dd-2cf18f84208e", "address": "fa:16:3e:9c:35:f4", "network": {"id": "df73b9ab-de8d-40fa-9bf0-aa773bb32d3a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-950743948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff78a31f26746918caf04706b12b741", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61896fe1-ac", "ovs_interfaceid": "61896fe1-ac59-4781-93dd-2cf18f84208e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'image_id': '5a40f66c-ab43-47dd-9880-e59f9fa2c60e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.817 254904 WARNING nova.virt.libvirt.driver [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.841 254904 DEBUG nova.virt.libvirt.host [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.842 254904 DEBUG nova.virt.libvirt.host [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.847 254904 DEBUG nova.virt.libvirt.host [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.849 254904 DEBUG nova.virt.libvirt.host [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.850 254904 DEBUG nova.virt.libvirt.driver [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.851 254904 DEBUG nova.virt.hardware [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.852 254904 DEBUG nova.virt.hardware [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.853 254904 DEBUG nova.virt.hardware [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.853 254904 DEBUG nova.virt.hardware [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.854 254904 DEBUG nova.virt.hardware [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.855 254904 DEBUG nova.virt.hardware [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.855 254904 DEBUG nova.virt.hardware [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.856 254904 DEBUG nova.virt.hardware [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.857 254904 DEBUG nova.virt.hardware [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.857 254904 DEBUG nova.virt.hardware [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.858 254904 DEBUG nova.virt.hardware [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:22:07 np0005542249 nova_compute[254900]: 2025-12-02 11:22:07.865 254904 DEBUG oslo_concurrency.processutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:22:07 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1227: 321 pgs: 321 active+clean; 255 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 213 op/s
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.071 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:22:08 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3371395565' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.319 254904 DEBUG oslo_concurrency.processutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.344 254904 DEBUG nova.storage.rbd_utils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] rbd image d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.351 254904 DEBUG oslo_concurrency.processutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:22:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:22:08 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3192373506' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.802 254904 DEBUG oslo_concurrency.processutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.806 254904 DEBUG nova.virt.libvirt.vif [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1393004341',display_name='tempest-VolumesSnapshotTestJSON-instance-1393004341',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1393004341',id=11,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNw3JnJx+HMOTZ+AvQ489m0NYUlKvWSOqBmE/ujnmHqXRtIDwzDnPElm55wip1dZOPbB+3JyWL1W2JL1GKvQzzrvd/rxRQ9vkYSZZJhGCCZAICgsfaxPQK5nW0pvFEebVA==',key_name='tempest-keypair-926102481',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fff78a31f26746918caf04706b12b741',ramdisk_id='',reservation_id='r-kh3z2ki5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-1610940554',owner_user_name='tempest-VolumesSnapshotTestJSON-1610940554-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:22:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e0796090ff07418b99397a7f13f11633',uuid=d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "61896fe1-ac59-4781-93dd-2cf18f84208e", "address": "fa:16:3e:9c:35:f4", "network": {"id": "df73b9ab-de8d-40fa-9bf0-aa773bb32d3a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-950743948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff78a31f26746918caf04706b12b741", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61896fe1-ac", "ovs_interfaceid": "61896fe1-ac59-4781-93dd-2cf18f84208e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.807 254904 DEBUG nova.network.os_vif_util [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Converting VIF {"id": "61896fe1-ac59-4781-93dd-2cf18f84208e", "address": "fa:16:3e:9c:35:f4", "network": {"id": "df73b9ab-de8d-40fa-9bf0-aa773bb32d3a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-950743948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff78a31f26746918caf04706b12b741", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61896fe1-ac", "ovs_interfaceid": "61896fe1-ac59-4781-93dd-2cf18f84208e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.810 254904 DEBUG nova.network.os_vif_util [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:35:f4,bridge_name='br-int',has_traffic_filtering=True,id=61896fe1-ac59-4781-93dd-2cf18f84208e,network=Network(df73b9ab-de8d-40fa-9bf0-aa773bb32d3a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61896fe1-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.813 254904 DEBUG nova.objects.instance [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lazy-loading 'pci_devices' on Instance uuid d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.842 254904 DEBUG nova.virt.libvirt.driver [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:22:08 np0005542249 nova_compute[254900]:  <uuid>d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7</uuid>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:  <name>instance-0000000b</name>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <nova:name>tempest-VolumesSnapshotTestJSON-instance-1393004341</nova:name>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:22:07</nova:creationTime>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:22:08 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:        <nova:user uuid="e0796090ff07418b99397a7f13f11633">tempest-VolumesSnapshotTestJSON-1610940554-project-member</nova:user>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:        <nova:project uuid="fff78a31f26746918caf04706b12b741">tempest-VolumesSnapshotTestJSON-1610940554</nova:project>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <nova:root type="image" uuid="5a40f66c-ab43-47dd-9880-e59f9fa2c60e"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:        <nova:port uuid="61896fe1-ac59-4781-93dd-2cf18f84208e">
Dec  2 06:22:08 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <entry name="serial">d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7</entry>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <entry name="uuid">d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7</entry>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7_disk">
Dec  2 06:22:08 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:22:08 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7_disk.config">
Dec  2 06:22:08 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:22:08 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:9c:35:f4"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <target dev="tap61896fe1-ac"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7/console.log" append="off"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:22:08 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:22:08 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:22:08 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:22:08 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.853 254904 DEBUG nova.compute.manager [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Preparing to wait for external event network-vif-plugged-61896fe1-ac59-4781-93dd-2cf18f84208e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.853 254904 DEBUG oslo_concurrency.lockutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Acquiring lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.853 254904 DEBUG oslo_concurrency.lockutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.854 254904 DEBUG oslo_concurrency.lockutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.854 254904 DEBUG nova.virt.libvirt.vif [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1393004341',display_name='tempest-VolumesSnapshotTestJSON-instance-1393004341',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1393004341',id=11,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNw3JnJx+HMOTZ+AvQ489m0NYUlKvWSOqBmE/ujnmHqXRtIDwzDnPElm55wip1dZOPbB+3JyWL1W2JL1GKvQzzrvd/rxRQ9vkYSZZJhGCCZAICgsfaxPQK5nW0pvFEebVA==',key_name='tempest-keypair-926102481',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fff78a31f26746918caf04706b12b741',ramdisk_id='',reservation_id='r-kh3z2ki5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-1610940554',owner_user_name='tempest-VolumesSnapshotTestJSON-1610940554-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:22:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e0796090ff07418b99397a7f13f11633',uuid=d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "61896fe1-ac59-4781-93dd-2cf18f84208e", "address": "fa:16:3e:9c:35:f4", "network": {"id": "df73b9ab-de8d-40fa-9bf0-aa773bb32d3a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-950743948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff78a31f26746918caf04706b12b741", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61896fe1-ac", "ovs_interfaceid": "61896fe1-ac59-4781-93dd-2cf18f84208e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.855 254904 DEBUG nova.network.os_vif_util [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Converting VIF {"id": "61896fe1-ac59-4781-93dd-2cf18f84208e", "address": "fa:16:3e:9c:35:f4", "network": {"id": "df73b9ab-de8d-40fa-9bf0-aa773bb32d3a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-950743948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff78a31f26746918caf04706b12b741", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61896fe1-ac", "ovs_interfaceid": "61896fe1-ac59-4781-93dd-2cf18f84208e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.855 254904 DEBUG nova.network.os_vif_util [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:35:f4,bridge_name='br-int',has_traffic_filtering=True,id=61896fe1-ac59-4781-93dd-2cf18f84208e,network=Network(df73b9ab-de8d-40fa-9bf0-aa773bb32d3a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61896fe1-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.856 254904 DEBUG os_vif [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:35:f4,bridge_name='br-int',has_traffic_filtering=True,id=61896fe1-ac59-4781-93dd-2cf18f84208e,network=Network(df73b9ab-de8d-40fa-9bf0-aa773bb32d3a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61896fe1-ac') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.857 254904 DEBUG oslo_concurrency.lockutils [None req-bdf51df4-6a72-46ad-826d-a6f87581f283 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Acquiring lock "415ead6c-ffe0-4426-a145-1cb487cfa30f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.858 254904 DEBUG oslo_concurrency.lockutils [None req-bdf51df4-6a72-46ad-826d-a6f87581f283 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lock "415ead6c-ffe0-4426-a145-1cb487cfa30f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.858 254904 DEBUG oslo_concurrency.lockutils [None req-bdf51df4-6a72-46ad-826d-a6f87581f283 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Acquiring lock "415ead6c-ffe0-4426-a145-1cb487cfa30f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.858 254904 DEBUG oslo_concurrency.lockutils [None req-bdf51df4-6a72-46ad-826d-a6f87581f283 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lock "415ead6c-ffe0-4426-a145-1cb487cfa30f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.859 254904 DEBUG oslo_concurrency.lockutils [None req-bdf51df4-6a72-46ad-826d-a6f87581f283 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lock "415ead6c-ffe0-4426-a145-1cb487cfa30f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.861 254904 INFO nova.compute.manager [None req-bdf51df4-6a72-46ad-826d-a6f87581f283 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Terminating instance#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.862 254904 DEBUG nova.compute.manager [None req-bdf51df4-6a72-46ad-826d-a6f87581f283 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.863 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.863 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.864 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.869 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.869 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61896fe1-ac, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.870 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap61896fe1-ac, col_values=(('external_ids', {'iface-id': '61896fe1-ac59-4781-93dd-2cf18f84208e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:35:f4', 'vm-uuid': 'd9cd12f2-b6b7-41d4-a74b-2a89172f2ce7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:22:08 np0005542249 NetworkManager[48987]: <info>  [1764674528.8728] manager: (tap61896fe1-ac): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.873 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.883 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.885 254904 INFO os_vif [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:35:f4,bridge_name='br-int',has_traffic_filtering=True,id=61896fe1-ac59-4781-93dd-2cf18f84208e,network=Network(df73b9ab-de8d-40fa-9bf0-aa773bb32d3a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61896fe1-ac')#033[00m
Dec  2 06:22:08 np0005542249 kernel: tap41345e4a-b8 (unregistering): left promiscuous mode
Dec  2 06:22:08 np0005542249 NetworkManager[48987]: <info>  [1764674528.9326] device (tap41345e4a-b8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:22:08 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:08Z|00106|binding|INFO|Releasing lport 41345e4a-b8a8-4ed2-80a8-e69289b0f8fb from this chassis (sb_readonly=0)
Dec  2 06:22:08 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:08Z|00107|binding|INFO|Setting lport 41345e4a-b8a8-4ed2-80a8-e69289b0f8fb down in Southbound
Dec  2 06:22:08 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:08Z|00108|binding|INFO|Removing iface tap41345e4a-b8 ovn-installed in OVS
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.950 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:08.954 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:b5:d0 10.100.0.8'], port_security=['fa:16:3e:4c:b5:d0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '415ead6c-ffe0-4426-a145-1cb487cfa30f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '401c4eb4c3ea4ca886484161dcd637b6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b4000e73-fb51-40ce-b6d7-d2ba9743ca07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.218'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df6d8bd2-f456-4c02-be9f-8241c1cd11ce, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=41345e4a-b8a8-4ed2-80a8-e69289b0f8fb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:22:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:08.956 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 41345e4a-b8a8-4ed2-80a8-e69289b0f8fb in datapath d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa unbound from our chassis#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.957 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:08.958 163757 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 06:22:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:08.960 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[c412dbeb-2270-4379-9a19-72af709ec17f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:08.960 163757 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa namespace which is not needed anymore#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.971 254904 DEBUG nova.virt.libvirt.driver [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.972 254904 DEBUG nova.virt.libvirt.driver [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.973 254904 DEBUG nova.virt.libvirt.driver [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] No VIF found with MAC fa:16:3e:9c:35:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:22:08 np0005542249 nova_compute[254900]: 2025-12-02 11:22:08.973 254904 INFO nova.virt.libvirt.driver [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Using config drive#033[00m
Dec  2 06:22:09 np0005542249 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Dec  2 06:22:09 np0005542249 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 19.055s CPU time.
Dec  2 06:22:09 np0005542249 systemd-machined[216222]: Machine qemu-9-instance-00000009 terminated.
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.013 254904 DEBUG nova.storage.rbd_utils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] rbd image d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.025 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.113 254904 INFO nova.virt.libvirt.driver [-] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Instance destroyed successfully.#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.114 254904 DEBUG nova.objects.instance [None req-bdf51df4-6a72-46ad-826d-a6f87581f283 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lazy-loading 'resources' on Instance uuid 415ead6c-ffe0-4426-a145-1cb487cfa30f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.126 254904 DEBUG nova.virt.libvirt.vif [None req-bdf51df4-6a72-46ad-826d-a6f87581f283 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:21:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-133415757',display_name='tempest-TestEncryptedCinderVolumes-server-133415757',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-133415757',id=9,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEXEYwVhOXVaKmb16qIGzsPPwQMmvRw3pYewNYOxWs1L+HL4RkXgCdhuRfSTGpysaJFWg3ygNTzcxEctMScVQQSoZ5a4dTSi84EU1UrsvSAbV2JnkgGl+JMtFLIFfJ1zOA==',key_name='tempest-keypair-1068411774',keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:21:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='401c4eb4c3ea4ca886484161dcd637b6',ramdisk_id='',reservation_id='r-xxeybq0j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-1842533919',owner_user_name='tempest-TestEncryptedCinderVolumes-1842533919-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:21:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='33395809f6bd4db1bf1ab3a67fdbc5d5',uuid=415ead6c-ffe0-4426-a145-1cb487cfa30f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41345e4a-b8a8-4ed2-80a8-e69289b0f8fb", "address": "fa:16:3e:4c:b5:d0", "network": {"id": "d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1780654330-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "401c4eb4c3ea4ca886484161dcd637b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41345e4a-b8", "ovs_interfaceid": "41345e4a-b8a8-4ed2-80a8-e69289b0f8fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.127 254904 DEBUG nova.network.os_vif_util [None req-bdf51df4-6a72-46ad-826d-a6f87581f283 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Converting VIF {"id": "41345e4a-b8a8-4ed2-80a8-e69289b0f8fb", "address": "fa:16:3e:4c:b5:d0", "network": {"id": "d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1780654330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "401c4eb4c3ea4ca886484161dcd637b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41345e4a-b8", "ovs_interfaceid": "41345e4a-b8a8-4ed2-80a8-e69289b0f8fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.127 254904 DEBUG nova.network.os_vif_util [None req-bdf51df4-6a72-46ad-826d-a6f87581f283 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4c:b5:d0,bridge_name='br-int',has_traffic_filtering=True,id=41345e4a-b8a8-4ed2-80a8-e69289b0f8fb,network=Network(d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41345e4a-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.128 254904 DEBUG os_vif [None req-bdf51df4-6a72-46ad-826d-a6f87581f283 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4c:b5:d0,bridge_name='br-int',has_traffic_filtering=True,id=41345e4a-b8a8-4ed2-80a8-e69289b0f8fb,network=Network(d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41345e4a-b8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.130 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.130 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41345e4a-b8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.132 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:22:09 np0005542249 neutron-haproxy-ovnmeta-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa[273998]: [NOTICE]   (274002) : haproxy version is 2.8.14-c23fe91
Dec  2 06:22:09 np0005542249 neutron-haproxy-ovnmeta-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa[273998]: [NOTICE]   (274002) : path to executable is /usr/sbin/haproxy
Dec  2 06:22:09 np0005542249 neutron-haproxy-ovnmeta-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa[273998]: [WARNING]  (274002) : Exiting Master process...
Dec  2 06:22:09 np0005542249 neutron-haproxy-ovnmeta-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa[273998]: [ALERT]    (274002) : Current worker (274004) exited with code 143 (Terminated)
Dec  2 06:22:09 np0005542249 neutron-haproxy-ovnmeta-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa[273998]: [WARNING]  (274002) : All workers exited. Exiting... (0)
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.136 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:09 np0005542249 systemd[1]: libpod-affd82d2e27ca8f17f1fa8604e76325c43d45e87a6303345d1603a4fe4b2ff92.scope: Deactivated successfully.
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.141 254904 INFO os_vif [None req-bdf51df4-6a72-46ad-826d-a6f87581f283 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4c:b5:d0,bridge_name='br-int',has_traffic_filtering=True,id=41345e4a-b8a8-4ed2-80a8-e69289b0f8fb,network=Network(d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41345e4a-b8')#033[00m
Dec  2 06:22:09 np0005542249 podman[275968]: 2025-12-02 11:22:09.145935106 +0000 UTC m=+0.059441383 container died affd82d2e27ca8f17f1fa8604e76325c43d45e87a6303345d1603a4fe4b2ff92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.163 254904 DEBUG nova.compute.manager [req-8bc172d4-bffb-481a-86f4-20d2f64e0003 req-3746e1c5-5bbd-4073-9d25-a5985e013bcd 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Received event network-vif-unplugged-41345e4a-b8a8-4ed2-80a8-e69289b0f8fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.164 254904 DEBUG oslo_concurrency.lockutils [req-8bc172d4-bffb-481a-86f4-20d2f64e0003 req-3746e1c5-5bbd-4073-9d25-a5985e013bcd 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "415ead6c-ffe0-4426-a145-1cb487cfa30f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.165 254904 DEBUG oslo_concurrency.lockutils [req-8bc172d4-bffb-481a-86f4-20d2f64e0003 req-3746e1c5-5bbd-4073-9d25-a5985e013bcd 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "415ead6c-ffe0-4426-a145-1cb487cfa30f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.165 254904 DEBUG oslo_concurrency.lockutils [req-8bc172d4-bffb-481a-86f4-20d2f64e0003 req-3746e1c5-5bbd-4073-9d25-a5985e013bcd 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "415ead6c-ffe0-4426-a145-1cb487cfa30f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.165 254904 DEBUG nova.compute.manager [req-8bc172d4-bffb-481a-86f4-20d2f64e0003 req-3746e1c5-5bbd-4073-9d25-a5985e013bcd 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] No waiting events found dispatching network-vif-unplugged-41345e4a-b8a8-4ed2-80a8-e69289b0f8fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.165 254904 DEBUG nova.compute.manager [req-8bc172d4-bffb-481a-86f4-20d2f64e0003 req-3746e1c5-5bbd-4073-9d25-a5985e013bcd 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Received event network-vif-unplugged-41345e4a-b8a8-4ed2-80a8-e69289b0f8fb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 06:22:09 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-affd82d2e27ca8f17f1fa8604e76325c43d45e87a6303345d1603a4fe4b2ff92-userdata-shm.mount: Deactivated successfully.
Dec  2 06:22:09 np0005542249 systemd[1]: var-lib-containers-storage-overlay-ffd20b40bd8a416c1eaf800b6b4678cdbf6ba9cccc0990cb4367f684ac89b651-merged.mount: Deactivated successfully.
Dec  2 06:22:09 np0005542249 podman[275968]: 2025-12-02 11:22:09.2008735 +0000 UTC m=+0.114379757 container cleanup affd82d2e27ca8f17f1fa8604e76325c43d45e87a6303345d1603a4fe4b2ff92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:22:09 np0005542249 systemd[1]: libpod-conmon-affd82d2e27ca8f17f1fa8604e76325c43d45e87a6303345d1603a4fe4b2ff92.scope: Deactivated successfully.
Dec  2 06:22:09 np0005542249 podman[276028]: 2025-12-02 11:22:09.276948269 +0000 UTC m=+0.049395789 container remove affd82d2e27ca8f17f1fa8604e76325c43d45e87a6303345d1603a4fe4b2ff92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 06:22:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:09.288 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[3b5e97c9-2221-4133-bd75-16dd828b6bfc]: (4, ('Tue Dec  2 11:22:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa (affd82d2e27ca8f17f1fa8604e76325c43d45e87a6303345d1603a4fe4b2ff92)\naffd82d2e27ca8f17f1fa8604e76325c43d45e87a6303345d1603a4fe4b2ff92\nTue Dec  2 11:22:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa (affd82d2e27ca8f17f1fa8604e76325c43d45e87a6303345d1603a4fe4b2ff92)\naffd82d2e27ca8f17f1fa8604e76325c43d45e87a6303345d1603a4fe4b2ff92\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:09.291 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[f907d1bb-85de-474d-bd16-47b63b1167e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:09.292 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5b1cd69-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.294 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:09 np0005542249 kernel: tapd5b1cd69-c0: left promiscuous mode
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.343 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:09.348 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[0531e885-9c51-4e66-985d-512317e0e782]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:09.363 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[f89eabc1-e8cf-4fca-b6b9-050e06b0702c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:09.365 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[aed7e36d-55f5-49d6-a7c8-5120d7dd8c3e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:09.387 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[e51e695a-1b60-4556-aab8-228063ac9fdf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467683, 'reachable_time': 43297, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276046, 'error': None, 'target': 'ovnmeta-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:09 np0005542249 systemd[1]: run-netns-ovnmeta\x2dd5b1cd69\x2dc4b4\x2d4153\x2dba26\x2d4c8fdd4ac8aa.mount: Deactivated successfully.
Dec  2 06:22:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:09.394 164036 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d5b1cd69-c4b4-4153-ba26-4c8fdd4ac8aa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 06:22:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:09.394 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[0658f792-f855-4907-bada-a95708fef720]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.472 254904 INFO nova.virt.libvirt.driver [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Creating config drive at /var/lib/nova/instances/d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7/disk.config#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.486 254904 DEBUG oslo_concurrency.processutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr1zh66is execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.556 254904 INFO nova.virt.libvirt.driver [None req-bdf51df4-6a72-46ad-826d-a6f87581f283 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Deleting instance files /var/lib/nova/instances/415ead6c-ffe0-4426-a145-1cb487cfa30f_del#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.557 254904 INFO nova.virt.libvirt.driver [None req-bdf51df4-6a72-46ad-826d-a6f87581f283 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Deletion of /var/lib/nova/instances/415ead6c-ffe0-4426-a145-1cb487cfa30f_del complete#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.584 254904 DEBUG nova.network.neutron [req-27550bb5-ec79-4be9-8985-2fc2bf2fd26c req-caa1656a-1744-49b9-803c-91c16561c1b1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Updated VIF entry in instance network info cache for port 61896fe1-ac59-4781-93dd-2cf18f84208e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.585 254904 DEBUG nova.network.neutron [req-27550bb5-ec79-4be9-8985-2fc2bf2fd26c req-caa1656a-1744-49b9-803c-91c16561c1b1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Updating instance_info_cache with network_info: [{"id": "61896fe1-ac59-4781-93dd-2cf18f84208e", "address": "fa:16:3e:9c:35:f4", "network": {"id": "df73b9ab-de8d-40fa-9bf0-aa773bb32d3a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-950743948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff78a31f26746918caf04706b12b741", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61896fe1-ac", "ovs_interfaceid": "61896fe1-ac59-4781-93dd-2cf18f84208e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.608 254904 DEBUG oslo_concurrency.lockutils [req-27550bb5-ec79-4be9-8985-2fc2bf2fd26c req-caa1656a-1744-49b9-803c-91c16561c1b1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.618 254904 INFO nova.compute.manager [None req-bdf51df4-6a72-46ad-826d-a6f87581f283 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.619 254904 DEBUG oslo.service.loopingcall [None req-bdf51df4-6a72-46ad-826d-a6f87581f283 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.620 254904 DEBUG oslo_concurrency.processutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr1zh66is" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.620 254904 DEBUG nova.compute.manager [-] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.620 254904 DEBUG nova.network.neutron [-] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.650 254904 DEBUG nova.storage.rbd_utils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] rbd image d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.654 254904 DEBUG oslo_concurrency.processutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7/disk.config d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.825 254904 DEBUG oslo_concurrency.processutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7/disk.config d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.827 254904 INFO nova.virt.libvirt.driver [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Deleting local config drive /var/lib/nova/instances/d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7/disk.config because it was imported into RBD.#033[00m
Dec  2 06:22:09 np0005542249 systemd-udevd[275931]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:22:09 np0005542249 NetworkManager[48987]: <info>  [1764674529.9075] manager: (tap61896fe1-ac): new Tun device (/org/freedesktop/NetworkManager/Devices/69)
Dec  2 06:22:09 np0005542249 kernel: tap61896fe1-ac: entered promiscuous mode
Dec  2 06:22:09 np0005542249 NetworkManager[48987]: <info>  [1764674529.9429] device (tap61896fe1-ac): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:22:09 np0005542249 NetworkManager[48987]: <info>  [1764674529.9443] device (tap61896fe1-ac): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:22:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:09Z|00109|binding|INFO|Claiming lport 61896fe1-ac59-4781-93dd-2cf18f84208e for this chassis.
Dec  2 06:22:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:09Z|00110|binding|INFO|61896fe1-ac59-4781-93dd-2cf18f84208e: Claiming fa:16:3e:9c:35:f4 10.100.0.9
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.953 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:09.961 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:35:f4 10.100.0.9'], port_security=['fa:16:3e:9c:35:f4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd9cd12f2-b6b7-41d4-a74b-2a89172f2ce7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fff78a31f26746918caf04706b12b741', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ca4a25f5-876c-47dc-9a35-23e0785f3cf3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff54e9e6-62e9-4528-92f5-a3bb97a08852, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=61896fe1-ac59-4781-93dd-2cf18f84208e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:22:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:09.963 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 61896fe1-ac59-4781-93dd-2cf18f84208e in datapath df73b9ab-de8d-40fa-9bf0-aa773bb32d3a bound to our chassis#033[00m
Dec  2 06:22:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:09.967 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network df73b9ab-de8d-40fa-9bf0-aa773bb32d3a#033[00m
Dec  2 06:22:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:09Z|00111|binding|INFO|Setting lport 61896fe1-ac59-4781-93dd-2cf18f84208e ovn-installed in OVS
Dec  2 06:22:09 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:09Z|00112|binding|INFO|Setting lport 61896fe1-ac59-4781-93dd-2cf18f84208e up in Southbound
Dec  2 06:22:09 np0005542249 nova_compute[254900]: 2025-12-02 11:22:09.981 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:09.984 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[10160fba-97c3-4c66-b67a-aeb064027449]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:09.985 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdf73b9ab-d1 in ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 06:22:09 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1228: 321 pgs: 321 active+clean; 260 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 196 op/s
Dec  2 06:22:09 np0005542249 systemd-machined[216222]: New machine qemu-11-instance-0000000b.
Dec  2 06:22:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:09.991 262398 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdf73b9ab-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 06:22:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:09.991 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[ebdb3654-06d7-494d-b4d7-5913ef221ca0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:09.992 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[91f8c2ba-70bb-4808-b05a-575ef4c542cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:10 np0005542249 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:10.015 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[9b01bc06-3a05-4079-a103-648a246f2978]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:10.047 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[6e04beeb-163e-4bc3-917d-dee74f79a5e5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:10.090 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[9b37ae3b-71a8-4e43-9454-e2e04b761e91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:10.098 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[2b65c5d4-b9a3-4597-81f0-54eec742eb30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:10 np0005542249 NetworkManager[48987]: <info>  [1764674530.0999] manager: (tapdf73b9ab-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/70)
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:10.151 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[d249de65-14ce-4806-8c6e-f2d0547dedd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:10.156 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[e7ed71a6-e06e-4674-86e1-e55f3259a2dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:10 np0005542249 NetworkManager[48987]: <info>  [1764674530.1960] device (tapdf73b9ab-d0): carrier: link connected
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:10.202 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[ec958baa-5ead-4380-8e58-14645f12bf97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:10.244 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[2d3e4932-e0d0-43dc-b960-8eae78a7e76c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf73b9ab-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:51:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 472301, 'reachable_time': 40826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276139, 'error': None, 'target': 'ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:10.263 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[04ab6c3f-9762-4cf5-9dc2-4871dffd1058]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe15:5153'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 472301, 'tstamp': 472301}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276151, 'error': None, 'target': 'ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:10.308 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[c84cc8e7-5d11-44b9-b820-f202377d27d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf73b9ab-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:51:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 472301, 'reachable_time': 40826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276168, 'error': None, 'target': 'ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:10.361 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[b3138c41-a10a-48b4-8e4a-75c3b07b1bc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:10 np0005542249 nova_compute[254900]: 2025-12-02 11:22:10.431 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674530.4310422, d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:22:10 np0005542249 nova_compute[254900]: 2025-12-02 11:22:10.432 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] VM Started (Lifecycle Event)#033[00m
Dec  2 06:22:10 np0005542249 nova_compute[254900]: 2025-12-02 11:22:10.463 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:22:10 np0005542249 nova_compute[254900]: 2025-12-02 11:22:10.470 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674530.431441, d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:22:10 np0005542249 nova_compute[254900]: 2025-12-02 11:22:10.471 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] VM Paused (Lifecycle Event)#033[00m
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:10.487 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a8bf49-ebff-460f-b87d-3cef704ab489]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:10.489 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf73b9ab-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:10.490 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:10.491 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf73b9ab-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:22:10 np0005542249 NetworkManager[48987]: <info>  [1764674530.4954] manager: (tapdf73b9ab-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Dec  2 06:22:10 np0005542249 kernel: tapdf73b9ab-d0: entered promiscuous mode
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:10.506 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdf73b9ab-d0, col_values=(('external_ids', {'iface-id': '21f7ba63-7631-4701-921a-a830d0f08e6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:22:10 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:10Z|00113|binding|INFO|Releasing lport 21f7ba63-7631-4701-921a-a830d0f08e6f from this chassis (sb_readonly=0)
Dec  2 06:22:10 np0005542249 nova_compute[254900]: 2025-12-02 11:22:10.493 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:10 np0005542249 nova_compute[254900]: 2025-12-02 11:22:10.516 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:22:10 np0005542249 nova_compute[254900]: 2025-12-02 11:22:10.526 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:10.531 163757 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/df73b9ab-de8d-40fa-9bf0-aa773bb32d3a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/df73b9ab-de8d-40fa-9bf0-aa773bb32d3a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 06:22:10 np0005542249 nova_compute[254900]: 2025-12-02 11:22:10.532 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:10.532 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[5f3753ea-a19a-4fae-814c-91143710e7c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:10.533 163757 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: global
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]:    log         /dev/log local0 debug
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]:    log-tag     haproxy-metadata-proxy-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]:    user        root
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]:    group       root
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]:    maxconn     1024
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]:    pidfile     /var/lib/neutron/external/pids/df73b9ab-de8d-40fa-9bf0-aa773bb32d3a.pid.haproxy
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]:    daemon
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: defaults
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]:    log global
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]:    mode http
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]:    option httplog
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]:    option dontlognull
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]:    option http-server-close
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]:    option forwardfor
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]:    retries                 3
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]:    timeout http-request    30s
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]:    timeout connect         30s
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]:    timeout client          32s
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]:    timeout server          32s
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]:    timeout http-keep-alive 30s
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: listen listener
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]:    bind 169.254.169.254:80
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]:    http-request add-header X-OVN-Network-ID df73b9ab-de8d-40fa-9bf0-aa773bb32d3a
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 06:22:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:10.535 163757 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a', 'env', 'PROCESS_TAG=haproxy-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/df73b9ab-de8d-40fa-9bf0-aa773bb32d3a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 06:22:10 np0005542249 nova_compute[254900]: 2025-12-02 11:22:10.560 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:22:11 np0005542249 podman[276208]: 2025-12-02 11:22:11.004538342 +0000 UTC m=+0.086958926 container create ff0bc7be46280e7f5bb0920d4712e4505064f03c1ea3ae6c3a91d2eac3ac60cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:22:11 np0005542249 systemd[1]: Started libpod-conmon-ff0bc7be46280e7f5bb0920d4712e4505064f03c1ea3ae6c3a91d2eac3ac60cc.scope.
Dec  2 06:22:11 np0005542249 podman[276208]: 2025-12-02 11:22:10.963184745 +0000 UTC m=+0.045605379 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:22:11 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:22:11 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c589a2b259c833d86acb4922904212abb10c895afe4ab61ecffea2f2618e33e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 06:22:11 np0005542249 podman[276208]: 2025-12-02 11:22:11.109701315 +0000 UTC m=+0.192121939 container init ff0bc7be46280e7f5bb0920d4712e4505064f03c1ea3ae6c3a91d2eac3ac60cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  2 06:22:11 np0005542249 podman[276208]: 2025-12-02 11:22:11.117641664 +0000 UTC m=+0.200062278 container start ff0bc7be46280e7f5bb0920d4712e4505064f03c1ea3ae6c3a91d2eac3ac60cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 06:22:11 np0005542249 neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a[276224]: [NOTICE]   (276239) : New worker (276247) forked
Dec  2 06:22:11 np0005542249 neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a[276224]: [NOTICE]   (276239) : Loading success.
Dec  2 06:22:11 np0005542249 podman[276221]: 2025-12-02 11:22:11.187201191 +0000 UTC m=+0.128356303 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.277 254904 DEBUG nova.network.neutron [-] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.295 254904 DEBUG nova.compute.manager [req-734424bf-eef3-42d9-88a9-9f34c1fa2c59 req-1fa255dd-8ab4-4724-8554-d133c31baa05 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Received event network-vif-plugged-41345e4a-b8a8-4ed2-80a8-e69289b0f8fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.296 254904 DEBUG oslo_concurrency.lockutils [req-734424bf-eef3-42d9-88a9-9f34c1fa2c59 req-1fa255dd-8ab4-4724-8554-d133c31baa05 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "415ead6c-ffe0-4426-a145-1cb487cfa30f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.297 254904 DEBUG oslo_concurrency.lockutils [req-734424bf-eef3-42d9-88a9-9f34c1fa2c59 req-1fa255dd-8ab4-4724-8554-d133c31baa05 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "415ead6c-ffe0-4426-a145-1cb487cfa30f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.297 254904 DEBUG oslo_concurrency.lockutils [req-734424bf-eef3-42d9-88a9-9f34c1fa2c59 req-1fa255dd-8ab4-4724-8554-d133c31baa05 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "415ead6c-ffe0-4426-a145-1cb487cfa30f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.297 254904 DEBUG nova.compute.manager [req-734424bf-eef3-42d9-88a9-9f34c1fa2c59 req-1fa255dd-8ab4-4724-8554-d133c31baa05 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] No waiting events found dispatching network-vif-plugged-41345e4a-b8a8-4ed2-80a8-e69289b0f8fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.298 254904 WARNING nova.compute.manager [req-734424bf-eef3-42d9-88a9-9f34c1fa2c59 req-1fa255dd-8ab4-4724-8554-d133c31baa05 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Received unexpected event network-vif-plugged-41345e4a-b8a8-4ed2-80a8-e69289b0f8fb for instance with vm_state active and task_state deleting.#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.298 254904 DEBUG nova.compute.manager [req-734424bf-eef3-42d9-88a9-9f34c1fa2c59 req-1fa255dd-8ab4-4724-8554-d133c31baa05 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Received event network-vif-plugged-61896fe1-ac59-4781-93dd-2cf18f84208e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.298 254904 DEBUG oslo_concurrency.lockutils [req-734424bf-eef3-42d9-88a9-9f34c1fa2c59 req-1fa255dd-8ab4-4724-8554-d133c31baa05 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.299 254904 DEBUG oslo_concurrency.lockutils [req-734424bf-eef3-42d9-88a9-9f34c1fa2c59 req-1fa255dd-8ab4-4724-8554-d133c31baa05 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.299 254904 DEBUG oslo_concurrency.lockutils [req-734424bf-eef3-42d9-88a9-9f34c1fa2c59 req-1fa255dd-8ab4-4724-8554-d133c31baa05 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.299 254904 DEBUG nova.compute.manager [req-734424bf-eef3-42d9-88a9-9f34c1fa2c59 req-1fa255dd-8ab4-4724-8554-d133c31baa05 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Processing event network-vif-plugged-61896fe1-ac59-4781-93dd-2cf18f84208e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.299 254904 DEBUG nova.compute.manager [req-734424bf-eef3-42d9-88a9-9f34c1fa2c59 req-1fa255dd-8ab4-4724-8554-d133c31baa05 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Received event network-vif-plugged-61896fe1-ac59-4781-93dd-2cf18f84208e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.300 254904 DEBUG oslo_concurrency.lockutils [req-734424bf-eef3-42d9-88a9-9f34c1fa2c59 req-1fa255dd-8ab4-4724-8554-d133c31baa05 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.300 254904 DEBUG oslo_concurrency.lockutils [req-734424bf-eef3-42d9-88a9-9f34c1fa2c59 req-1fa255dd-8ab4-4724-8554-d133c31baa05 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.300 254904 DEBUG oslo_concurrency.lockutils [req-734424bf-eef3-42d9-88a9-9f34c1fa2c59 req-1fa255dd-8ab4-4724-8554-d133c31baa05 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.300 254904 DEBUG nova.compute.manager [req-734424bf-eef3-42d9-88a9-9f34c1fa2c59 req-1fa255dd-8ab4-4724-8554-d133c31baa05 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] No waiting events found dispatching network-vif-plugged-61896fe1-ac59-4781-93dd-2cf18f84208e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.301 254904 WARNING nova.compute.manager [req-734424bf-eef3-42d9-88a9-9f34c1fa2c59 req-1fa255dd-8ab4-4724-8554-d133c31baa05 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Received unexpected event network-vif-plugged-61896fe1-ac59-4781-93dd-2cf18f84208e for instance with vm_state building and task_state spawning.#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.302 254904 DEBUG nova.compute.manager [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.302 254904 INFO nova.compute.manager [-] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Took 1.68 seconds to deallocate network for instance.#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.315 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674531.3131473, d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.316 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] VM Resumed (Lifecycle Event)#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.321 254904 DEBUG nova.virt.libvirt.driver [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.326 254904 INFO nova.virt.libvirt.driver [-] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Instance spawned successfully.#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.327 254904 DEBUG nova.virt.libvirt.driver [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.362 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.369 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.373 254904 DEBUG nova.virt.libvirt.driver [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.374 254904 DEBUG nova.virt.libvirt.driver [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.374 254904 DEBUG nova.virt.libvirt.driver [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.375 254904 DEBUG nova.virt.libvirt.driver [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.375 254904 DEBUG nova.virt.libvirt.driver [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.375 254904 DEBUG nova.virt.libvirt.driver [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.380 254904 DEBUG oslo_concurrency.lockutils [None req-bdf51df4-6a72-46ad-826d-a6f87581f283 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.381 254904 DEBUG oslo_concurrency.lockutils [None req-bdf51df4-6a72-46ad-826d-a6f87581f283 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.404 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.439 254904 INFO nova.compute.manager [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Took 6.72 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.440 254904 DEBUG nova.compute.manager [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.505 254904 DEBUG oslo_concurrency.processutils [None req-bdf51df4-6a72-46ad-826d-a6f87581f283 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.542 254904 INFO nova.compute.manager [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Took 7.93 seconds to build instance.#033[00m
Dec  2 06:22:11 np0005542249 nova_compute[254900]: 2025-12-02 11:22:11.565 254904 DEBUG oslo_concurrency.lockutils [None req-76fe591f-34d1-427c-918f-7483550e46de e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:22:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:22:11 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/548976450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:22:11 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1229: 321 pgs: 321 active+clean; 225 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 210 op/s
Dec  2 06:22:12 np0005542249 nova_compute[254900]: 2025-12-02 11:22:12.003 254904 DEBUG oslo_concurrency.processutils [None req-bdf51df4-6a72-46ad-826d-a6f87581f283 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:22:12 np0005542249 nova_compute[254900]: 2025-12-02 11:22:12.011 254904 DEBUG nova.compute.provider_tree [None req-bdf51df4-6a72-46ad-826d-a6f87581f283 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:22:12 np0005542249 nova_compute[254900]: 2025-12-02 11:22:12.036 254904 DEBUG nova.scheduler.client.report [None req-bdf51df4-6a72-46ad-826d-a6f87581f283 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:22:12 np0005542249 nova_compute[254900]: 2025-12-02 11:22:12.085 254904 DEBUG oslo_concurrency.lockutils [None req-bdf51df4-6a72-46ad-826d-a6f87581f283 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:12 np0005542249 nova_compute[254900]: 2025-12-02 11:22:12.124 254904 INFO nova.scheduler.client.report [None req-bdf51df4-6a72-46ad-826d-a6f87581f283 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Deleted allocations for instance 415ead6c-ffe0-4426-a145-1cb487cfa30f#033[00m
Dec  2 06:22:12 np0005542249 nova_compute[254900]: 2025-12-02 11:22:12.240 254904 DEBUG oslo_concurrency.lockutils [None req-bdf51df4-6a72-46ad-826d-a6f87581f283 33395809f6bd4db1bf1ab3a67fdbc5d5 401c4eb4c3ea4ca886484161dcd637b6 - - default default] Lock "415ead6c-ffe0-4426-a145-1cb487cfa30f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.382s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:13 np0005542249 nova_compute[254900]: 2025-12-02 11:22:13.072 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:13 np0005542249 nova_compute[254900]: 2025-12-02 11:22:13.487 254904 DEBUG nova.compute.manager [req-9451b2fc-6471-4d3e-88b9-3684122e4ce9 req-fbb5e99b-ffb2-4348-afbb-9cd2210add40 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Received event network-vif-deleted-41345e4a-b8a8-4ed2-80a8-e69289b0f8fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:22:13 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1230: 321 pgs: 321 active+clean; 195 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 222 op/s
Dec  2 06:22:14 np0005542249 nova_compute[254900]: 2025-12-02 11:22:14.132 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:14 np0005542249 nova_compute[254900]: 2025-12-02 11:22:14.474 254904 DEBUG nova.compute.manager [req-5fb0e35b-bc3d-478b-bccd-034003325ce6 req-b3a97092-ca70-4db3-8baf-b68104624400 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Received event network-changed-61896fe1-ac59-4781-93dd-2cf18f84208e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:22:14 np0005542249 nova_compute[254900]: 2025-12-02 11:22:14.475 254904 DEBUG nova.compute.manager [req-5fb0e35b-bc3d-478b-bccd-034003325ce6 req-b3a97092-ca70-4db3-8baf-b68104624400 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Refreshing instance network info cache due to event network-changed-61896fe1-ac59-4781-93dd-2cf18f84208e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:22:14 np0005542249 nova_compute[254900]: 2025-12-02 11:22:14.476 254904 DEBUG oslo_concurrency.lockutils [req-5fb0e35b-bc3d-478b-bccd-034003325ce6 req-b3a97092-ca70-4db3-8baf-b68104624400 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:22:14 np0005542249 nova_compute[254900]: 2025-12-02 11:22:14.476 254904 DEBUG oslo_concurrency.lockutils [req-5fb0e35b-bc3d-478b-bccd-034003325ce6 req-b3a97092-ca70-4db3-8baf-b68104624400 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:22:14 np0005542249 nova_compute[254900]: 2025-12-02 11:22:14.476 254904 DEBUG nova.network.neutron [req-5fb0e35b-bc3d-478b-bccd-034003325ce6 req-b3a97092-ca70-4db3-8baf-b68104624400 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Refreshing network info cache for port 61896fe1-ac59-4781-93dd-2cf18f84208e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:22:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:22:14 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3560942448' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:22:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:22:14 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3560942448' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:22:15 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:15Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d8:49:e3 10.100.0.9
Dec  2 06:22:15 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:15Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d8:49:e3 10.100.0.9
Dec  2 06:22:15 np0005542249 podman[276285]: 2025-12-02 11:22:15.035825435 +0000 UTC m=+0.096595969 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  2 06:22:15 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1231: 321 pgs: 321 active+clean; 181 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 202 op/s
Dec  2 06:22:16 np0005542249 nova_compute[254900]: 2025-12-02 11:22:16.231 254904 DEBUG nova.network.neutron [req-5fb0e35b-bc3d-478b-bccd-034003325ce6 req-b3a97092-ca70-4db3-8baf-b68104624400 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Updated VIF entry in instance network info cache for port 61896fe1-ac59-4781-93dd-2cf18f84208e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:22:16 np0005542249 nova_compute[254900]: 2025-12-02 11:22:16.234 254904 DEBUG nova.network.neutron [req-5fb0e35b-bc3d-478b-bccd-034003325ce6 req-b3a97092-ca70-4db3-8baf-b68104624400 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Updating instance_info_cache with network_info: [{"id": "61896fe1-ac59-4781-93dd-2cf18f84208e", "address": "fa:16:3e:9c:35:f4", "network": {"id": "df73b9ab-de8d-40fa-9bf0-aa773bb32d3a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-950743948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff78a31f26746918caf04706b12b741", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61896fe1-ac", "ovs_interfaceid": "61896fe1-ac59-4781-93dd-2cf18f84208e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:22:16 np0005542249 nova_compute[254900]: 2025-12-02 11:22:16.262 254904 DEBUG oslo_concurrency.lockutils [req-5fb0e35b-bc3d-478b-bccd-034003325ce6 req-b3a97092-ca70-4db3-8baf-b68104624400 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:22:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:22:17 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1232: 321 pgs: 321 active+clean; 203 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.4 MiB/s wr, 272 op/s
Dec  2 06:22:18 np0005542249 nova_compute[254900]: 2025-12-02 11:22:18.074 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:19 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:19Z|00114|binding|INFO|Releasing lport 21f7ba63-7631-4701-921a-a830d0f08e6f from this chassis (sb_readonly=0)
Dec  2 06:22:19 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:19Z|00115|binding|INFO|Releasing lport 32301a91-103a-44ac-b580-0ae264006e72 from this chassis (sb_readonly=0)
Dec  2 06:22:19 np0005542249 nova_compute[254900]: 2025-12-02 11:22:19.167 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:19 np0005542249 nova_compute[254900]: 2025-12-02 11:22:19.185 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:19.837 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:19.837 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:19.838 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:19 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1233: 321 pgs: 321 active+clean; 213 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 182 op/s
Dec  2 06:22:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:22:21 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1234: 321 pgs: 321 active+clean; 213 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 181 op/s
Dec  2 06:22:23 np0005542249 nova_compute[254900]: 2025-12-02 11:22:23.077 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:23 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:23Z|00116|binding|INFO|Releasing lport 21f7ba63-7631-4701-921a-a830d0f08e6f from this chassis (sb_readonly=0)
Dec  2 06:22:23 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:23Z|00117|binding|INFO|Releasing lport 32301a91-103a-44ac-b580-0ae264006e72 from this chassis (sb_readonly=0)
Dec  2 06:22:23 np0005542249 nova_compute[254900]: 2025-12-02 11:22:23.662 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:23 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1235: 321 pgs: 321 active+clean; 222 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 171 op/s
Dec  2 06:22:24 np0005542249 nova_compute[254900]: 2025-12-02 11:22:24.111 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764674529.1093955, 415ead6c-ffe0-4426-a145-1cb487cfa30f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:22:24 np0005542249 nova_compute[254900]: 2025-12-02 11:22:24.111 254904 INFO nova.compute.manager [-] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] VM Stopped (Lifecycle Event)#033[00m
Dec  2 06:22:24 np0005542249 nova_compute[254900]: 2025-12-02 11:22:24.150 254904 DEBUG nova.compute.manager [None req-175b2dd1-65af-4c9f-b181-185b5711fd4a - - - - - -] [instance: 415ead6c-ffe0-4426-a145-1cb487cfa30f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:22:24 np0005542249 nova_compute[254900]: 2025-12-02 11:22:24.201 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:25 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:25Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9c:35:f4 10.100.0.9
Dec  2 06:22:25 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:25Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9c:35:f4 10.100.0.9
Dec  2 06:22:25 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1236: 321 pgs: 321 active+clean; 222 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 126 op/s
Dec  2 06:22:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:22:26
Dec  2 06:22:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:22:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:22:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', '.mgr', 'vms', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'images', 'volumes']
Dec  2 06:22:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:22:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:22:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:22:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:22:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:22:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:22:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:22:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:22:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:22:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:22:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:22:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:22:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:22:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:22:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:22:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:22:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:22:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:22:27 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1237: 321 pgs: 321 active+clean; 243 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 4.2 MiB/s wr, 166 op/s
Dec  2 06:22:28 np0005542249 nova_compute[254900]: 2025-12-02 11:22:28.080 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:29 np0005542249 nova_compute[254900]: 2025-12-02 11:22:29.250 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:29 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1238: 321 pgs: 321 active+clean; 246 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 348 KiB/s rd, 2.7 MiB/s wr, 75 op/s
Dec  2 06:22:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:22:31 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1239: 321 pgs: 321 active+clean; 246 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 2.2 MiB/s wr, 72 op/s
Dec  2 06:22:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:22:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1154137862' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:22:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:22:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1154137862' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:22:33 np0005542249 podman[276307]: 2025-12-02 11:22:33.002200967 +0000 UTC m=+0.072807514 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  2 06:22:33 np0005542249 nova_compute[254900]: 2025-12-02 11:22:33.083 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:22:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2611535687' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:22:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:22:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2611535687' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:22:33 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1240: 321 pgs: 321 active+clean; 246 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Dec  2 06:22:34 np0005542249 nova_compute[254900]: 2025-12-02 11:22:34.253 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:22:34 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3858095115' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:22:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:22:34 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3858095115' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:22:35 np0005542249 nova_compute[254900]: 2025-12-02 11:22:35.771 254904 DEBUG oslo_concurrency.lockutils [None req-9b2eaadb-8275-4c21-b975-1e638fa0e970 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Acquiring lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:35 np0005542249 nova_compute[254900]: 2025-12-02 11:22:35.771 254904 DEBUG oslo_concurrency.lockutils [None req-9b2eaadb-8275-4c21-b975-1e638fa0e970 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:35 np0005542249 nova_compute[254900]: 2025-12-02 11:22:35.788 254904 DEBUG nova.objects.instance [None req-9b2eaadb-8275-4c21-b975-1e638fa0e970 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lazy-loading 'flavor' on Instance uuid b2e17f69-4b2a-4abd-b738-61cd5813d48e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:22:35 np0005542249 nova_compute[254900]: 2025-12-02 11:22:35.813 254904 INFO nova.virt.libvirt.driver [None req-9b2eaadb-8275-4c21-b975-1e638fa0e970 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Ignoring supplied device name: /dev/vdb#033[00m
Dec  2 06:22:35 np0005542249 nova_compute[254900]: 2025-12-02 11:22:35.827 254904 DEBUG oslo_concurrency.lockutils [None req-9b2eaadb-8275-4c21-b975-1e638fa0e970 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:22:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:22:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:22:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:22:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015188004655857263 of space, bias 1.0, pg target 0.45564013967571787 quantized to 32 (current 32)
Dec  2 06:22:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:22:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003474596275314189 of space, bias 1.0, pg target 0.10423788825942568 quantized to 32 (current 32)
Dec  2 06:22:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:22:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  2 06:22:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:22:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  2 06:22:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:22:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:22:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:22:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:22:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:22:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:22:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:22:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:22:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:22:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:22:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:22:35 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:22:35 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1241: 321 pgs: 321 active+clean; 246 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 299 KiB/s rd, 1.6 MiB/s wr, 69 op/s
Dec  2 06:22:36 np0005542249 nova_compute[254900]: 2025-12-02 11:22:36.031 254904 DEBUG oslo_concurrency.lockutils [None req-9b2eaadb-8275-4c21-b975-1e638fa0e970 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Acquiring lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:36 np0005542249 nova_compute[254900]: 2025-12-02 11:22:36.032 254904 DEBUG oslo_concurrency.lockutils [None req-9b2eaadb-8275-4c21-b975-1e638fa0e970 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:36 np0005542249 nova_compute[254900]: 2025-12-02 11:22:36.033 254904 INFO nova.compute.manager [None req-9b2eaadb-8275-4c21-b975-1e638fa0e970 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Attaching volume f5a8d4a3-e7af-4bfa-bc3d-bf51e8df73c6 to /dev/vdb#033[00m
Dec  2 06:22:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:22:36 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1495491802' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:22:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:22:36 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1495491802' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:22:36 np0005542249 nova_compute[254900]: 2025-12-02 11:22:36.206 254904 DEBUG os_brick.utils [None req-9b2eaadb-8275-4c21-b975-1e638fa0e970 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:22:36 np0005542249 nova_compute[254900]: 2025-12-02 11:22:36.208 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:22:36 np0005542249 nova_compute[254900]: 2025-12-02 11:22:36.219 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:22:36 np0005542249 nova_compute[254900]: 2025-12-02 11:22:36.220 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[1197d7b5-2f83-49d7-9971-c9e11dddfb21]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:36 np0005542249 nova_compute[254900]: 2025-12-02 11:22:36.221 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:22:36 np0005542249 nova_compute[254900]: 2025-12-02 11:22:36.234 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:22:36 np0005542249 nova_compute[254900]: 2025-12-02 11:22:36.235 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[166611ae-677d-4b67-b6ae-897a798fb816]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:36 np0005542249 nova_compute[254900]: 2025-12-02 11:22:36.237 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:22:36 np0005542249 nova_compute[254900]: 2025-12-02 11:22:36.248 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:22:36 np0005542249 nova_compute[254900]: 2025-12-02 11:22:36.248 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[62907f75-3775-484b-b0cb-9952ccf54133]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:36 np0005542249 nova_compute[254900]: 2025-12-02 11:22:36.252 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[cbc94976-1540-4294-89dc-2ec5afb1e071]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:36 np0005542249 nova_compute[254900]: 2025-12-02 11:22:36.253 254904 DEBUG oslo_concurrency.processutils [None req-9b2eaadb-8275-4c21-b975-1e638fa0e970 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:22:36 np0005542249 nova_compute[254900]: 2025-12-02 11:22:36.285 254904 DEBUG oslo_concurrency.processutils [None req-9b2eaadb-8275-4c21-b975-1e638fa0e970 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:22:36 np0005542249 nova_compute[254900]: 2025-12-02 11:22:36.291 254904 DEBUG os_brick.initiator.connectors.lightos [None req-9b2eaadb-8275-4c21-b975-1e638fa0e970 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:22:36 np0005542249 nova_compute[254900]: 2025-12-02 11:22:36.292 254904 DEBUG os_brick.initiator.connectors.lightos [None req-9b2eaadb-8275-4c21-b975-1e638fa0e970 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:22:36 np0005542249 nova_compute[254900]: 2025-12-02 11:22:36.293 254904 DEBUG os_brick.initiator.connectors.lightos [None req-9b2eaadb-8275-4c21-b975-1e638fa0e970 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:22:36 np0005542249 nova_compute[254900]: 2025-12-02 11:22:36.294 254904 DEBUG os_brick.utils [None req-9b2eaadb-8275-4c21-b975-1e638fa0e970 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] <== get_connector_properties: return (86ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:22:36 np0005542249 nova_compute[254900]: 2025-12-02 11:22:36.295 254904 DEBUG nova.virt.block_device [None req-9b2eaadb-8275-4c21-b975-1e638fa0e970 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Updating existing volume attachment record: f874b040-3388-44be-a7a2-ce2c0f7c02a6 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:22:36 np0005542249 nova_compute[254900]: 2025-12-02 11:22:36.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:22:36 np0005542249 nova_compute[254900]: 2025-12-02 11:22:36.383 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 06:22:36 np0005542249 nova_compute[254900]: 2025-12-02 11:22:36.449 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 06:22:36 np0005542249 nova_compute[254900]: 2025-12-02 11:22:36.450 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:22:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:22:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:22:36 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/23691097' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:22:37 np0005542249 nova_compute[254900]: 2025-12-02 11:22:37.037 254904 DEBUG nova.objects.instance [None req-9b2eaadb-8275-4c21-b975-1e638fa0e970 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lazy-loading 'flavor' on Instance uuid b2e17f69-4b2a-4abd-b738-61cd5813d48e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:22:37 np0005542249 nova_compute[254900]: 2025-12-02 11:22:37.063 254904 DEBUG nova.virt.libvirt.driver [None req-9b2eaadb-8275-4c21-b975-1e638fa0e970 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Attempting to attach volume f5a8d4a3-e7af-4bfa-bc3d-bf51e8df73c6 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Dec  2 06:22:37 np0005542249 nova_compute[254900]: 2025-12-02 11:22:37.068 254904 DEBUG nova.virt.libvirt.guest [None req-9b2eaadb-8275-4c21-b975-1e638fa0e970 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] attach device xml: <disk type="network" device="disk">
Dec  2 06:22:37 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:22:37 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-f5a8d4a3-e7af-4bfa-bc3d-bf51e8df73c6">
Dec  2 06:22:37 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:22:37 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:22:37 np0005542249 nova_compute[254900]:  <auth username="openstack">
Dec  2 06:22:37 np0005542249 nova_compute[254900]:    <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:22:37 np0005542249 nova_compute[254900]:  </auth>
Dec  2 06:22:37 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:22:37 np0005542249 nova_compute[254900]:  <serial>f5a8d4a3-e7af-4bfa-bc3d-bf51e8df73c6</serial>
Dec  2 06:22:37 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:22:37 np0005542249 nova_compute[254900]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  2 06:22:37 np0005542249 nova_compute[254900]: 2025-12-02 11:22:37.215 254904 DEBUG nova.virt.libvirt.driver [None req-9b2eaadb-8275-4c21-b975-1e638fa0e970 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:22:37 np0005542249 nova_compute[254900]: 2025-12-02 11:22:37.216 254904 DEBUG nova.virt.libvirt.driver [None req-9b2eaadb-8275-4c21-b975-1e638fa0e970 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:22:37 np0005542249 nova_compute[254900]: 2025-12-02 11:22:37.216 254904 DEBUG nova.virt.libvirt.driver [None req-9b2eaadb-8275-4c21-b975-1e638fa0e970 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:22:37 np0005542249 nova_compute[254900]: 2025-12-02 11:22:37.216 254904 DEBUG nova.virt.libvirt.driver [None req-9b2eaadb-8275-4c21-b975-1e638fa0e970 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] No VIF found with MAC fa:16:3e:d8:49:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:22:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:22:37 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2852990864' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:22:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:22:37 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2852990864' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:22:37 np0005542249 nova_compute[254900]: 2025-12-02 11:22:37.438 254904 DEBUG oslo_concurrency.lockutils [None req-9b2eaadb-8275-4c21-b975-1e638fa0e970 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.405s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:37 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1242: 321 pgs: 321 active+clean; 246 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 1.6 MiB/s wr, 106 op/s
Dec  2 06:22:38 np0005542249 nova_compute[254900]: 2025-12-02 11:22:38.116 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:38 np0005542249 nova_compute[254900]: 2025-12-02 11:22:38.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:22:38 np0005542249 nova_compute[254900]: 2025-12-02 11:22:38.382 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 06:22:39 np0005542249 nova_compute[254900]: 2025-12-02 11:22:39.299 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:39 np0005542249 nova_compute[254900]: 2025-12-02 11:22:39.519 254904 DEBUG nova.compute.manager [req-dad0a66a-5bac-4631-94c1-3c5c8deb775e req-28fb1987-a7ea-4eb6-958e-75b525b0bd0b caf61d23c25d448cb59945abd08cf614 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Received event volume-extended-f5a8d4a3-e7af-4bfa-bc3d-bf51e8df73c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:22:39 np0005542249 nova_compute[254900]: 2025-12-02 11:22:39.545 254904 DEBUG nova.compute.manager [req-dad0a66a-5bac-4631-94c1-3c5c8deb775e req-28fb1987-a7ea-4eb6-958e-75b525b0bd0b caf61d23c25d448cb59945abd08cf614 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Handling volume-extended event for volume f5a8d4a3-e7af-4bfa-bc3d-bf51e8df73c6 extend_volume /usr/lib/python3.9/site-packages/nova/compute/manager.py:10896#033[00m
Dec  2 06:22:39 np0005542249 nova_compute[254900]: 2025-12-02 11:22:39.563 254904 INFO nova.compute.manager [req-dad0a66a-5bac-4631-94c1-3c5c8deb775e req-28fb1987-a7ea-4eb6-958e-75b525b0bd0b caf61d23c25d448cb59945abd08cf614 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Cinder extended volume f5a8d4a3-e7af-4bfa-bc3d-bf51e8df73c6; extending it to detect new size#033[00m
Dec  2 06:22:39 np0005542249 nova_compute[254900]: 2025-12-02 11:22:39.974 254904 DEBUG nova.virt.libvirt.driver [req-dad0a66a-5bac-4631-94c1-3c5c8deb775e req-28fb1987-a7ea-4eb6-958e-75b525b0bd0b caf61d23c25d448cb59945abd08cf614 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Resizing target device vdb to 2147483648 _resize_attached_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2756#033[00m
Dec  2 06:22:40 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1243: 321 pgs: 321 active+clean; 246 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 55 KiB/s wr, 72 op/s
Dec  2 06:22:40 np0005542249 nova_compute[254900]: 2025-12-02 11:22:40.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:22:40 np0005542249 nova_compute[254900]: 2025-12-02 11:22:40.383 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:22:40 np0005542249 nova_compute[254900]: 2025-12-02 11:22:40.717 254904 DEBUG oslo_concurrency.lockutils [None req-b2d8bb9b-dc6a-4148-89c1-115b9c0d720a 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Acquiring lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:40 np0005542249 nova_compute[254900]: 2025-12-02 11:22:40.718 254904 DEBUG oslo_concurrency.lockutils [None req-b2d8bb9b-dc6a-4148-89c1-115b9c0d720a 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:40 np0005542249 nova_compute[254900]: 2025-12-02 11:22:40.737 254904 INFO nova.compute.manager [None req-b2d8bb9b-dc6a-4148-89c1-115b9c0d720a 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Detaching volume f5a8d4a3-e7af-4bfa-bc3d-bf51e8df73c6#033[00m
Dec  2 06:22:40 np0005542249 nova_compute[254900]: 2025-12-02 11:22:40.862 254904 INFO nova.virt.block_device [None req-b2d8bb9b-dc6a-4148-89c1-115b9c0d720a 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Attempting to driver detach volume f5a8d4a3-e7af-4bfa-bc3d-bf51e8df73c6 from mountpoint /dev/vdb#033[00m
Dec  2 06:22:40 np0005542249 nova_compute[254900]: 2025-12-02 11:22:40.869 254904 DEBUG nova.virt.libvirt.driver [None req-b2d8bb9b-dc6a-4148-89c1-115b9c0d720a 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Attempting to detach device vdb from instance b2e17f69-4b2a-4abd-b738-61cd5813d48e from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  2 06:22:40 np0005542249 nova_compute[254900]: 2025-12-02 11:22:40.870 254904 DEBUG nova.virt.libvirt.guest [None req-b2d8bb9b-dc6a-4148-89c1-115b9c0d720a 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] detach device xml: <disk type="network" device="disk">
Dec  2 06:22:40 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:22:40 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-f5a8d4a3-e7af-4bfa-bc3d-bf51e8df73c6">
Dec  2 06:22:40 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:22:40 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:22:40 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:22:40 np0005542249 nova_compute[254900]:  <serial>f5a8d4a3-e7af-4bfa-bc3d-bf51e8df73c6</serial>
Dec  2 06:22:40 np0005542249 nova_compute[254900]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  2 06:22:40 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:22:40 np0005542249 nova_compute[254900]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  2 06:22:40 np0005542249 nova_compute[254900]: 2025-12-02 11:22:40.876 254904 INFO nova.virt.libvirt.driver [None req-b2d8bb9b-dc6a-4148-89c1-115b9c0d720a 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Successfully detached device vdb from instance b2e17f69-4b2a-4abd-b738-61cd5813d48e from the persistent domain config.#033[00m
Dec  2 06:22:40 np0005542249 nova_compute[254900]: 2025-12-02 11:22:40.876 254904 DEBUG nova.virt.libvirt.driver [None req-b2d8bb9b-dc6a-4148-89c1-115b9c0d720a 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance b2e17f69-4b2a-4abd-b738-61cd5813d48e from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec  2 06:22:40 np0005542249 nova_compute[254900]: 2025-12-02 11:22:40.877 254904 DEBUG nova.virt.libvirt.guest [None req-b2d8bb9b-dc6a-4148-89c1-115b9c0d720a 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] detach device xml: <disk type="network" device="disk">
Dec  2 06:22:40 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:22:40 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-f5a8d4a3-e7af-4bfa-bc3d-bf51e8df73c6">
Dec  2 06:22:40 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:22:40 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:22:40 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:22:40 np0005542249 nova_compute[254900]:  <serial>f5a8d4a3-e7af-4bfa-bc3d-bf51e8df73c6</serial>
Dec  2 06:22:40 np0005542249 nova_compute[254900]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  2 06:22:40 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:22:40 np0005542249 nova_compute[254900]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  2 06:22:40 np0005542249 nova_compute[254900]: 2025-12-02 11:22:40.936 254904 DEBUG nova.virt.libvirt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Received event <DeviceRemovedEvent: 1764674560.9365048, b2e17f69-4b2a-4abd-b738-61cd5813d48e => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec  2 06:22:40 np0005542249 nova_compute[254900]: 2025-12-02 11:22:40.939 254904 DEBUG nova.virt.libvirt.driver [None req-b2d8bb9b-dc6a-4148-89c1-115b9c0d720a 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance b2e17f69-4b2a-4abd-b738-61cd5813d48e _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec  2 06:22:40 np0005542249 nova_compute[254900]: 2025-12-02 11:22:40.940 254904 INFO nova.virt.libvirt.driver [None req-b2d8bb9b-dc6a-4148-89c1-115b9c0d720a 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Successfully detached device vdb from instance b2e17f69-4b2a-4abd-b738-61cd5813d48e from the live domain config.#033[00m
Dec  2 06:22:41 np0005542249 nova_compute[254900]: 2025-12-02 11:22:41.231 254904 DEBUG nova.objects.instance [None req-b2d8bb9b-dc6a-4148-89c1-115b9c0d720a 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lazy-loading 'flavor' on Instance uuid b2e17f69-4b2a-4abd-b738-61cd5813d48e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:22:41 np0005542249 nova_compute[254900]: 2025-12-02 11:22:41.275 254904 DEBUG oslo_concurrency.lockutils [None req-b2d8bb9b-dc6a-4148-89c1-115b9c0d720a 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:41 np0005542249 nova_compute[254900]: 2025-12-02 11:22:41.377 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:22:41 np0005542249 nova_compute[254900]: 2025-12-02 11:22:41.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:22:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:22:42 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1244: 321 pgs: 321 active+clean; 246 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 21 KiB/s wr, 79 op/s
Dec  2 06:22:42 np0005542249 podman[276356]: 2025-12-02 11:22:42.034083819 +0000 UTC m=+0.114787365 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.094 254904 DEBUG oslo_concurrency.lockutils [None req-af34c65f-ea48-45e7-b8e8-ee8752ee9102 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Acquiring lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.095 254904 DEBUG oslo_concurrency.lockutils [None req-af34c65f-ea48-45e7-b8e8-ee8752ee9102 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.095 254904 DEBUG oslo_concurrency.lockutils [None req-af34c65f-ea48-45e7-b8e8-ee8752ee9102 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Acquiring lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.096 254904 DEBUG oslo_concurrency.lockutils [None req-af34c65f-ea48-45e7-b8e8-ee8752ee9102 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.096 254904 DEBUG oslo_concurrency.lockutils [None req-af34c65f-ea48-45e7-b8e8-ee8752ee9102 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.097 254904 INFO nova.compute.manager [None req-af34c65f-ea48-45e7-b8e8-ee8752ee9102 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Terminating instance#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.098 254904 DEBUG nova.compute.manager [None req-af34c65f-ea48-45e7-b8e8-ee8752ee9102 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:22:42 np0005542249 kernel: tap37bb6586-c8 (unregistering): left promiscuous mode
Dec  2 06:22:42 np0005542249 NetworkManager[48987]: <info>  [1764674562.1615] device (tap37bb6586-c8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.169 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:42 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:42Z|00118|binding|INFO|Releasing lport 37bb6586-c8c5-4107-9775-10964531e11e from this chassis (sb_readonly=0)
Dec  2 06:22:42 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:42Z|00119|binding|INFO|Setting lport 37bb6586-c8c5-4107-9775-10964531e11e down in Southbound
Dec  2 06:22:42 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:42Z|00120|binding|INFO|Removing iface tap37bb6586-c8 ovn-installed in OVS
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.173 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:42 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:42.181 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:49:e3 10.100.0.9'], port_security=['fa:16:3e:d8:49:e3 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b2e17f69-4b2a-4abd-b738-61cd5813d48e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-29518419-c819-4471-8719-e72e25a6d1be', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0fe4a9242c84683be1c02df04c2dbf3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4740505d-999d-42bd-bfbf-9331ea52ddcd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.222'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cab70206-79f0-4509-94be-3054675bd95c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=37bb6586-c8c5-4107-9775-10964531e11e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:22:42 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:42.182 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 37bb6586-c8c5-4107-9775-10964531e11e in datapath 29518419-c819-4471-8719-e72e25a6d1be unbound from our chassis#033[00m
Dec  2 06:22:42 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:42.183 163757 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 29518419-c819-4471-8719-e72e25a6d1be, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 06:22:42 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:42.184 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[936f10e6-545f-48be-9bd0-d65b0d1ff70f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:42 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:42.185 163757 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-29518419-c819-4471-8719-e72e25a6d1be namespace which is not needed anymore#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.195 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:42 np0005542249 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Dec  2 06:22:42 np0005542249 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 15.539s CPU time.
Dec  2 06:22:42 np0005542249 systemd-machined[216222]: Machine qemu-10-instance-0000000a terminated.
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.337 254904 INFO nova.virt.libvirt.driver [-] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Instance destroyed successfully.#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.338 254904 DEBUG nova.objects.instance [None req-af34c65f-ea48-45e7-b8e8-ee8752ee9102 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lazy-loading 'resources' on Instance uuid b2e17f69-4b2a-4abd-b738-61cd5813d48e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.354 254904 DEBUG nova.virt.libvirt.vif [None req-af34c65f-ea48-45e7-b8e8-ee8752ee9102 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:21:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-1483529601',display_name='tempest-VolumesExtendAttachedTest-instance-1483529601',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-1483529601',id=10,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCnOZUzLN6BwPEo2cXeff79zs4c1KWXCWq8pmV3Yw2NDx+MKi80mVUlDZ0dl8tKqAxFlSUkXWRk36c9C/Kg4Ld+Rv8uR01bm1saW6nRqne/ZVSRaX5WsvDoDbjF8u71n0A==',key_name='tempest-keypair-1334584430',keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:22:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d0fe4a9242c84683be1c02df04c2dbf3',ramdisk_id='',reservation_id='r-gir9z6nx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesExtendAttachedTest-1434628380',owner_user_name='tempest-VolumesExtendAttachedTest-1434628380-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:22:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8dfcddc04ea44d4085721856fb3f3d12',uuid=b2e17f69-4b2a-4abd-b738-61cd5813d48e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "37bb6586-c8c5-4107-9775-10964531e11e", "address": "fa:16:3e:d8:49:e3", "network": {"id": "29518419-c819-4471-8719-e72e25a6d1be", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-51492735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0fe4a9242c84683be1c02df04c2dbf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37bb6586-c8", "ovs_interfaceid": "37bb6586-c8c5-4107-9775-10964531e11e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.354 254904 DEBUG nova.network.os_vif_util [None req-af34c65f-ea48-45e7-b8e8-ee8752ee9102 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Converting VIF {"id": "37bb6586-c8c5-4107-9775-10964531e11e", "address": "fa:16:3e:d8:49:e3", "network": {"id": "29518419-c819-4471-8719-e72e25a6d1be", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-51492735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0fe4a9242c84683be1c02df04c2dbf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37bb6586-c8", "ovs_interfaceid": "37bb6586-c8c5-4107-9775-10964531e11e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.355 254904 DEBUG nova.network.os_vif_util [None req-af34c65f-ea48-45e7-b8e8-ee8752ee9102 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d8:49:e3,bridge_name='br-int',has_traffic_filtering=True,id=37bb6586-c8c5-4107-9775-10964531e11e,network=Network(29518419-c819-4471-8719-e72e25a6d1be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37bb6586-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.355 254904 DEBUG os_vif [None req-af34c65f-ea48-45e7-b8e8-ee8752ee9102 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:49:e3,bridge_name='br-int',has_traffic_filtering=True,id=37bb6586-c8c5-4107-9775-10964531e11e,network=Network(29518419-c819-4471-8719-e72e25a6d1be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37bb6586-c8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.357 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.358 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37bb6586-c8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.360 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.363 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.368 254904 INFO os_vif [None req-af34c65f-ea48-45e7-b8e8-ee8752ee9102 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:49:e3,bridge_name='br-int',has_traffic_filtering=True,id=37bb6586-c8c5-4107-9775-10964531e11e,network=Network(29518419-c819-4471-8719-e72e25a6d1be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37bb6586-c8')#033[00m
Dec  2 06:22:42 np0005542249 neutron-haproxy-ovnmeta-29518419-c819-4471-8719-e72e25a6d1be[275305]: [NOTICE]   (275309) : haproxy version is 2.8.14-c23fe91
Dec  2 06:22:42 np0005542249 neutron-haproxy-ovnmeta-29518419-c819-4471-8719-e72e25a6d1be[275305]: [NOTICE]   (275309) : path to executable is /usr/sbin/haproxy
Dec  2 06:22:42 np0005542249 neutron-haproxy-ovnmeta-29518419-c819-4471-8719-e72e25a6d1be[275305]: [WARNING]  (275309) : Exiting Master process...
Dec  2 06:22:42 np0005542249 neutron-haproxy-ovnmeta-29518419-c819-4471-8719-e72e25a6d1be[275305]: [WARNING]  (275309) : Exiting Master process...
Dec  2 06:22:42 np0005542249 neutron-haproxy-ovnmeta-29518419-c819-4471-8719-e72e25a6d1be[275305]: [ALERT]    (275309) : Current worker (275311) exited with code 143 (Terminated)
Dec  2 06:22:42 np0005542249 neutron-haproxy-ovnmeta-29518419-c819-4471-8719-e72e25a6d1be[275305]: [WARNING]  (275309) : All workers exited. Exiting... (0)
Dec  2 06:22:42 np0005542249 systemd[1]: libpod-8ffff86f6a81c33623b60c66ee322ec0d93ec5aec01ff5110ee233753fca0558.scope: Deactivated successfully.
Dec  2 06:22:42 np0005542249 podman[276405]: 2025-12-02 11:22:42.381500341 +0000 UTC m=+0.059967157 container died 8ffff86f6a81c33623b60c66ee322ec0d93ec5aec01ff5110ee233753fca0558 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29518419-c819-4471-8719-e72e25a6d1be, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.396 254904 DEBUG nova.compute.manager [req-6f809c39-c760-4b9b-aa5f-00659561ccd4 req-d7faf291-b014-4760-8029-d15a976bb43e 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Received event network-vif-unplugged-37bb6586-c8c5-4107-9775-10964531e11e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.397 254904 DEBUG oslo_concurrency.lockutils [req-6f809c39-c760-4b9b-aa5f-00659561ccd4 req-d7faf291-b014-4760-8029-d15a976bb43e 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.397 254904 DEBUG oslo_concurrency.lockutils [req-6f809c39-c760-4b9b-aa5f-00659561ccd4 req-d7faf291-b014-4760-8029-d15a976bb43e 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.397 254904 DEBUG oslo_concurrency.lockutils [req-6f809c39-c760-4b9b-aa5f-00659561ccd4 req-d7faf291-b014-4760-8029-d15a976bb43e 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.398 254904 DEBUG nova.compute.manager [req-6f809c39-c760-4b9b-aa5f-00659561ccd4 req-d7faf291-b014-4760-8029-d15a976bb43e 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] No waiting events found dispatching network-vif-unplugged-37bb6586-c8c5-4107-9775-10964531e11e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.398 254904 DEBUG nova.compute.manager [req-6f809c39-c760-4b9b-aa5f-00659561ccd4 req-d7faf291-b014-4760-8029-d15a976bb43e 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Received event network-vif-unplugged-37bb6586-c8c5-4107-9775-10964531e11e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 06:22:42 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ffff86f6a81c33623b60c66ee322ec0d93ec5aec01ff5110ee233753fca0558-userdata-shm.mount: Deactivated successfully.
Dec  2 06:22:42 np0005542249 systemd[1]: var-lib-containers-storage-overlay-53e412ef9797afc1c6807d089bcae206eaf169095d7a460f9bab4d82117d7987-merged.mount: Deactivated successfully.
Dec  2 06:22:42 np0005542249 podman[276405]: 2025-12-02 11:22:42.429052133 +0000 UTC m=+0.107518959 container cleanup 8ffff86f6a81c33623b60c66ee322ec0d93ec5aec01ff5110ee233753fca0558 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29518419-c819-4471-8719-e72e25a6d1be, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 06:22:42 np0005542249 systemd[1]: libpod-conmon-8ffff86f6a81c33623b60c66ee322ec0d93ec5aec01ff5110ee233753fca0558.scope: Deactivated successfully.
Dec  2 06:22:42 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:42.468 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:23:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:25:d0:a1:75:ed'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.469 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:42 np0005542249 podman[276461]: 2025-12-02 11:22:42.5384067 +0000 UTC m=+0.080553253 container remove 8ffff86f6a81c33623b60c66ee322ec0d93ec5aec01ff5110ee233753fca0558 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29518419-c819-4471-8719-e72e25a6d1be, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  2 06:22:42 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:42.544 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[3e0d9f9f-30d8-4c07-bce0-be500e61061e]: (4, ('Tue Dec  2 11:22:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-29518419-c819-4471-8719-e72e25a6d1be (8ffff86f6a81c33623b60c66ee322ec0d93ec5aec01ff5110ee233753fca0558)\n8ffff86f6a81c33623b60c66ee322ec0d93ec5aec01ff5110ee233753fca0558\nTue Dec  2 11:22:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-29518419-c819-4471-8719-e72e25a6d1be (8ffff86f6a81c33623b60c66ee322ec0d93ec5aec01ff5110ee233753fca0558)\n8ffff86f6a81c33623b60c66ee322ec0d93ec5aec01ff5110ee233753fca0558\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:42 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:42.548 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[b8b9d313-ada8-4454-b9f1-977f133ebd28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:42 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:42.550 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29518419-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.554 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:42 np0005542249 kernel: tap29518419-c0: left promiscuous mode
Dec  2 06:22:42 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:42.561 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[cea8f6e2-6fc3-4fea-bd48-c1eee1d060cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.573 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:42 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:42.580 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[d8e90fed-bb10-4b18-b1fd-50e20c4cf3d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:42 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:42.581 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[58aeab1b-2694-4d32-ae01-89b7751d5dee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:42 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:42.600 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[f970021b-21f1-4e4a-8c4b-af889548da39]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471381, 'reachable_time': 28964, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276477, 'error': None, 'target': 'ovnmeta-29518419-c819-4471-8719-e72e25a6d1be', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:42 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:42.603 164036 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-29518419-c819-4471-8719-e72e25a6d1be deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 06:22:42 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:42.603 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[fbfe5387-aa6d-43ff-8e9d-1d5fb42e9cc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:42 np0005542249 systemd[1]: run-netns-ovnmeta\x2d29518419\x2dc819\x2d4471\x2d8719\x2de72e25a6d1be.mount: Deactivated successfully.
Dec  2 06:22:42 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:42.604 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.763 254904 INFO nova.virt.libvirt.driver [None req-af34c65f-ea48-45e7-b8e8-ee8752ee9102 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Deleting instance files /var/lib/nova/instances/b2e17f69-4b2a-4abd-b738-61cd5813d48e_del#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.764 254904 INFO nova.virt.libvirt.driver [None req-af34c65f-ea48-45e7-b8e8-ee8752ee9102 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Deletion of /var/lib/nova/instances/b2e17f69-4b2a-4abd-b738-61cd5813d48e_del complete#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.809 254904 INFO nova.compute.manager [None req-af34c65f-ea48-45e7-b8e8-ee8752ee9102 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.809 254904 DEBUG oslo.service.loopingcall [None req-af34c65f-ea48-45e7-b8e8-ee8752ee9102 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.809 254904 DEBUG nova.compute.manager [-] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:22:42 np0005542249 nova_compute[254900]: 2025-12-02 11:22:42.810 254904 DEBUG nova.network.neutron [-] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:22:43 np0005542249 nova_compute[254900]: 2025-12-02 11:22:43.118 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:43 np0005542249 nova_compute[254900]: 2025-12-02 11:22:43.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:22:43 np0005542249 nova_compute[254900]: 2025-12-02 11:22:43.410 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:43 np0005542249 nova_compute[254900]: 2025-12-02 11:22:43.410 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:43 np0005542249 nova_compute[254900]: 2025-12-02 11:22:43.411 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:43 np0005542249 nova_compute[254900]: 2025-12-02 11:22:43.411 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:22:43 np0005542249 nova_compute[254900]: 2025-12-02 11:22:43.411 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:22:43 np0005542249 nova_compute[254900]: 2025-12-02 11:22:43.848 254904 DEBUG nova.network.neutron [-] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:22:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:22:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1808652007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:22:43 np0005542249 nova_compute[254900]: 2025-12-02 11:22:43.880 254904 INFO nova.compute.manager [-] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Took 1.07 seconds to deallocate network for instance.#033[00m
Dec  2 06:22:43 np0005542249 nova_compute[254900]: 2025-12-02 11:22:43.892 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:22:43 np0005542249 nova_compute[254900]: 2025-12-02 11:22:43.929 254904 DEBUG oslo_concurrency.lockutils [None req-af34c65f-ea48-45e7-b8e8-ee8752ee9102 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:43 np0005542249 nova_compute[254900]: 2025-12-02 11:22:43.930 254904 DEBUG oslo_concurrency.lockutils [None req-af34c65f-ea48-45e7-b8e8-ee8752ee9102 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:43 np0005542249 nova_compute[254900]: 2025-12-02 11:22:43.944 254904 DEBUG nova.compute.manager [req-09b9bcad-96c6-47be-928b-a993e8e29b9b req-5ce11a5f-5689-48b7-95cc-d9a7472b0bd2 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Received event network-vif-deleted-37bb6586-c8c5-4107-9775-10964531e11e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:22:43 np0005542249 nova_compute[254900]: 2025-12-02 11:22:43.966 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:22:43 np0005542249 nova_compute[254900]: 2025-12-02 11:22:43.967 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:22:44 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1245: 321 pgs: 321 active+clean; 197 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 9.7 KiB/s wr, 88 op/s
Dec  2 06:22:44 np0005542249 nova_compute[254900]: 2025-12-02 11:22:44.024 254904 DEBUG oslo_concurrency.processutils [None req-af34c65f-ea48-45e7-b8e8-ee8752ee9102 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:22:44 np0005542249 nova_compute[254900]: 2025-12-02 11:22:44.195 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:22:44 np0005542249 nova_compute[254900]: 2025-12-02 11:22:44.196 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4415MB free_disk=59.897098541259766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:22:44 np0005542249 nova_compute[254900]: 2025-12-02 11:22:44.197 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:22:44 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2920297454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:22:44 np0005542249 nova_compute[254900]: 2025-12-02 11:22:44.473 254904 DEBUG oslo_concurrency.processutils [None req-af34c65f-ea48-45e7-b8e8-ee8752ee9102 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:22:44 np0005542249 nova_compute[254900]: 2025-12-02 11:22:44.479 254904 DEBUG nova.compute.provider_tree [None req-af34c65f-ea48-45e7-b8e8-ee8752ee9102 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:22:44 np0005542249 nova_compute[254900]: 2025-12-02 11:22:44.524 254904 DEBUG nova.compute.manager [req-2bffdccc-e618-48bd-9a31-d7728d20a02f req-9bd9c4e4-0374-4e17-acf4-f2387d26e3ea 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Received event network-vif-plugged-37bb6586-c8c5-4107-9775-10964531e11e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:22:44 np0005542249 nova_compute[254900]: 2025-12-02 11:22:44.525 254904 DEBUG oslo_concurrency.lockutils [req-2bffdccc-e618-48bd-9a31-d7728d20a02f req-9bd9c4e4-0374-4e17-acf4-f2387d26e3ea 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:44 np0005542249 nova_compute[254900]: 2025-12-02 11:22:44.525 254904 DEBUG oslo_concurrency.lockutils [req-2bffdccc-e618-48bd-9a31-d7728d20a02f req-9bd9c4e4-0374-4e17-acf4-f2387d26e3ea 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:44 np0005542249 nova_compute[254900]: 2025-12-02 11:22:44.526 254904 DEBUG oslo_concurrency.lockutils [req-2bffdccc-e618-48bd-9a31-d7728d20a02f req-9bd9c4e4-0374-4e17-acf4-f2387d26e3ea 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:44 np0005542249 nova_compute[254900]: 2025-12-02 11:22:44.526 254904 DEBUG nova.compute.manager [req-2bffdccc-e618-48bd-9a31-d7728d20a02f req-9bd9c4e4-0374-4e17-acf4-f2387d26e3ea 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] No waiting events found dispatching network-vif-plugged-37bb6586-c8c5-4107-9775-10964531e11e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:22:44 np0005542249 nova_compute[254900]: 2025-12-02 11:22:44.526 254904 WARNING nova.compute.manager [req-2bffdccc-e618-48bd-9a31-d7728d20a02f req-9bd9c4e4-0374-4e17-acf4-f2387d26e3ea 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Received unexpected event network-vif-plugged-37bb6586-c8c5-4107-9775-10964531e11e for instance with vm_state deleted and task_state None.#033[00m
Dec  2 06:22:44 np0005542249 nova_compute[254900]: 2025-12-02 11:22:44.547 254904 DEBUG nova.scheduler.client.report [None req-af34c65f-ea48-45e7-b8e8-ee8752ee9102 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:22:44 np0005542249 nova_compute[254900]: 2025-12-02 11:22:44.569 254904 DEBUG oslo_concurrency.lockutils [None req-af34c65f-ea48-45e7-b8e8-ee8752ee9102 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:44 np0005542249 nova_compute[254900]: 2025-12-02 11:22:44.572 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.375s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:44 np0005542249 nova_compute[254900]: 2025-12-02 11:22:44.603 254904 INFO nova.scheduler.client.report [None req-af34c65f-ea48-45e7-b8e8-ee8752ee9102 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Deleted allocations for instance b2e17f69-4b2a-4abd-b738-61cd5813d48e#033[00m
Dec  2 06:22:44 np0005542249 nova_compute[254900]: 2025-12-02 11:22:44.643 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Instance d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 06:22:44 np0005542249 nova_compute[254900]: 2025-12-02 11:22:44.644 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:22:44 np0005542249 nova_compute[254900]: 2025-12-02 11:22:44.644 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:22:44 np0005542249 nova_compute[254900]: 2025-12-02 11:22:44.665 254904 DEBUG oslo_concurrency.lockutils [None req-af34c65f-ea48-45e7-b8e8-ee8752ee9102 8dfcddc04ea44d4085721856fb3f3d12 d0fe4a9242c84683be1c02df04c2dbf3 - - default default] Lock "b2e17f69-4b2a-4abd-b738-61cd5813d48e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:44 np0005542249 nova_compute[254900]: 2025-12-02 11:22:44.692 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:22:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:22:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1822103907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:22:45 np0005542249 nova_compute[254900]: 2025-12-02 11:22:45.172 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:22:45 np0005542249 nova_compute[254900]: 2025-12-02 11:22:45.180 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:22:45 np0005542249 nova_compute[254900]: 2025-12-02 11:22:45.203 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:22:45 np0005542249 nova_compute[254900]: 2025-12-02 11:22:45.229 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 06:22:45 np0005542249 nova_compute[254900]: 2025-12-02 11:22:45.229 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:45 np0005542249 podman[276546]: 2025-12-02 11:22:45.980400654 +0000 UTC m=+0.063104051 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 06:22:46 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1246: 321 pgs: 321 active+clean; 197 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 7.8 KiB/s wr, 82 op/s
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.229 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.335 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.520 254904 DEBUG oslo_concurrency.lockutils [None req-bd7df543-f483-4987-84ba-eebf9cf0a81f e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Acquiring lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.520 254904 DEBUG oslo_concurrency.lockutils [None req-bd7df543-f483-4987-84ba-eebf9cf0a81f e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.541 254904 DEBUG nova.objects.instance [None req-bd7df543-f483-4987-84ba-eebf9cf0a81f e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lazy-loading 'flavor' on Instance uuid d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.564 254904 INFO nova.virt.libvirt.driver [None req-bd7df543-f483-4987-84ba-eebf9cf0a81f e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Ignoring supplied device name: /dev/vdb#033[00m
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.582 254904 DEBUG oslo_concurrency.lockutils [None req-bd7df543-f483-4987-84ba-eebf9cf0a81f e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.760 254904 DEBUG oslo_concurrency.lockutils [None req-bd7df543-f483-4987-84ba-eebf9cf0a81f e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Acquiring lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.760 254904 DEBUG oslo_concurrency.lockutils [None req-bd7df543-f483-4987-84ba-eebf9cf0a81f e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.761 254904 INFO nova.compute.manager [None req-bd7df543-f483-4987-84ba-eebf9cf0a81f e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Attaching volume c0d47652-43fd-47b9-a044-855096e59bcf to /dev/vdb#033[00m
Dec  2 06:22:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.902 254904 DEBUG os_brick.utils [None req-bd7df543-f483-4987-84ba-eebf9cf0a81f e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.904 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.916 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.917 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[95eee068-ac56-4880-afaf-070250e64fbb]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.918 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.927 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.927 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[d605304e-0480-4f36-8958-87b8018c6ac6]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.928 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.936 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.937 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[5848893a-af0c-462d-9f2d-e01ad15dfecc]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.937 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[e61bd6f0-001c-4e3d-8c87-aa64d1e03866]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.938 254904 DEBUG oslo_concurrency.processutils [None req-bd7df543-f483-4987-84ba-eebf9cf0a81f e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.955 254904 DEBUG oslo_concurrency.processutils [None req-bd7df543-f483-4987-84ba-eebf9cf0a81f e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] CMD "nvme version" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.957 254904 DEBUG os_brick.initiator.connectors.lightos [None req-bd7df543-f483-4987-84ba-eebf9cf0a81f e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.958 254904 DEBUG os_brick.initiator.connectors.lightos [None req-bd7df543-f483-4987-84ba-eebf9cf0a81f e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.958 254904 DEBUG os_brick.initiator.connectors.lightos [None req-bd7df543-f483-4987-84ba-eebf9cf0a81f e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.958 254904 DEBUG os_brick.utils [None req-bd7df543-f483-4987-84ba-eebf9cf0a81f e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] <== get_connector_properties: return (55ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:22:46 np0005542249 nova_compute[254900]: 2025-12-02 11:22:46.959 254904 DEBUG nova.virt.block_device [None req-bd7df543-f483-4987-84ba-eebf9cf0a81f e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Updating existing volume attachment record: 9aa9ce61-71b6-4035-8d68-4e19898bf12d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:22:47 np0005542249 nova_compute[254900]: 2025-12-02 11:22:47.382 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:22:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3215861195' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:22:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:22:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3215861195' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:22:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:22:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2931084657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:22:47 np0005542249 nova_compute[254900]: 2025-12-02 11:22:47.664 254904 DEBUG nova.objects.instance [None req-bd7df543-f483-4987-84ba-eebf9cf0a81f e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lazy-loading 'flavor' on Instance uuid d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:22:47 np0005542249 nova_compute[254900]: 2025-12-02 11:22:47.685 254904 DEBUG nova.virt.libvirt.driver [None req-bd7df543-f483-4987-84ba-eebf9cf0a81f e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Attempting to attach volume c0d47652-43fd-47b9-a044-855096e59bcf with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Dec  2 06:22:47 np0005542249 nova_compute[254900]: 2025-12-02 11:22:47.687 254904 DEBUG nova.virt.libvirt.guest [None req-bd7df543-f483-4987-84ba-eebf9cf0a81f e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] attach device xml: <disk type="network" device="disk">
Dec  2 06:22:47 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:22:47 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-c0d47652-43fd-47b9-a044-855096e59bcf">
Dec  2 06:22:47 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:22:47 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:22:47 np0005542249 nova_compute[254900]:  <auth username="openstack">
Dec  2 06:22:47 np0005542249 nova_compute[254900]:    <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:22:47 np0005542249 nova_compute[254900]:  </auth>
Dec  2 06:22:47 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:22:47 np0005542249 nova_compute[254900]:  <serial>c0d47652-43fd-47b9-a044-855096e59bcf</serial>
Dec  2 06:22:47 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:22:47 np0005542249 nova_compute[254900]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  2 06:22:47 np0005542249 nova_compute[254900]: 2025-12-02 11:22:47.796 254904 DEBUG nova.virt.libvirt.driver [None req-bd7df543-f483-4987-84ba-eebf9cf0a81f e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:22:47 np0005542249 nova_compute[254900]: 2025-12-02 11:22:47.796 254904 DEBUG nova.virt.libvirt.driver [None req-bd7df543-f483-4987-84ba-eebf9cf0a81f e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:22:47 np0005542249 nova_compute[254900]: 2025-12-02 11:22:47.796 254904 DEBUG nova.virt.libvirt.driver [None req-bd7df543-f483-4987-84ba-eebf9cf0a81f e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:22:47 np0005542249 nova_compute[254900]: 2025-12-02 11:22:47.797 254904 DEBUG nova.virt.libvirt.driver [None req-bd7df543-f483-4987-84ba-eebf9cf0a81f e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] No VIF found with MAC fa:16:3e:9c:35:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:22:48 np0005542249 nova_compute[254900]: 2025-12-02 11:22:48.001 254904 DEBUG oslo_concurrency.lockutils [None req-bd7df543-f483-4987-84ba-eebf9cf0a81f e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:48 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1247: 321 pgs: 321 active+clean; 167 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 9.9 KiB/s wr, 98 op/s
Dec  2 06:22:48 np0005542249 nova_compute[254900]: 2025-12-02 11:22:48.119 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:50 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1248: 321 pgs: 321 active+clean; 167 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 8.0 KiB/s wr, 75 op/s
Dec  2 06:22:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:22:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2627731984' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:22:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:22:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2627731984' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:22:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Dec  2 06:22:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Dec  2 06:22:50 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Dec  2 06:22:51 np0005542249 nova_compute[254900]: 2025-12-02 11:22:51.318 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:22:52 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1250: 321 pgs: 321 active+clean; 167 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 5.0 KiB/s wr, 58 op/s
Dec  2 06:22:52 np0005542249 nova_compute[254900]: 2025-12-02 11:22:52.385 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Dec  2 06:22:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Dec  2 06:22:52 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Dec  2 06:22:52 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:52Z|00121|binding|INFO|Releasing lport 21f7ba63-7631-4701-921a-a830d0f08e6f from this chassis (sb_readonly=0)
Dec  2 06:22:52 np0005542249 nova_compute[254900]: 2025-12-02 11:22:52.595 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:52 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:52.605 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:22:53 np0005542249 nova_compute[254900]: 2025-12-02 11:22:53.122 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Dec  2 06:22:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Dec  2 06:22:53 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Dec  2 06:22:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:22:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3274597713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:22:54 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1253: 321 pgs: 321 active+clean; 167 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 6.2 KiB/s wr, 86 op/s
Dec  2 06:22:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Dec  2 06:22:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Dec  2 06:22:54 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Dec  2 06:22:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Dec  2 06:22:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Dec  2 06:22:55 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Dec  2 06:22:55 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:55Z|00122|binding|INFO|Releasing lport 21f7ba63-7631-4701-921a-a830d0f08e6f from this chassis (sb_readonly=0)
Dec  2 06:22:55 np0005542249 nova_compute[254900]: 2025-12-02 11:22:55.888 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:56 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1256: 321 pgs: 321 active+clean; 167 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 4.0 KiB/s wr, 82 op/s
Dec  2 06:22:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:22:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:22:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:22:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:22:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:22:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:22:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Dec  2 06:22:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Dec  2 06:22:56 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Dec  2 06:22:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:22:57 np0005542249 nova_compute[254900]: 2025-12-02 11:22:57.335 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764674562.3344715, b2e17f69-4b2a-4abd-b738-61cd5813d48e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:22:57 np0005542249 nova_compute[254900]: 2025-12-02 11:22:57.336 254904 INFO nova.compute.manager [-] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] VM Stopped (Lifecycle Event)#033[00m
Dec  2 06:22:57 np0005542249 nova_compute[254900]: 2025-12-02 11:22:57.357 254904 DEBUG nova.compute.manager [None req-66659d03-731c-40b7-806c-63be0150611f - - - - - -] [instance: b2e17f69-4b2a-4abd-b738-61cd5813d48e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:22:57 np0005542249 nova_compute[254900]: 2025-12-02 11:22:57.436 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:22:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/483751654' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:22:57 np0005542249 nova_compute[254900]: 2025-12-02 11:22:57.836 254904 DEBUG oslo_concurrency.lockutils [None req-f4ed500e-91dd-4ce1-bc37-12bc09fe64ba e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Acquiring lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:57 np0005542249 nova_compute[254900]: 2025-12-02 11:22:57.837 254904 DEBUG oslo_concurrency.lockutils [None req-f4ed500e-91dd-4ce1-bc37-12bc09fe64ba e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:57 np0005542249 nova_compute[254900]: 2025-12-02 11:22:57.849 254904 INFO nova.compute.manager [None req-f4ed500e-91dd-4ce1-bc37-12bc09fe64ba e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Detaching volume c0d47652-43fd-47b9-a044-855096e59bcf#033[00m
Dec  2 06:22:57 np0005542249 nova_compute[254900]: 2025-12-02 11:22:57.988 254904 DEBUG oslo_concurrency.lockutils [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Acquiring lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:58 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1258: 321 pgs: 321 active+clean; 167 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 9.8 KiB/s wr, 89 op/s
Dec  2 06:22:58 np0005542249 nova_compute[254900]: 2025-12-02 11:22:58.015 254904 INFO nova.virt.block_device [None req-f4ed500e-91dd-4ce1-bc37-12bc09fe64ba e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Attempting to driver detach volume c0d47652-43fd-47b9-a044-855096e59bcf from mountpoint /dev/vdb#033[00m
Dec  2 06:22:58 np0005542249 nova_compute[254900]: 2025-12-02 11:22:58.030 254904 DEBUG nova.virt.libvirt.driver [None req-f4ed500e-91dd-4ce1-bc37-12bc09fe64ba e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Attempting to detach device vdb from instance d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  2 06:22:58 np0005542249 nova_compute[254900]: 2025-12-02 11:22:58.031 254904 DEBUG nova.virt.libvirt.guest [None req-f4ed500e-91dd-4ce1-bc37-12bc09fe64ba e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] detach device xml: <disk type="network" device="disk">
Dec  2 06:22:58 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:22:58 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-c0d47652-43fd-47b9-a044-855096e59bcf">
Dec  2 06:22:58 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:22:58 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:22:58 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:22:58 np0005542249 nova_compute[254900]:  <serial>c0d47652-43fd-47b9-a044-855096e59bcf</serial>
Dec  2 06:22:58 np0005542249 nova_compute[254900]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  2 06:22:58 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:22:58 np0005542249 nova_compute[254900]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  2 06:22:58 np0005542249 nova_compute[254900]: 2025-12-02 11:22:58.042 254904 INFO nova.virt.libvirt.driver [None req-f4ed500e-91dd-4ce1-bc37-12bc09fe64ba e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Successfully detached device vdb from instance d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7 from the persistent domain config.#033[00m
Dec  2 06:22:58 np0005542249 nova_compute[254900]: 2025-12-02 11:22:58.043 254904 DEBUG nova.virt.libvirt.driver [None req-f4ed500e-91dd-4ce1-bc37-12bc09fe64ba e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec  2 06:22:58 np0005542249 nova_compute[254900]: 2025-12-02 11:22:58.043 254904 DEBUG nova.virt.libvirt.guest [None req-f4ed500e-91dd-4ce1-bc37-12bc09fe64ba e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] detach device xml: <disk type="network" device="disk">
Dec  2 06:22:58 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:22:58 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-c0d47652-43fd-47b9-a044-855096e59bcf">
Dec  2 06:22:58 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:22:58 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:22:58 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:22:58 np0005542249 nova_compute[254900]:  <serial>c0d47652-43fd-47b9-a044-855096e59bcf</serial>
Dec  2 06:22:58 np0005542249 nova_compute[254900]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  2 06:22:58 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:22:58 np0005542249 nova_compute[254900]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  2 06:22:58 np0005542249 nova_compute[254900]: 2025-12-02 11:22:58.124 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:58 np0005542249 nova_compute[254900]: 2025-12-02 11:22:58.181 254904 DEBUG nova.virt.libvirt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Received event <DeviceRemovedEvent: 1764674578.1807218, d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec  2 06:22:58 np0005542249 nova_compute[254900]: 2025-12-02 11:22:58.187 254904 DEBUG nova.virt.libvirt.driver [None req-f4ed500e-91dd-4ce1-bc37-12bc09fe64ba e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec  2 06:22:58 np0005542249 nova_compute[254900]: 2025-12-02 11:22:58.190 254904 INFO nova.virt.libvirt.driver [None req-f4ed500e-91dd-4ce1-bc37-12bc09fe64ba e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Successfully detached device vdb from instance d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7 from the live domain config.#033[00m
Dec  2 06:22:58 np0005542249 nova_compute[254900]: 2025-12-02 11:22:58.506 254904 DEBUG nova.objects.instance [None req-f4ed500e-91dd-4ce1-bc37-12bc09fe64ba e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lazy-loading 'flavor' on Instance uuid d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:22:58 np0005542249 nova_compute[254900]: 2025-12-02 11:22:58.550 254904 DEBUG oslo_concurrency.lockutils [None req-f4ed500e-91dd-4ce1-bc37-12bc09fe64ba e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:58 np0005542249 nova_compute[254900]: 2025-12-02 11:22:58.551 254904 DEBUG oslo_concurrency.lockutils [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:58 np0005542249 nova_compute[254900]: 2025-12-02 11:22:58.551 254904 DEBUG oslo_concurrency.lockutils [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Acquiring lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:58 np0005542249 nova_compute[254900]: 2025-12-02 11:22:58.552 254904 DEBUG oslo_concurrency.lockutils [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:58 np0005542249 nova_compute[254900]: 2025-12-02 11:22:58.552 254904 DEBUG oslo_concurrency.lockutils [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:58 np0005542249 nova_compute[254900]: 2025-12-02 11:22:58.553 254904 INFO nova.compute.manager [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Terminating instance#033[00m
Dec  2 06:22:58 np0005542249 nova_compute[254900]: 2025-12-02 11:22:58.555 254904 DEBUG nova.compute.manager [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:22:58 np0005542249 kernel: tap61896fe1-ac (unregistering): left promiscuous mode
Dec  2 06:22:58 np0005542249 NetworkManager[48987]: <info>  [1764674578.9107] device (tap61896fe1-ac): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:22:58 np0005542249 nova_compute[254900]: 2025-12-02 11:22:58.928 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:58 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:58Z|00123|binding|INFO|Releasing lport 61896fe1-ac59-4781-93dd-2cf18f84208e from this chassis (sb_readonly=0)
Dec  2 06:22:58 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:58Z|00124|binding|INFO|Setting lport 61896fe1-ac59-4781-93dd-2cf18f84208e down in Southbound
Dec  2 06:22:58 np0005542249 ovn_controller[153849]: 2025-12-02T11:22:58Z|00125|binding|INFO|Removing iface tap61896fe1-ac ovn-installed in OVS
Dec  2 06:22:58 np0005542249 nova_compute[254900]: 2025-12-02 11:22:58.932 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:58.948 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:35:f4 10.100.0.9'], port_security=['fa:16:3e:9c:35:f4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd9cd12f2-b6b7-41d4-a74b-2a89172f2ce7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fff78a31f26746918caf04706b12b741', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ca4a25f5-876c-47dc-9a35-23e0785f3cf3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.232'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff54e9e6-62e9-4528-92f5-a3bb97a08852, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=61896fe1-ac59-4781-93dd-2cf18f84208e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:22:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:58.951 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 61896fe1-ac59-4781-93dd-2cf18f84208e in datapath df73b9ab-de8d-40fa-9bf0-aa773bb32d3a unbound from our chassis#033[00m
Dec  2 06:22:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:58.954 163757 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network df73b9ab-de8d-40fa-9bf0-aa773bb32d3a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 06:22:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:58.957 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[ae93314b-e0fa-43e6-8bec-edc6c917dd71]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:58.958 163757 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a namespace which is not needed anymore#033[00m
Dec  2 06:22:58 np0005542249 nova_compute[254900]: 2025-12-02 11:22:58.973 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:58 np0005542249 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Dec  2 06:22:58 np0005542249 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 15.174s CPU time.
Dec  2 06:22:58 np0005542249 systemd-machined[216222]: Machine qemu-11-instance-0000000b terminated.
Dec  2 06:22:59 np0005542249 neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a[276224]: [NOTICE]   (276239) : haproxy version is 2.8.14-c23fe91
Dec  2 06:22:59 np0005542249 neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a[276224]: [NOTICE]   (276239) : path to executable is /usr/sbin/haproxy
Dec  2 06:22:59 np0005542249 neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a[276224]: [WARNING]  (276239) : Exiting Master process...
Dec  2 06:22:59 np0005542249 neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a[276224]: [WARNING]  (276239) : Exiting Master process...
Dec  2 06:22:59 np0005542249 neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a[276224]: [ALERT]    (276239) : Current worker (276247) exited with code 143 (Terminated)
Dec  2 06:22:59 np0005542249 neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a[276224]: [WARNING]  (276239) : All workers exited. Exiting... (0)
Dec  2 06:22:59 np0005542249 systemd[1]: libpod-ff0bc7be46280e7f5bb0920d4712e4505064f03c1ea3ae6c3a91d2eac3ac60cc.scope: Deactivated successfully.
Dec  2 06:22:59 np0005542249 podman[276620]: 2025-12-02 11:22:59.122563617 +0000 UTC m=+0.050518822 container died ff0bc7be46280e7f5bb0920d4712e4505064f03c1ea3ae6c3a91d2eac3ac60cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:22:59 np0005542249 nova_compute[254900]: 2025-12-02 11:22:59.201 254904 INFO nova.virt.libvirt.driver [-] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Instance destroyed successfully.#033[00m
Dec  2 06:22:59 np0005542249 nova_compute[254900]: 2025-12-02 11:22:59.202 254904 DEBUG nova.objects.instance [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lazy-loading 'resources' on Instance uuid d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:22:59 np0005542249 nova_compute[254900]: 2025-12-02 11:22:59.219 254904 DEBUG nova.compute.manager [req-18411d3a-17b4-46ab-bec3-6d058434edf7 req-5f268088-d172-406c-93fc-1407e3b96902 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Received event network-vif-unplugged-61896fe1-ac59-4781-93dd-2cf18f84208e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:22:59 np0005542249 nova_compute[254900]: 2025-12-02 11:22:59.219 254904 DEBUG oslo_concurrency.lockutils [req-18411d3a-17b4-46ab-bec3-6d058434edf7 req-5f268088-d172-406c-93fc-1407e3b96902 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:22:59 np0005542249 nova_compute[254900]: 2025-12-02 11:22:59.220 254904 DEBUG oslo_concurrency.lockutils [req-18411d3a-17b4-46ab-bec3-6d058434edf7 req-5f268088-d172-406c-93fc-1407e3b96902 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:22:59 np0005542249 nova_compute[254900]: 2025-12-02 11:22:59.220 254904 DEBUG oslo_concurrency.lockutils [req-18411d3a-17b4-46ab-bec3-6d058434edf7 req-5f268088-d172-406c-93fc-1407e3b96902 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:22:59 np0005542249 nova_compute[254900]: 2025-12-02 11:22:59.220 254904 DEBUG nova.compute.manager [req-18411d3a-17b4-46ab-bec3-6d058434edf7 req-5f268088-d172-406c-93fc-1407e3b96902 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] No waiting events found dispatching network-vif-unplugged-61896fe1-ac59-4781-93dd-2cf18f84208e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:22:59 np0005542249 nova_compute[254900]: 2025-12-02 11:22:59.220 254904 DEBUG nova.compute.manager [req-18411d3a-17b4-46ab-bec3-6d058434edf7 req-5f268088-d172-406c-93fc-1407e3b96902 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Received event network-vif-unplugged-61896fe1-ac59-4781-93dd-2cf18f84208e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 06:22:59 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ff0bc7be46280e7f5bb0920d4712e4505064f03c1ea3ae6c3a91d2eac3ac60cc-userdata-shm.mount: Deactivated successfully.
Dec  2 06:22:59 np0005542249 nova_compute[254900]: 2025-12-02 11:22:59.237 254904 DEBUG nova.virt.libvirt.vif [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1393004341',display_name='tempest-VolumesSnapshotTestJSON-instance-1393004341',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1393004341',id=11,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNw3JnJx+HMOTZ+AvQ489m0NYUlKvWSOqBmE/ujnmHqXRtIDwzDnPElm55wip1dZOPbB+3JyWL1W2JL1GKvQzzrvd/rxRQ9vkYSZZJhGCCZAICgsfaxPQK5nW0pvFEebVA==',key_name='tempest-keypair-926102481',keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:22:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fff78a31f26746918caf04706b12b741',ramdisk_id='',reservation_id='r-kh3z2ki5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesSnapshotTestJSON-1610940554',owner_user_name='tempest-VolumesSnapshotTestJSON-1610940554-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:22:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e0796090ff07418b99397a7f13f11633',uuid=d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "61896fe1-ac59-4781-93dd-2cf18f84208e", "address": "fa:16:3e:9c:35:f4", "network": {"id": "df73b9ab-de8d-40fa-9bf0-aa773bb32d3a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-950743948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff78a31f26746918caf04706b12b741", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61896fe1-ac", "ovs_interfaceid": "61896fe1-ac59-4781-93dd-2cf18f84208e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:22:59 np0005542249 nova_compute[254900]: 2025-12-02 11:22:59.238 254904 DEBUG nova.network.os_vif_util [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Converting VIF {"id": "61896fe1-ac59-4781-93dd-2cf18f84208e", "address": "fa:16:3e:9c:35:f4", "network": {"id": "df73b9ab-de8d-40fa-9bf0-aa773bb32d3a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-950743948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff78a31f26746918caf04706b12b741", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61896fe1-ac", "ovs_interfaceid": "61896fe1-ac59-4781-93dd-2cf18f84208e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:22:59 np0005542249 nova_compute[254900]: 2025-12-02 11:22:59.238 254904 DEBUG nova.network.os_vif_util [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:35:f4,bridge_name='br-int',has_traffic_filtering=True,id=61896fe1-ac59-4781-93dd-2cf18f84208e,network=Network(df73b9ab-de8d-40fa-9bf0-aa773bb32d3a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61896fe1-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:22:59 np0005542249 nova_compute[254900]: 2025-12-02 11:22:59.239 254904 DEBUG os_vif [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:35:f4,bridge_name='br-int',has_traffic_filtering=True,id=61896fe1-ac59-4781-93dd-2cf18f84208e,network=Network(df73b9ab-de8d-40fa-9bf0-aa773bb32d3a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61896fe1-ac') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:22:59 np0005542249 systemd[1]: var-lib-containers-storage-overlay-2c589a2b259c833d86acb4922904212abb10c895afe4ab61ecffea2f2618e33e-merged.mount: Deactivated successfully.
Dec  2 06:22:59 np0005542249 nova_compute[254900]: 2025-12-02 11:22:59.240 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:59 np0005542249 nova_compute[254900]: 2025-12-02 11:22:59.241 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61896fe1-ac, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:22:59 np0005542249 nova_compute[254900]: 2025-12-02 11:22:59.242 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:59 np0005542249 nova_compute[254900]: 2025-12-02 11:22:59.245 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:59 np0005542249 podman[276620]: 2025-12-02 11:22:59.246047794 +0000 UTC m=+0.174002999 container cleanup ff0bc7be46280e7f5bb0920d4712e4505064f03c1ea3ae6c3a91d2eac3ac60cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:22:59 np0005542249 nova_compute[254900]: 2025-12-02 11:22:59.247 254904 INFO os_vif [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:35:f4,bridge_name='br-int',has_traffic_filtering=True,id=61896fe1-ac59-4781-93dd-2cf18f84208e,network=Network(df73b9ab-de8d-40fa-9bf0-aa773bb32d3a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61896fe1-ac')#033[00m
Dec  2 06:22:59 np0005542249 systemd[1]: libpod-conmon-ff0bc7be46280e7f5bb0920d4712e4505064f03c1ea3ae6c3a91d2eac3ac60cc.scope: Deactivated successfully.
Dec  2 06:22:59 np0005542249 podman[276669]: 2025-12-02 11:22:59.399578452 +0000 UTC m=+0.112688347 container remove ff0bc7be46280e7f5bb0920d4712e4505064f03c1ea3ae6c3a91d2eac3ac60cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:22:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:59.408 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[47ffe295-f6ff-401b-b2b9-4352dd4350b8]: (4, ('Tue Dec  2 11:22:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a (ff0bc7be46280e7f5bb0920d4712e4505064f03c1ea3ae6c3a91d2eac3ac60cc)\nff0bc7be46280e7f5bb0920d4712e4505064f03c1ea3ae6c3a91d2eac3ac60cc\nTue Dec  2 11:22:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a (ff0bc7be46280e7f5bb0920d4712e4505064f03c1ea3ae6c3a91d2eac3ac60cc)\nff0bc7be46280e7f5bb0920d4712e4505064f03c1ea3ae6c3a91d2eac3ac60cc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:59.410 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[6e4264b9-2a16-4392-beb4-dbf4eadbd2fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:59.411 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf73b9ab-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:22:59 np0005542249 kernel: tapdf73b9ab-d0: left promiscuous mode
Dec  2 06:22:59 np0005542249 nova_compute[254900]: 2025-12-02 11:22:59.413 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:59 np0005542249 nova_compute[254900]: 2025-12-02 11:22:59.427 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:59 np0005542249 nova_compute[254900]: 2025-12-02 11:22:59.429 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:22:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:59.431 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[1e8faa4b-619c-417d-b394-e603daffd8e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:59.444 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[39fc251b-b275-49cd-a5ce-8e75053f4350]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:59.446 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[0bdd8f77-2681-48f0-8109-bb08bf4090cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:59.462 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[69945cac-7efe-41c3-b90c-b9e3f9614925]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 472290, 'reachable_time': 38219, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276695, 'error': None, 'target': 'ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:22:59 np0005542249 systemd[1]: run-netns-ovnmeta\x2ddf73b9ab\x2dde8d\x2d40fa\x2d9bf0\x2daa773bb32d3a.mount: Deactivated successfully.
Dec  2 06:22:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:59.466 164036 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-df73b9ab-de8d-40fa-9bf0-aa773bb32d3a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 06:22:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:22:59.466 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c71cd2-9656-466e-86f3-5d0359a81438]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:23:00 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1259: 321 pgs: 321 active+clean; 207 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 6.7 MiB/s wr, 91 op/s
Dec  2 06:23:00 np0005542249 nova_compute[254900]: 2025-12-02 11:23:00.644 254904 INFO nova.virt.libvirt.driver [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Deleting instance files /var/lib/nova/instances/d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7_del#033[00m
Dec  2 06:23:00 np0005542249 nova_compute[254900]: 2025-12-02 11:23:00.645 254904 INFO nova.virt.libvirt.driver [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Deletion of /var/lib/nova/instances/d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7_del complete#033[00m
Dec  2 06:23:00 np0005542249 nova_compute[254900]: 2025-12-02 11:23:00.710 254904 INFO nova.compute.manager [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Took 2.15 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:23:00 np0005542249 nova_compute[254900]: 2025-12-02 11:23:00.710 254904 DEBUG oslo.service.loopingcall [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:23:00 np0005542249 nova_compute[254900]: 2025-12-02 11:23:00.711 254904 DEBUG nova.compute.manager [-] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:23:00 np0005542249 nova_compute[254900]: 2025-12-02 11:23:00.711 254904 DEBUG nova.network.neutron [-] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:23:01 np0005542249 nova_compute[254900]: 2025-12-02 11:23:01.362 254904 DEBUG nova.compute.manager [req-4023db31-c996-4f11-99fa-6ba327ba6202 req-e63f05e8-6e69-4c03-aa30-d267971ba9c8 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Received event network-vif-plugged-61896fe1-ac59-4781-93dd-2cf18f84208e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:23:01 np0005542249 nova_compute[254900]: 2025-12-02 11:23:01.363 254904 DEBUG oslo_concurrency.lockutils [req-4023db31-c996-4f11-99fa-6ba327ba6202 req-e63f05e8-6e69-4c03-aa30-d267971ba9c8 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:23:01 np0005542249 nova_compute[254900]: 2025-12-02 11:23:01.364 254904 DEBUG oslo_concurrency.lockutils [req-4023db31-c996-4f11-99fa-6ba327ba6202 req-e63f05e8-6e69-4c03-aa30-d267971ba9c8 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:23:01 np0005542249 nova_compute[254900]: 2025-12-02 11:23:01.364 254904 DEBUG oslo_concurrency.lockutils [req-4023db31-c996-4f11-99fa-6ba327ba6202 req-e63f05e8-6e69-4c03-aa30-d267971ba9c8 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:23:01 np0005542249 nova_compute[254900]: 2025-12-02 11:23:01.364 254904 DEBUG nova.compute.manager [req-4023db31-c996-4f11-99fa-6ba327ba6202 req-e63f05e8-6e69-4c03-aa30-d267971ba9c8 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] No waiting events found dispatching network-vif-plugged-61896fe1-ac59-4781-93dd-2cf18f84208e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:23:01 np0005542249 nova_compute[254900]: 2025-12-02 11:23:01.365 254904 WARNING nova.compute.manager [req-4023db31-c996-4f11-99fa-6ba327ba6202 req-e63f05e8-6e69-4c03-aa30-d267971ba9c8 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Received unexpected event network-vif-plugged-61896fe1-ac59-4781-93dd-2cf18f84208e for instance with vm_state active and task_state deleting.#033[00m
Dec  2 06:23:01 np0005542249 nova_compute[254900]: 2025-12-02 11:23:01.756 254904 DEBUG nova.network.neutron [-] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:23:01 np0005542249 nova_compute[254900]: 2025-12-02 11:23:01.803 254904 INFO nova.compute.manager [-] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Took 1.09 seconds to deallocate network for instance.#033[00m
Dec  2 06:23:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:23:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Dec  2 06:23:01 np0005542249 nova_compute[254900]: 2025-12-02 11:23:01.864 254904 DEBUG nova.compute.manager [req-1b9a917b-a059-48ba-bf17-b921e69e50f5 req-68672e71-298d-4602-b7ba-d77354bddb2c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Received event network-vif-deleted-61896fe1-ac59-4781-93dd-2cf18f84208e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:23:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Dec  2 06:23:01 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Dec  2 06:23:01 np0005542249 nova_compute[254900]: 2025-12-02 11:23:01.967 254904 WARNING nova.volume.cinder [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Attachment 9aa9ce61-71b6-4035-8d68-4e19898bf12d does not exist. Ignoring.: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = 9aa9ce61-71b6-4035-8d68-4e19898bf12d. (HTTP 404) (Request-ID: req-3d7f0c27-6a76-4e51-9ea6-28a507f201ee)#033[00m
Dec  2 06:23:01 np0005542249 nova_compute[254900]: 2025-12-02 11:23:01.968 254904 INFO nova.compute.manager [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Took 0.16 seconds to detach 1 volumes for instance.#033[00m
Dec  2 06:23:02 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1261: 321 pgs: 321 active+clean; 320 MiB data, 467 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 27 MiB/s wr, 125 op/s
Dec  2 06:23:02 np0005542249 nova_compute[254900]: 2025-12-02 11:23:02.011 254904 DEBUG oslo_concurrency.lockutils [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:23:02 np0005542249 nova_compute[254900]: 2025-12-02 11:23:02.012 254904 DEBUG oslo_concurrency.lockutils [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:23:02 np0005542249 nova_compute[254900]: 2025-12-02 11:23:02.071 254904 DEBUG oslo_concurrency.processutils [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:23:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:23:02 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2145705204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:23:02 np0005542249 nova_compute[254900]: 2025-12-02 11:23:02.581 254904 DEBUG oslo_concurrency.processutils [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:23:02 np0005542249 nova_compute[254900]: 2025-12-02 11:23:02.591 254904 DEBUG nova.compute.provider_tree [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:23:02 np0005542249 nova_compute[254900]: 2025-12-02 11:23:02.619 254904 DEBUG nova.scheduler.client.report [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:23:02 np0005542249 nova_compute[254900]: 2025-12-02 11:23:02.645 254904 DEBUG oslo_concurrency.lockutils [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:23:02 np0005542249 nova_compute[254900]: 2025-12-02 11:23:02.684 254904 INFO nova.scheduler.client.report [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Deleted allocations for instance d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7#033[00m
Dec  2 06:23:02 np0005542249 nova_compute[254900]: 2025-12-02 11:23:02.804 254904 DEBUG oslo_concurrency.lockutils [None req-61c5d76e-d103-46bc-88b8-50ee98fdc834 e0796090ff07418b99397a7f13f11633 fff78a31f26746918caf04706b12b741 - - default default] Lock "d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.252s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:23:03 np0005542249 nova_compute[254900]: 2025-12-02 11:23:03.127 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:04 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1262: 321 pgs: 321 active+clean; 485 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 168 KiB/s rd, 48 MiB/s wr, 267 op/s
Dec  2 06:23:04 np0005542249 podman[276720]: 2025-12-02 11:23:04.067844381 +0000 UTC m=+0.134224488 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:23:04 np0005542249 nova_compute[254900]: 2025-12-02 11:23:04.245 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:06 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1263: 321 pgs: 321 active+clean; 660 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 142 KiB/s rd, 60 MiB/s wr, 231 op/s
Dec  2 06:23:06 np0005542249 ceph-osd[88961]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec  2 06:23:06 np0005542249 podman[276913]: 2025-12-02 11:23:06.353358302 +0000 UTC m=+0.161134003 container exec cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:23:06 np0005542249 podman[276913]: 2025-12-02 11:23:06.510688861 +0000 UTC m=+0.318464552 container exec_died cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Dec  2 06:23:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:23:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:23:07 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:23:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:23:07 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:23:08 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1264: 321 pgs: 321 active+clean; 1008 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 189 KiB/s rd, 92 MiB/s wr, 312 op/s
Dec  2 06:23:08 np0005542249 nova_compute[254900]: 2025-12-02 11:23:08.130 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:08 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:23:08 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:23:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Dec  2 06:23:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Dec  2 06:23:08 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Dec  2 06:23:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:23:08 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:23:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:23:08 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:23:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:23:08 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:23:08 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev c495cb08-f03c-4ff3-a357-b291fe493e3b does not exist
Dec  2 06:23:08 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 2a5c9ef1-50aa-44e0-8b32-74cde6600ee8 does not exist
Dec  2 06:23:08 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev f0d2699e-0ad6-4a02-ae89-406be63db39d does not exist
Dec  2 06:23:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:23:08 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:23:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:23:08 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:23:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:23:08 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:23:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:23:08 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1507624297' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:23:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:23:08 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1507624297' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:23:09 np0005542249 nova_compute[254900]: 2025-12-02 11:23:09.247 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:09 np0005542249 podman[277348]: 2025-12-02 11:23:09.464821119 +0000 UTC m=+0.066433052 container create 1b5e76c6a775885794bb16eef23da8ade59489d1a51a441335cece38aaf7681b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  2 06:23:09 np0005542249 systemd[1]: Started libpod-conmon-1b5e76c6a775885794bb16eef23da8ade59489d1a51a441335cece38aaf7681b.scope.
Dec  2 06:23:09 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:23:09 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:23:09 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:23:09 np0005542249 podman[277348]: 2025-12-02 11:23:09.439547088 +0000 UTC m=+0.041159071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:23:09 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:23:09 np0005542249 podman[277348]: 2025-12-02 11:23:09.566333274 +0000 UTC m=+0.167945267 container init 1b5e76c6a775885794bb16eef23da8ade59489d1a51a441335cece38aaf7681b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec  2 06:23:09 np0005542249 podman[277348]: 2025-12-02 11:23:09.580588089 +0000 UTC m=+0.182200022 container start 1b5e76c6a775885794bb16eef23da8ade59489d1a51a441335cece38aaf7681b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_rhodes, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  2 06:23:09 np0005542249 podman[277348]: 2025-12-02 11:23:09.585171702 +0000 UTC m=+0.186783645 container attach 1b5e76c6a775885794bb16eef23da8ade59489d1a51a441335cece38aaf7681b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_rhodes, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Dec  2 06:23:09 np0005542249 nifty_rhodes[277364]: 167 167
Dec  2 06:23:09 np0005542249 systemd[1]: libpod-1b5e76c6a775885794bb16eef23da8ade59489d1a51a441335cece38aaf7681b.scope: Deactivated successfully.
Dec  2 06:23:09 np0005542249 podman[277348]: 2025-12-02 11:23:09.590545647 +0000 UTC m=+0.192157640 container died 1b5e76c6a775885794bb16eef23da8ade59489d1a51a441335cece38aaf7681b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  2 06:23:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Dec  2 06:23:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Dec  2 06:23:09 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Dec  2 06:23:09 np0005542249 systemd[1]: var-lib-containers-storage-overlay-82975a171766a903f228d4f79147edeb63b54686d2ebd28d35d01b83d46319d9-merged.mount: Deactivated successfully.
Dec  2 06:23:09 np0005542249 podman[277348]: 2025-12-02 11:23:09.657536011 +0000 UTC m=+0.259147954 container remove 1b5e76c6a775885794bb16eef23da8ade59489d1a51a441335cece38aaf7681b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_rhodes, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:23:09 np0005542249 systemd[1]: libpod-conmon-1b5e76c6a775885794bb16eef23da8ade59489d1a51a441335cece38aaf7681b.scope: Deactivated successfully.
Dec  2 06:23:09 np0005542249 podman[277388]: 2025-12-02 11:23:09.929361377 +0000 UTC m=+0.077913401 container create 6a9c36514f92352285018debce997b1ee7c68c977cf93697dcd144644994530c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hamilton, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:23:09 np0005542249 systemd[1]: Started libpod-conmon-6a9c36514f92352285018debce997b1ee7c68c977cf93697dcd144644994530c.scope.
Dec  2 06:23:09 np0005542249 podman[277388]: 2025-12-02 11:23:09.897991361 +0000 UTC m=+0.046543435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:23:10 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:23:10 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1267: 321 pgs: 321 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 188 KiB/s rd, 106 MiB/s wr, 325 op/s
Dec  2 06:23:10 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a3993863b41821d71af93902440c4225f1109a89dff5fb82a265cbdba19831/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:23:10 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a3993863b41821d71af93902440c4225f1109a89dff5fb82a265cbdba19831/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:23:10 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a3993863b41821d71af93902440c4225f1109a89dff5fb82a265cbdba19831/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:23:10 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a3993863b41821d71af93902440c4225f1109a89dff5fb82a265cbdba19831/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:23:10 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a3993863b41821d71af93902440c4225f1109a89dff5fb82a265cbdba19831/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:23:10 np0005542249 podman[277388]: 2025-12-02 11:23:10.051211271 +0000 UTC m=+0.199763335 container init 6a9c36514f92352285018debce997b1ee7c68c977cf93697dcd144644994530c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hamilton, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:23:10 np0005542249 podman[277388]: 2025-12-02 11:23:10.07010572 +0000 UTC m=+0.218657734 container start 6a9c36514f92352285018debce997b1ee7c68c977cf93697dcd144644994530c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hamilton, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:23:10 np0005542249 podman[277388]: 2025-12-02 11:23:10.074723734 +0000 UTC m=+0.223275798 container attach 6a9c36514f92352285018debce997b1ee7c68c977cf93697dcd144644994530c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:23:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:23:10 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3827809322' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:23:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:23:10 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3827809322' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:23:11 np0005542249 optimistic_hamilton[277404]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:23:11 np0005542249 optimistic_hamilton[277404]: --> relative data size: 1.0
Dec  2 06:23:11 np0005542249 optimistic_hamilton[277404]: --> All data devices are unavailable
Dec  2 06:23:11 np0005542249 systemd[1]: libpod-6a9c36514f92352285018debce997b1ee7c68c977cf93697dcd144644994530c.scope: Deactivated successfully.
Dec  2 06:23:11 np0005542249 systemd[1]: libpod-6a9c36514f92352285018debce997b1ee7c68c977cf93697dcd144644994530c.scope: Consumed 1.046s CPU time.
Dec  2 06:23:11 np0005542249 podman[277388]: 2025-12-02 11:23:11.173154135 +0000 UTC m=+1.321706189 container died 6a9c36514f92352285018debce997b1ee7c68c977cf93697dcd144644994530c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:23:11 np0005542249 systemd[1]: var-lib-containers-storage-overlay-e8a3993863b41821d71af93902440c4225f1109a89dff5fb82a265cbdba19831-merged.mount: Deactivated successfully.
Dec  2 06:23:11 np0005542249 podman[277388]: 2025-12-02 11:23:11.25610612 +0000 UTC m=+1.404658144 container remove 6a9c36514f92352285018debce997b1ee7c68c977cf93697dcd144644994530c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hamilton, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:23:11 np0005542249 systemd[1]: libpod-conmon-6a9c36514f92352285018debce997b1ee7c68c977cf93697dcd144644994530c.scope: Deactivated successfully.
Dec  2 06:23:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:23:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:23:11 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2461528000' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:23:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:23:11 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2461528000' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:23:12 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1268: 321 pgs: 321 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 124 KiB/s rd, 80 MiB/s wr, 203 op/s
Dec  2 06:23:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:23:12 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/807292149' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:23:12 np0005542249 podman[277585]: 2025-12-02 11:23:12.233310203 +0000 UTC m=+0.062834844 container create 7d60ab459b47242e3f05faf9b0b1a844e6bc5374c6286ff27a4635eac3a4ae83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  2 06:23:12 np0005542249 systemd[1]: Started libpod-conmon-7d60ab459b47242e3f05faf9b0b1a844e6bc5374c6286ff27a4635eac3a4ae83.scope.
Dec  2 06:23:12 np0005542249 podman[277585]: 2025-12-02 11:23:12.20276056 +0000 UTC m=+0.032285261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:23:12 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:23:12 np0005542249 podman[277585]: 2025-12-02 11:23:12.355982249 +0000 UTC m=+0.185506900 container init 7d60ab459b47242e3f05faf9b0b1a844e6bc5374c6286ff27a4635eac3a4ae83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  2 06:23:12 np0005542249 podman[277585]: 2025-12-02 11:23:12.370984943 +0000 UTC m=+0.200509584 container start 7d60ab459b47242e3f05faf9b0b1a844e6bc5374c6286ff27a4635eac3a4ae83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:23:12 np0005542249 podman[277585]: 2025-12-02 11:23:12.376835471 +0000 UTC m=+0.206360332 container attach 7d60ab459b47242e3f05faf9b0b1a844e6bc5374c6286ff27a4635eac3a4ae83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_liskov, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:23:12 np0005542249 vibrant_liskov[277602]: 167 167
Dec  2 06:23:12 np0005542249 systemd[1]: libpod-7d60ab459b47242e3f05faf9b0b1a844e6bc5374c6286ff27a4635eac3a4ae83.scope: Deactivated successfully.
Dec  2 06:23:12 np0005542249 podman[277585]: 2025-12-02 11:23:12.383991574 +0000 UTC m=+0.213516215 container died 7d60ab459b47242e3f05faf9b0b1a844e6bc5374c6286ff27a4635eac3a4ae83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_liskov, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  2 06:23:12 np0005542249 systemd[1]: var-lib-containers-storage-overlay-fb57f7cb17411dbeba506fb21c47264d5338da669d2f583a2761038655d921f5-merged.mount: Deactivated successfully.
Dec  2 06:23:12 np0005542249 podman[277585]: 2025-12-02 11:23:12.449092208 +0000 UTC m=+0.278616849 container remove 7d60ab459b47242e3f05faf9b0b1a844e6bc5374c6286ff27a4635eac3a4ae83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  2 06:23:12 np0005542249 systemd[1]: libpod-conmon-7d60ab459b47242e3f05faf9b0b1a844e6bc5374c6286ff27a4635eac3a4ae83.scope: Deactivated successfully.
Dec  2 06:23:12 np0005542249 podman[277599]: 2025-12-02 11:23:12.462614693 +0000 UTC m=+0.168348168 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 06:23:12 np0005542249 podman[277650]: 2025-12-02 11:23:12.706832943 +0000 UTC m=+0.080799227 container create 7fd1590898ca701351b229db4a2b44a4be5c6aaef76c60fc385f65e19ff11d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rubin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:23:12 np0005542249 podman[277650]: 2025-12-02 11:23:12.669907359 +0000 UTC m=+0.043873743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:23:12 np0005542249 systemd[1]: Started libpod-conmon-7fd1590898ca701351b229db4a2b44a4be5c6aaef76c60fc385f65e19ff11d8d.scope.
Dec  2 06:23:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:23:12 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/81259565' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:23:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:23:12 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/81259565' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:23:12 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:23:12 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07c338e33631ae0db16796163ee8f625f8b17f9bb318fc0ace384c56b0db077e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:23:12 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07c338e33631ae0db16796163ee8f625f8b17f9bb318fc0ace384c56b0db077e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:23:12 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07c338e33631ae0db16796163ee8f625f8b17f9bb318fc0ace384c56b0db077e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:23:12 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07c338e33631ae0db16796163ee8f625f8b17f9bb318fc0ace384c56b0db077e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:23:12 np0005542249 podman[277650]: 2025-12-02 11:23:12.834733021 +0000 UTC m=+0.208699395 container init 7fd1590898ca701351b229db4a2b44a4be5c6aaef76c60fc385f65e19ff11d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rubin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:23:12 np0005542249 podman[277650]: 2025-12-02 11:23:12.849104098 +0000 UTC m=+0.223070412 container start 7fd1590898ca701351b229db4a2b44a4be5c6aaef76c60fc385f65e19ff11d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  2 06:23:12 np0005542249 podman[277650]: 2025-12-02 11:23:12.854059131 +0000 UTC m=+0.228025465 container attach 7fd1590898ca701351b229db4a2b44a4be5c6aaef76c60fc385f65e19ff11d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rubin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:23:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Dec  2 06:23:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Dec  2 06:23:13 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Dec  2 06:23:13 np0005542249 nova_compute[254900]: 2025-12-02 11:23:13.133 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]: {
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:    "0": [
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:        {
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "devices": [
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "/dev/loop3"
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            ],
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "lv_name": "ceph_lv0",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "lv_size": "21470642176",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "name": "ceph_lv0",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "tags": {
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.cluster_name": "ceph",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.crush_device_class": "",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.encrypted": "0",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.osd_id": "0",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.type": "block",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.vdo": "0"
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            },
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "type": "block",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "vg_name": "ceph_vg0"
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:        }
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:    ],
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:    "1": [
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:        {
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "devices": [
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "/dev/loop4"
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            ],
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "lv_name": "ceph_lv1",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "lv_size": "21470642176",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "name": "ceph_lv1",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "tags": {
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.cluster_name": "ceph",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.crush_device_class": "",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.encrypted": "0",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.osd_id": "1",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.type": "block",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.vdo": "0"
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            },
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "type": "block",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "vg_name": "ceph_vg1"
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:        }
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:    ],
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:    "2": [
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:        {
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "devices": [
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "/dev/loop5"
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            ],
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "lv_name": "ceph_lv2",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "lv_size": "21470642176",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "name": "ceph_lv2",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "tags": {
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.cluster_name": "ceph",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.crush_device_class": "",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.encrypted": "0",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.osd_id": "2",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.type": "block",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:                "ceph.vdo": "0"
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            },
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "type": "block",
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:            "vg_name": "ceph_vg2"
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:        }
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]:    ]
Dec  2 06:23:13 np0005542249 vibrant_rubin[277667]: }
Dec  2 06:23:13 np0005542249 systemd[1]: libpod-7fd1590898ca701351b229db4a2b44a4be5c6aaef76c60fc385f65e19ff11d8d.scope: Deactivated successfully.
Dec  2 06:23:13 np0005542249 podman[277650]: 2025-12-02 11:23:13.648806008 +0000 UTC m=+1.022772322 container died 7fd1590898ca701351b229db4a2b44a4be5c6aaef76c60fc385f65e19ff11d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rubin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  2 06:23:13 np0005542249 systemd[1]: var-lib-containers-storage-overlay-07c338e33631ae0db16796163ee8f625f8b17f9bb318fc0ace384c56b0db077e-merged.mount: Deactivated successfully.
Dec  2 06:23:13 np0005542249 podman[277650]: 2025-12-02 11:23:13.72939826 +0000 UTC m=+1.103364574 container remove 7fd1590898ca701351b229db4a2b44a4be5c6aaef76c60fc385f65e19ff11d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rubin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:23:13 np0005542249 systemd[1]: libpod-conmon-7fd1590898ca701351b229db4a2b44a4be5c6aaef76c60fc385f65e19ff11d8d.scope: Deactivated successfully.
Dec  2 06:23:14 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1270: 321 pgs: 321 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 109 KiB/s rd, 17 MiB/s wr, 153 op/s
Dec  2 06:23:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Dec  2 06:23:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Dec  2 06:23:14 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Dec  2 06:23:14 np0005542249 nova_compute[254900]: 2025-12-02 11:23:14.201 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764674579.199813, d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  2 06:23:14 np0005542249 nova_compute[254900]: 2025-12-02 11:23:14.203 254904 INFO nova.compute.manager [-] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] VM Stopped (Lifecycle Event)
Dec  2 06:23:14 np0005542249 nova_compute[254900]: 2025-12-02 11:23:14.224 254904 DEBUG nova.compute.manager [None req-310e5f9a-755e-4c12-85fb-140c50d327e6 - - - - - -] [instance: d9cd12f2-b6b7-41d4-a74b-2a89172f2ce7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  2 06:23:14 np0005542249 nova_compute[254900]: 2025-12-02 11:23:14.250 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:23:14 np0005542249 podman[277830]: 2025-12-02 11:23:14.567553736 +0000 UTC m=+0.054349956 container create bd0d224ae2994c6bd91ec602e0d1042d256fb83079b1632fba973ef9d0e565d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_banach, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  2 06:23:14 np0005542249 systemd[1]: Started libpod-conmon-bd0d224ae2994c6bd91ec602e0d1042d256fb83079b1632fba973ef9d0e565d9.scope.
Dec  2 06:23:14 np0005542249 podman[277830]: 2025-12-02 11:23:14.547354412 +0000 UTC m=+0.034150662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:23:14 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:23:14 np0005542249 podman[277830]: 2025-12-02 11:23:14.689550163 +0000 UTC m=+0.176346453 container init bd0d224ae2994c6bd91ec602e0d1042d256fb83079b1632fba973ef9d0e565d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_banach, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:23:14 np0005542249 podman[277830]: 2025-12-02 11:23:14.702517273 +0000 UTC m=+0.189313513 container start bd0d224ae2994c6bd91ec602e0d1042d256fb83079b1632fba973ef9d0e565d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_banach, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  2 06:23:14 np0005542249 podman[277830]: 2025-12-02 11:23:14.707514557 +0000 UTC m=+0.194310767 container attach bd0d224ae2994c6bd91ec602e0d1042d256fb83079b1632fba973ef9d0e565d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_banach, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:23:14 np0005542249 nostalgic_banach[277846]: 167 167
Dec  2 06:23:14 np0005542249 systemd[1]: libpod-bd0d224ae2994c6bd91ec602e0d1042d256fb83079b1632fba973ef9d0e565d9.scope: Deactivated successfully.
Dec  2 06:23:14 np0005542249 podman[277830]: 2025-12-02 11:23:14.714160457 +0000 UTC m=+0.200956687 container died bd0d224ae2994c6bd91ec602e0d1042d256fb83079b1632fba973ef9d0e565d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:23:14 np0005542249 systemd[1]: var-lib-containers-storage-overlay-0429de084f3e9799e5cea7ca69126db9c1852a487fb55450c3e14fd0cc711f2e-merged.mount: Deactivated successfully.
Dec  2 06:23:14 np0005542249 podman[277830]: 2025-12-02 11:23:14.775139271 +0000 UTC m=+0.261935511 container remove bd0d224ae2994c6bd91ec602e0d1042d256fb83079b1632fba973ef9d0e565d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  2 06:23:14 np0005542249 systemd[1]: libpod-conmon-bd0d224ae2994c6bd91ec602e0d1042d256fb83079b1632fba973ef9d0e565d9.scope: Deactivated successfully.
Dec  2 06:23:15 np0005542249 podman[277873]: 2025-12-02 11:23:15.025811346 +0000 UTC m=+0.073826461 container create e74bdc24ef4bada952fb9d43ced77a72927b5dac63eff842c5e20477b7821781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:23:15 np0005542249 systemd[1]: Started libpod-conmon-e74bdc24ef4bada952fb9d43ced77a72927b5dac63eff842c5e20477b7821781.scope.
Dec  2 06:23:15 np0005542249 podman[277873]: 2025-12-02 11:23:14.995983261 +0000 UTC m=+0.043998406 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:23:15 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:23:15 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71e65129b99ef5bce838d43955dc1756b46aa824508f1eb11d640c279d1c48b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:23:15 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71e65129b99ef5bce838d43955dc1756b46aa824508f1eb11d640c279d1c48b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:23:15 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71e65129b99ef5bce838d43955dc1756b46aa824508f1eb11d640c279d1c48b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:23:15 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71e65129b99ef5bce838d43955dc1756b46aa824508f1eb11d640c279d1c48b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:23:15 np0005542249 podman[277873]: 2025-12-02 11:23:15.142342256 +0000 UTC m=+0.190357391 container init e74bdc24ef4bada952fb9d43ced77a72927b5dac63eff842c5e20477b7821781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  2 06:23:15 np0005542249 podman[277873]: 2025-12-02 11:23:15.163405113 +0000 UTC m=+0.211420198 container start e74bdc24ef4bada952fb9d43ced77a72927b5dac63eff842c5e20477b7821781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:23:15 np0005542249 podman[277873]: 2025-12-02 11:23:15.167318449 +0000 UTC m=+0.215333594 container attach e74bdc24ef4bada952fb9d43ced77a72927b5dac63eff842c5e20477b7821781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 06:23:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:23:15 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1719645146' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:23:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:23:16 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2196623068' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:23:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:23:16 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2196623068' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:23:16 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1272: 321 pgs: 321 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 97 KiB/s rd, 7.2 KiB/s wr, 136 op/s
Dec  2 06:23:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Dec  2 06:23:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Dec  2 06:23:16 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Dec  2 06:23:16 np0005542249 beautiful_banzai[277889]: {
Dec  2 06:23:16 np0005542249 beautiful_banzai[277889]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:23:16 np0005542249 beautiful_banzai[277889]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:23:16 np0005542249 beautiful_banzai[277889]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:23:16 np0005542249 beautiful_banzai[277889]:        "osd_id": 0,
Dec  2 06:23:16 np0005542249 beautiful_banzai[277889]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:23:16 np0005542249 beautiful_banzai[277889]:        "type": "bluestore"
Dec  2 06:23:16 np0005542249 beautiful_banzai[277889]:    },
Dec  2 06:23:16 np0005542249 beautiful_banzai[277889]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:23:16 np0005542249 beautiful_banzai[277889]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:23:16 np0005542249 beautiful_banzai[277889]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:23:16 np0005542249 beautiful_banzai[277889]:        "osd_id": 2,
Dec  2 06:23:16 np0005542249 beautiful_banzai[277889]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:23:16 np0005542249 beautiful_banzai[277889]:        "type": "bluestore"
Dec  2 06:23:16 np0005542249 beautiful_banzai[277889]:    },
Dec  2 06:23:16 np0005542249 beautiful_banzai[277889]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:23:16 np0005542249 beautiful_banzai[277889]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:23:16 np0005542249 beautiful_banzai[277889]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:23:16 np0005542249 beautiful_banzai[277889]:        "osd_id": 1,
Dec  2 06:23:16 np0005542249 beautiful_banzai[277889]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:23:16 np0005542249 beautiful_banzai[277889]:        "type": "bluestore"
Dec  2 06:23:16 np0005542249 beautiful_banzai[277889]:    }
Dec  2 06:23:16 np0005542249 beautiful_banzai[277889]: }
Dec  2 06:23:16 np0005542249 systemd[1]: libpod-e74bdc24ef4bada952fb9d43ced77a72927b5dac63eff842c5e20477b7821781.scope: Deactivated successfully.
Dec  2 06:23:16 np0005542249 systemd[1]: libpod-e74bdc24ef4bada952fb9d43ced77a72927b5dac63eff842c5e20477b7821781.scope: Consumed 1.038s CPU time.
Dec  2 06:23:16 np0005542249 podman[277922]: 2025-12-02 11:23:16.244435024 +0000 UTC m=+0.026287559 container died e74bdc24ef4bada952fb9d43ced77a72927b5dac63eff842c5e20477b7821781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  2 06:23:16 np0005542249 systemd[1]: var-lib-containers-storage-overlay-71e65129b99ef5bce838d43955dc1756b46aa824508f1eb11d640c279d1c48b3-merged.mount: Deactivated successfully.
Dec  2 06:23:16 np0005542249 podman[277922]: 2025-12-02 11:23:16.343549306 +0000 UTC m=+0.125401801 container remove e74bdc24ef4bada952fb9d43ced77a72927b5dac63eff842c5e20477b7821781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:23:16 np0005542249 systemd[1]: libpod-conmon-e74bdc24ef4bada952fb9d43ced77a72927b5dac63eff842c5e20477b7821781.scope: Deactivated successfully.
Dec  2 06:23:16 np0005542249 podman[277923]: 2025-12-02 11:23:16.393544102 +0000 UTC m=+0.137916657 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Dec  2 06:23:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:23:16 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:23:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:23:16 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:23:16 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 6d061e43-3fc9-4fc3-af18-3bfde6429ad5 does not exist
Dec  2 06:23:16 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev cd5999fd-a2a7-4e90-a8f3-7661ae4aa280 does not exist
Dec  2 06:23:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:23:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Dec  2 06:23:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Dec  2 06:23:16 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Dec  2 06:23:17 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:23:17 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:23:18 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1275: 321 pgs: 321 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 150 KiB/s rd, 5.3 KiB/s wr, 197 op/s
Dec  2 06:23:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Dec  2 06:23:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Dec  2 06:23:18 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Dec  2 06:23:18 np0005542249 nova_compute[254900]: 2025-12-02 11:23:18.145 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:23:18 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1611414928' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:23:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Dec  2 06:23:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Dec  2 06:23:19 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Dec  2 06:23:19 np0005542249 nova_compute[254900]: 2025-12-02 11:23:19.285 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:19.838 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:23:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:19.838 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:23:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:19.838 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:23:20 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1278: 321 pgs: 321 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 164 KiB/s rd, 6.0 MiB/s wr, 218 op/s
Dec  2 06:23:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Dec  2 06:23:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Dec  2 06:23:20 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:23:21.397944) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674601397998, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1321, "num_deletes": 255, "total_data_size": 1802065, "memory_usage": 1840144, "flush_reason": "Manual Compaction"}
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674601417549, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1758970, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25031, "largest_seqno": 26351, "table_properties": {"data_size": 1752499, "index_size": 3610, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14396, "raw_average_key_size": 20, "raw_value_size": 1739316, "raw_average_value_size": 2502, "num_data_blocks": 161, "num_entries": 695, "num_filter_entries": 695, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764674512, "oldest_key_time": 1764674512, "file_creation_time": 1764674601, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 19671 microseconds, and 11369 cpu microseconds.
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:23:21.417614) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1758970 bytes OK
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:23:21.417645) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:23:21.418799) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:23:21.418814) EVENT_LOG_v1 {"time_micros": 1764674601418809, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:23:21.418838) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1795899, prev total WAL file size 1795899, number of live WAL files 2.
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:23:21.419911) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1717KB)], [56(10MB)]
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674601419995, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12457325, "oldest_snapshot_seqno": -1}
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5409 keys, 10762432 bytes, temperature: kUnknown
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674601515250, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 10762432, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10720021, "index_size": 27773, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 134530, "raw_average_key_size": 24, "raw_value_size": 10616434, "raw_average_value_size": 1962, "num_data_blocks": 1146, "num_entries": 5409, "num_filter_entries": 5409, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672515, "oldest_key_time": 0, "file_creation_time": 1764674601, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:23:21.515617) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 10762432 bytes
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:23:21.541853) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.6 rd, 112.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 10.2 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(13.2) write-amplify(6.1) OK, records in: 5933, records dropped: 524 output_compression: NoCompression
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:23:21.541921) EVENT_LOG_v1 {"time_micros": 1764674601541894, "job": 30, "event": "compaction_finished", "compaction_time_micros": 95349, "compaction_time_cpu_micros": 38572, "output_level": 6, "num_output_files": 1, "total_output_size": 10762432, "num_input_records": 5933, "num_output_records": 5409, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674601543299, "job": 30, "event": "table_file_deletion", "file_number": 58}
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674601547995, "job": 30, "event": "table_file_deletion", "file_number": 56}
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:23:21.419766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:23:21.548174) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:23:21.548184) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:23:21.548188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:23:21.548191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:23:21.548195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Dec  2 06:23:21 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Dec  2 06:23:22 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1281: 321 pgs: 321 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 41 MiB/s wr, 52 op/s
Dec  2 06:23:23 np0005542249 nova_compute[254900]: 2025-12-02 11:23:23.151 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:24 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1282: 321 pgs: 321 active+clean; 1.4 GiB data, 1.6 GiB used, 58 GiB / 60 GiB avail; 187 KiB/s rd, 58 MiB/s wr, 294 op/s
Dec  2 06:23:24 np0005542249 nova_compute[254900]: 2025-12-02 11:23:24.289 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:23:24 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/462338400' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:23:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:23:24 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/462338400' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:23:26 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1283: 321 pgs: 321 active+clean; 1.6 GiB data, 1.7 GiB used, 58 GiB / 60 GiB avail; 175 KiB/s rd, 68 MiB/s wr, 274 op/s
Dec  2 06:23:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Dec  2 06:23:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Dec  2 06:23:26 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Dec  2 06:23:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:23:26
Dec  2 06:23:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:23:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:23:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', '.mgr', 'backups', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta']
Dec  2 06:23:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:23:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:23:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:23:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:23:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:23:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:23:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:23:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:23:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:23:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:23:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:23:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:23:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:23:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:23:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:23:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:23:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:23:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:23:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Dec  2 06:23:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Dec  2 06:23:27 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Dec  2 06:23:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:23:27 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1954849096' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:23:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:23:27 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1954849096' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:23:28 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1286: 321 pgs: 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 311 active+clean; 1.9 GiB data, 2.1 GiB used, 58 GiB / 60 GiB avail; 319 KiB/s rd, 117 MiB/s wr, 511 op/s
Dec  2 06:23:28 np0005542249 nova_compute[254900]: 2025-12-02 11:23:28.198 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:23:29 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4197724835' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:23:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:23:29 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4197724835' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:23:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Dec  2 06:23:29 np0005542249 nova_compute[254900]: 2025-12-02 11:23:29.309 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Dec  2 06:23:29 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Dec  2 06:23:30 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1288: 321 pgs: 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 311 active+clean; 2.1 GiB data, 2.2 GiB used, 58 GiB / 60 GiB avail; 206 KiB/s rd, 109 MiB/s wr, 327 op/s
Dec  2 06:23:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Dec  2 06:23:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Dec  2 06:23:30 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Dec  2 06:23:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Dec  2 06:23:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Dec  2 06:23:31 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Dec  2 06:23:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:23:32 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1291: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 236 KiB/s rd, 71 MiB/s wr, 366 op/s
Dec  2 06:23:32 np0005542249 nova_compute[254900]: 2025-12-02 11:23:32.390 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:32 np0005542249 nova_compute[254900]: 2025-12-02 11:23:32.567 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:33 np0005542249 nova_compute[254900]: 2025-12-02 11:23:33.199 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:23:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2284697807' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:23:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:23:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2284697807' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:23:34 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1292: 321 pgs: 321 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 120 KiB/s rd, 25 MiB/s wr, 165 op/s
Dec  2 06:23:34 np0005542249 nova_compute[254900]: 2025-12-02 11:23:34.312 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:35 np0005542249 podman[278011]: 2025-12-02 11:23:35.032240777 +0000 UTC m=+0.103771795 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec  2 06:23:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:23:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:23:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:23:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:23:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  2 06:23:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:23:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.033687050286514295 of space, bias 1.0, pg target 10.106115085954288 quantized to 32 (current 32)
Dec  2 06:23:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:23:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.532391580386794e-05 quantized to 32 (current 32)
Dec  2 06:23:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:23:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19309890746076708 quantized to 32 (current 32)
Dec  2 06:23:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:23:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0005901217685745913 quantized to 16 (current 32)
Dec  2 06:23:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:23:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:23:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:23:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.376522107182392e-05 quantized to 32 (current 32)
Dec  2 06:23:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:23:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006270043791105033 quantized to 32 (current 32)
Dec  2 06:23:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:23:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:23:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:23:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00014753044214364783 quantized to 32 (current 32)
Dec  2 06:23:36 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1293: 321 pgs: 321 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 68 KiB/s rd, 4.2 MiB/s wr, 94 op/s
Dec  2 06:23:36 np0005542249 nova_compute[254900]: 2025-12-02 11:23:36.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:23:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:23:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Dec  2 06:23:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Dec  2 06:23:36 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Dec  2 06:23:37 np0005542249 nova_compute[254900]: 2025-12-02 11:23:37.383 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:23:37 np0005542249 nova_compute[254900]: 2025-12-02 11:23:37.384 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 06:23:37 np0005542249 nova_compute[254900]: 2025-12-02 11:23:37.385 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 06:23:37 np0005542249 nova_compute[254900]: 2025-12-02 11:23:37.408 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 06:23:38 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1295: 321 pgs: 321 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 667 KiB/s rd, 3.6 MiB/s wr, 105 op/s
Dec  2 06:23:38 np0005542249 nova_compute[254900]: 2025-12-02 11:23:38.201 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:39 np0005542249 nova_compute[254900]: 2025-12-02 11:23:39.315 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Dec  2 06:23:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Dec  2 06:23:39 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Dec  2 06:23:40 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1297: 321 pgs: 321 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.0 MiB/s wr, 119 op/s
Dec  2 06:23:40 np0005542249 nova_compute[254900]: 2025-12-02 11:23:40.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:23:40 np0005542249 nova_compute[254900]: 2025-12-02 11:23:40.383 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 06:23:41 np0005542249 nova_compute[254900]: 2025-12-02 11:23:41.378 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:23:41 np0005542249 nova_compute[254900]: 2025-12-02 11:23:41.404 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:23:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Dec  2 06:23:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Dec  2 06:23:41 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Dec  2 06:23:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:23:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3828662603' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:23:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:23:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Dec  2 06:23:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Dec  2 06:23:41 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Dec  2 06:23:42 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1300: 321 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 314 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.5 MiB/s wr, 79 op/s
Dec  2 06:23:42 np0005542249 nova_compute[254900]: 2025-12-02 11:23:42.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:23:42 np0005542249 nova_compute[254900]: 2025-12-02 11:23:42.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:23:43 np0005542249 podman[278033]: 2025-12-02 11:23:43.001997299 +0000 UTC m=+0.081535533 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:23:43 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:43.126 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:23:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:25:d0:a1:75:ed'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:23:43 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:43.127 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 06:23:43 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:43.129 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:23:43 np0005542249 nova_compute[254900]: 2025-12-02 11:23:43.175 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:43 np0005542249 nova_compute[254900]: 2025-12-02 11:23:43.202 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:43 np0005542249 nova_compute[254900]: 2025-12-02 11:23:43.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:23:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:23:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1774802419' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:23:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:23:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1774802419' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:23:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Dec  2 06:23:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Dec  2 06:23:43 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Dec  2 06:23:44 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1302: 321 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 314 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 1.2 MiB/s rd, 5.2 MiB/s wr, 94 op/s
Dec  2 06:23:44 np0005542249 nova_compute[254900]: 2025-12-02 11:23:44.355 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:44 np0005542249 nova_compute[254900]: 2025-12-02 11:23:44.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:23:44 np0005542249 nova_compute[254900]: 2025-12-02 11:23:44.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:23:44 np0005542249 nova_compute[254900]: 2025-12-02 11:23:44.413 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:23:44 np0005542249 nova_compute[254900]: 2025-12-02 11:23:44.414 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:23:44 np0005542249 nova_compute[254900]: 2025-12-02 11:23:44.414 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:23:44 np0005542249 nova_compute[254900]: 2025-12-02 11:23:44.414 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:23:44 np0005542249 nova_compute[254900]: 2025-12-02 11:23:44.415 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:23:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:23:44 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2364633654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:23:44 np0005542249 nova_compute[254900]: 2025-12-02 11:23:44.878 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:23:45 np0005542249 nova_compute[254900]: 2025-12-02 11:23:45.112 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:23:45 np0005542249 nova_compute[254900]: 2025-12-02 11:23:45.114 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4611MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:23:45 np0005542249 nova_compute[254900]: 2025-12-02 11:23:45.114 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:23:45 np0005542249 nova_compute[254900]: 2025-12-02 11:23:45.115 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:23:45 np0005542249 nova_compute[254900]: 2025-12-02 11:23:45.180 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:23:45 np0005542249 nova_compute[254900]: 2025-12-02 11:23:45.181 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:23:45 np0005542249 nova_compute[254900]: 2025-12-02 11:23:45.199 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:23:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:23:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3211990526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:23:45 np0005542249 nova_compute[254900]: 2025-12-02 11:23:45.658 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:23:45 np0005542249 nova_compute[254900]: 2025-12-02 11:23:45.666 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:23:45 np0005542249 nova_compute[254900]: 2025-12-02 11:23:45.685 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:23:45 np0005542249 nova_compute[254900]: 2025-12-02 11:23:45.721 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 06:23:45 np0005542249 nova_compute[254900]: 2025-12-02 11:23:45.721 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:23:46 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1303: 321 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 314 active+clean; 2.2 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.5 MiB/s wr, 128 op/s
Dec  2 06:23:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:23:47 np0005542249 podman[278104]: 2025-12-02 11:23:47.045868952 +0000 UTC m=+0.118687998 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  2 06:23:48 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1304: 321 pgs: 321 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.3 MiB/s rd, 6.5 MiB/s wr, 149 op/s
Dec  2 06:23:48 np0005542249 nova_compute[254900]: 2025-12-02 11:23:48.205 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:49 np0005542249 nova_compute[254900]: 2025-12-02 11:23:49.358 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:50 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1305: 321 pgs: 321 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.3 MiB/s wr, 115 op/s
Dec  2 06:23:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:23:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4082000702' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:23:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:23:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4082000702' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:23:51 np0005542249 nova_compute[254900]: 2025-12-02 11:23:51.583 254904 DEBUG oslo_concurrency.lockutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Acquiring lock "2c9c5605-aec2-4e63-a5b5-49c57c3e9e17" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:23:51 np0005542249 nova_compute[254900]: 2025-12-02 11:23:51.584 254904 DEBUG oslo_concurrency.lockutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Lock "2c9c5605-aec2-4e63-a5b5-49c57c3e9e17" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:23:51 np0005542249 nova_compute[254900]: 2025-12-02 11:23:51.603 254904 DEBUG nova.compute.manager [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 06:23:51 np0005542249 nova_compute[254900]: 2025-12-02 11:23:51.689 254904 DEBUG oslo_concurrency.lockutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:23:51 np0005542249 nova_compute[254900]: 2025-12-02 11:23:51.690 254904 DEBUG oslo_concurrency.lockutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:23:51 np0005542249 nova_compute[254900]: 2025-12-02 11:23:51.698 254904 DEBUG nova.virt.hardware [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 06:23:51 np0005542249 nova_compute[254900]: 2025-12-02 11:23:51.698 254904 INFO nova.compute.claims [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 06:23:51 np0005542249 nova_compute[254900]: 2025-12-02 11:23:51.828 254904 DEBUG oslo_concurrency.processutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:23:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:23:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Dec  2 06:23:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Dec  2 06:23:51 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Dec  2 06:23:52 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1307: 321 pgs: 321 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 64 op/s
Dec  2 06:23:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:23:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3017484597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.350 254904 DEBUG oslo_concurrency.processutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.360 254904 DEBUG nova.compute.provider_tree [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.384 254904 DEBUG nova.scheduler.client.report [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.426 254904 DEBUG oslo_concurrency.lockutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.427 254904 DEBUG nova.compute.manager [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.490 254904 DEBUG nova.compute.manager [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.491 254904 DEBUG nova.network.neutron [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.522 254904 INFO nova.virt.libvirt.driver [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.582 254904 DEBUG nova.compute.manager [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.660 254904 INFO nova.virt.block_device [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Booting with volume 98439f7a-099a-4932-bf77-7027059660c5 at /dev/vdb#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.764 254904 DEBUG nova.policy [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '409e3a0b9ed441f2bb32f5f1fd0bb00a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e06121cb1a114bf997558a008929f199', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.828 254904 DEBUG os_brick.utils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.829 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.850 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.850 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[95b756b1-6350-4f4a-951a-7bd62a460504]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.852 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.867 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.867 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[61d8157d-7ada-4b61-a024-396784edadee]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.871 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.886 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.887 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[a31e2b10-57b0-415e-925d-e63f08ed0822]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.888 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[5397bcf0-1a39-47ea-b0fb-bbdd342f380c]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.889 254904 DEBUG oslo_concurrency.processutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.920 254904 DEBUG oslo_concurrency.processutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.925 254904 DEBUG os_brick.initiator.connectors.lightos [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.926 254904 DEBUG os_brick.initiator.connectors.lightos [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.926 254904 DEBUG os_brick.initiator.connectors.lightos [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.927 254904 DEBUG os_brick.utils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] <== get_connector_properties: return (98ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:23:52 np0005542249 nova_compute[254900]: 2025-12-02 11:23:52.928 254904 DEBUG nova.virt.block_device [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Updating existing volume attachment record: 28589fa5-7104-418a-b0c5-8ee3a109bf52 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:23:53 np0005542249 nova_compute[254900]: 2025-12-02 11:23:53.208 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:23:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3362819211' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:23:53 np0005542249 nova_compute[254900]: 2025-12-02 11:23:53.743 254904 DEBUG nova.network.neutron [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Successfully created port: ed87e718-f390-41da-bc93-530936610716 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 06:23:54 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1308: 321 pgs: 321 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.9 MiB/s wr, 55 op/s
Dec  2 06:23:54 np0005542249 nova_compute[254900]: 2025-12-02 11:23:54.037 254904 DEBUG nova.compute.manager [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 06:23:54 np0005542249 nova_compute[254900]: 2025-12-02 11:23:54.039 254904 DEBUG nova.virt.libvirt.driver [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 06:23:54 np0005542249 nova_compute[254900]: 2025-12-02 11:23:54.039 254904 INFO nova.virt.libvirt.driver [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Creating image(s)#033[00m
Dec  2 06:23:54 np0005542249 nova_compute[254900]: 2025-12-02 11:23:54.067 254904 DEBUG nova.storage.rbd_utils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] rbd image 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:23:54 np0005542249 nova_compute[254900]: 2025-12-02 11:23:54.090 254904 DEBUG nova.storage.rbd_utils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] rbd image 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:23:54 np0005542249 nova_compute[254900]: 2025-12-02 11:23:54.117 254904 DEBUG nova.storage.rbd_utils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] rbd image 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:23:54 np0005542249 nova_compute[254900]: 2025-12-02 11:23:54.121 254904 DEBUG oslo_concurrency.processutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:23:54 np0005542249 nova_compute[254900]: 2025-12-02 11:23:54.195 254904 DEBUG oslo_concurrency.processutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:23:54 np0005542249 nova_compute[254900]: 2025-12-02 11:23:54.198 254904 DEBUG oslo_concurrency.lockutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Acquiring lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:23:54 np0005542249 nova_compute[254900]: 2025-12-02 11:23:54.199 254904 DEBUG oslo_concurrency.lockutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:23:54 np0005542249 nova_compute[254900]: 2025-12-02 11:23:54.199 254904 DEBUG oslo_concurrency.lockutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:23:54 np0005542249 nova_compute[254900]: 2025-12-02 11:23:54.229 254904 DEBUG nova.storage.rbd_utils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] rbd image 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:23:54 np0005542249 nova_compute[254900]: 2025-12-02 11:23:54.234 254904 DEBUG oslo_concurrency.processutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:23:54 np0005542249 nova_compute[254900]: 2025-12-02 11:23:54.360 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:54 np0005542249 nova_compute[254900]: 2025-12-02 11:23:54.524 254904 DEBUG oslo_concurrency.processutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:23:54 np0005542249 nova_compute[254900]: 2025-12-02 11:23:54.596 254904 DEBUG nova.storage.rbd_utils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] resizing rbd image 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  2 06:23:54 np0005542249 nova_compute[254900]: 2025-12-02 11:23:54.700 254904 DEBUG nova.objects.instance [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Lazy-loading 'migration_context' on Instance uuid 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:23:54 np0005542249 nova_compute[254900]: 2025-12-02 11:23:54.720 254904 DEBUG nova.virt.libvirt.driver [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  2 06:23:54 np0005542249 nova_compute[254900]: 2025-12-02 11:23:54.721 254904 DEBUG nova.virt.libvirt.driver [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Ensure instance console log exists: /var/lib/nova/instances/2c9c5605-aec2-4e63-a5b5-49c57c3e9e17/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 06:23:54 np0005542249 nova_compute[254900]: 2025-12-02 11:23:54.722 254904 DEBUG oslo_concurrency.lockutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:23:54 np0005542249 nova_compute[254900]: 2025-12-02 11:23:54.723 254904 DEBUG oslo_concurrency.lockutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:23:54 np0005542249 nova_compute[254900]: 2025-12-02 11:23:54.724 254904 DEBUG oslo_concurrency.lockutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:23:54 np0005542249 nova_compute[254900]: 2025-12-02 11:23:54.778 254904 DEBUG nova.network.neutron [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Successfully updated port: ed87e718-f390-41da-bc93-530936610716 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 06:23:54 np0005542249 nova_compute[254900]: 2025-12-02 11:23:54.793 254904 DEBUG oslo_concurrency.lockutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Acquiring lock "refresh_cache-2c9c5605-aec2-4e63-a5b5-49c57c3e9e17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:23:54 np0005542249 nova_compute[254900]: 2025-12-02 11:23:54.794 254904 DEBUG oslo_concurrency.lockutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Acquired lock "refresh_cache-2c9c5605-aec2-4e63-a5b5-49c57c3e9e17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:23:54 np0005542249 nova_compute[254900]: 2025-12-02 11:23:54.795 254904 DEBUG nova.network.neutron [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 06:23:55 np0005542249 nova_compute[254900]: 2025-12-02 11:23:55.003 254904 DEBUG nova.network.neutron [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 06:23:55 np0005542249 nova_compute[254900]: 2025-12-02 11:23:55.017 254904 DEBUG nova.compute.manager [req-febf692e-d14d-4d5f-a70b-0a752f21449f req-cb25f094-e2da-4dae-8d82-cd77b6d83740 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Received event network-changed-ed87e718-f390-41da-bc93-530936610716 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:23:55 np0005542249 nova_compute[254900]: 2025-12-02 11:23:55.018 254904 DEBUG nova.compute.manager [req-febf692e-d14d-4d5f-a70b-0a752f21449f req-cb25f094-e2da-4dae-8d82-cd77b6d83740 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Refreshing instance network info cache due to event network-changed-ed87e718-f390-41da-bc93-530936610716. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:23:55 np0005542249 nova_compute[254900]: 2025-12-02 11:23:55.018 254904 DEBUG oslo_concurrency.lockutils [req-febf692e-d14d-4d5f-a70b-0a752f21449f req-cb25f094-e2da-4dae-8d82-cd77b6d83740 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-2c9c5605-aec2-4e63-a5b5-49c57c3e9e17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:23:56 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1309: 321 pgs: 321 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 794 KiB/s rd, 336 KiB/s wr, 21 op/s
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.311 254904 DEBUG nova.network.neutron [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Updating instance_info_cache with network_info: [{"id": "ed87e718-f390-41da-bc93-530936610716", "address": "fa:16:3e:25:b1:91", "network": {"id": "d3026dba-17e4-448f-b1b1-951c4cf69278", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-452915621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e06121cb1a114bf997558a008929f199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped87e718-f3", "ovs_interfaceid": "ed87e718-f390-41da-bc93-530936610716", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.331 254904 DEBUG oslo_concurrency.lockutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Releasing lock "refresh_cache-2c9c5605-aec2-4e63-a5b5-49c57c3e9e17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.332 254904 DEBUG nova.compute.manager [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Instance network_info: |[{"id": "ed87e718-f390-41da-bc93-530936610716", "address": "fa:16:3e:25:b1:91", "network": {"id": "d3026dba-17e4-448f-b1b1-951c4cf69278", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-452915621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e06121cb1a114bf997558a008929f199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped87e718-f3", "ovs_interfaceid": "ed87e718-f390-41da-bc93-530936610716", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.333 254904 DEBUG oslo_concurrency.lockutils [req-febf692e-d14d-4d5f-a70b-0a752f21449f req-cb25f094-e2da-4dae-8d82-cd77b6d83740 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-2c9c5605-aec2-4e63-a5b5-49c57c3e9e17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.334 254904 DEBUG nova.network.neutron [req-febf692e-d14d-4d5f-a70b-0a752f21449f req-cb25f094-e2da-4dae-8d82-cd77b6d83740 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Refreshing network info cache for port ed87e718-f390-41da-bc93-530936610716 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.339 254904 DEBUG nova.virt.libvirt.driver [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Start _get_guest_xml network_info=[{"id": "ed87e718-f390-41da-bc93-530936610716", "address": "fa:16:3e:25:b1:91", "network": {"id": "d3026dba-17e4-448f-b1b1-951c4cf69278", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-452915621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e06121cb1a114bf997558a008929f199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped87e718-f3", "ovs_interfaceid": "ed87e718-f390-41da-bc93-530936610716", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'image_id': '5a40f66c-ab43-47dd-9880-e59f9fa2c60e'}], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vdb', 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-98439f7a-099a-4932-bf77-7027059660c5', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '98439f7a-099a-4932-bf77-7027059660c5', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '2c9c5605-aec2-4e63-a5b5-49c57c3e9e17', 'attached_at': '', 'detached_at': '', 'volume_id': '98439f7a-099a-4932-bf77-7027059660c5', 'serial': '98439f7a-099a-4932-bf77-7027059660c5'}, 'boot_index': -1, 'device_type': 'disk', 'guest_format': None, 'attachment_id': '28589fa5-7104-418a-b0c5-8ee3a109bf52', 'delete_on_termination': False, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.347 254904 WARNING nova.virt.libvirt.driver [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.353 254904 DEBUG nova.virt.libvirt.host [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.354 254904 DEBUG nova.virt.libvirt.host [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:23:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:23:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.369 254904 DEBUG nova.virt.libvirt.host [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.370 254904 DEBUG nova.virt.libvirt.host [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.371 254904 DEBUG nova.virt.libvirt.driver [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.371 254904 DEBUG nova.virt.hardware [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.372 254904 DEBUG nova.virt.hardware [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.373 254904 DEBUG nova.virt.hardware [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.373 254904 DEBUG nova.virt.hardware [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.374 254904 DEBUG nova.virt.hardware [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.374 254904 DEBUG nova.virt.hardware [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.375 254904 DEBUG nova.virt.hardware [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.375 254904 DEBUG nova.virt.hardware [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.376 254904 DEBUG nova.virt.hardware [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.376 254904 DEBUG nova.virt.hardware [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.377 254904 DEBUG nova.virt.hardware [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.381 254904 DEBUG oslo_concurrency.processutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:23:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:23:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:23:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:23:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:23:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:23:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4066031321' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.893 254904 DEBUG oslo_concurrency.processutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.914 254904 DEBUG nova.storage.rbd_utils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] rbd image 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:23:56 np0005542249 nova_compute[254900]: 2025-12-02 11:23:56.920 254904 DEBUG oslo_concurrency.processutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:23:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:23:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:23:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3957122321' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.404 254904 DEBUG oslo_concurrency.processutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.435 254904 DEBUG nova.virt.libvirt.vif [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:23:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-instance-767987115',display_name='tempest-instance-767987115',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-767987115',id=12,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWGLTuvFZjkL4j1zSwZMrly3chy8nNhv7htjYEv0G4fMU266qOmBBXChmqhhQQLxUCXGceS9XI0wgfnx+yPPJFk7UfC/I5jgSmHwrgK4d3M0QxwS1FpcG3w3fuR+l3Vrg==',key_name='tempest-keypair-1194168157',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e06121cb1a114bf997558a008929f199',ramdisk_id='',reservation_id='r-4d40kbj8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image
_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-575462623',owner_user_name='tempest-VolumesBackupsTest-575462623-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:23:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='409e3a0b9ed441f2bb32f5f1fd0bb00a',uuid=2c9c5605-aec2-4e63-a5b5-49c57c3e9e17,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ed87e718-f390-41da-bc93-530936610716", "address": "fa:16:3e:25:b1:91", "network": {"id": "d3026dba-17e4-448f-b1b1-951c4cf69278", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-452915621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e06121cb1a114bf997558a008929f199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped87e718-f3", "ovs_interfaceid": "ed87e718-f390-41da-bc93-530936610716", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.436 254904 DEBUG nova.network.os_vif_util [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Converting VIF {"id": "ed87e718-f390-41da-bc93-530936610716", "address": "fa:16:3e:25:b1:91", "network": {"id": "d3026dba-17e4-448f-b1b1-951c4cf69278", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-452915621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e06121cb1a114bf997558a008929f199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped87e718-f3", "ovs_interfaceid": "ed87e718-f390-41da-bc93-530936610716", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.437 254904 DEBUG nova.network.os_vif_util [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:b1:91,bridge_name='br-int',has_traffic_filtering=True,id=ed87e718-f390-41da-bc93-530936610716,network=Network(d3026dba-17e4-448f-b1b1-951c4cf69278),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped87e718-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.438 254904 DEBUG nova.objects.instance [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.454 254904 DEBUG nova.virt.libvirt.driver [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:23:57 np0005542249 nova_compute[254900]:  <uuid>2c9c5605-aec2-4e63-a5b5-49c57c3e9e17</uuid>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:  <name>instance-0000000c</name>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <nova:name>tempest-instance-767987115</nova:name>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:23:56</nova:creationTime>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:23:57 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:        <nova:user uuid="409e3a0b9ed441f2bb32f5f1fd0bb00a">tempest-VolumesBackupsTest-575462623-project-member</nova:user>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:        <nova:project uuid="e06121cb1a114bf997558a008929f199">tempest-VolumesBackupsTest-575462623</nova:project>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <nova:root type="image" uuid="5a40f66c-ab43-47dd-9880-e59f9fa2c60e"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:        <nova:port uuid="ed87e718-f390-41da-bc93-530936610716">
Dec  2 06:23:57 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <entry name="serial">2c9c5605-aec2-4e63-a5b5-49c57c3e9e17</entry>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <entry name="uuid">2c9c5605-aec2-4e63-a5b5-49c57c3e9e17</entry>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/2c9c5605-aec2-4e63-a5b5-49c57c3e9e17_disk">
Dec  2 06:23:57 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:23:57 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/2c9c5605-aec2-4e63-a5b5-49c57c3e9e17_disk.config">
Dec  2 06:23:57 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:23:57 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="volumes/volume-98439f7a-099a-4932-bf77-7027059660c5">
Dec  2 06:23:57 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:23:57 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <target dev="vdb" bus="virtio"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <serial>98439f7a-099a-4932-bf77-7027059660c5</serial>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:25:b1:91"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <target dev="taped87e718-f3"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/2c9c5605-aec2-4e63-a5b5-49c57c3e9e17/console.log" append="off"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:23:57 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:23:57 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:23:57 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:23:57 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.456 254904 DEBUG nova.compute.manager [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Preparing to wait for external event network-vif-plugged-ed87e718-f390-41da-bc93-530936610716 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.456 254904 DEBUG oslo_concurrency.lockutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Acquiring lock "2c9c5605-aec2-4e63-a5b5-49c57c3e9e17-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.457 254904 DEBUG oslo_concurrency.lockutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Lock "2c9c5605-aec2-4e63-a5b5-49c57c3e9e17-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.457 254904 DEBUG oslo_concurrency.lockutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Lock "2c9c5605-aec2-4e63-a5b5-49c57c3e9e17-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.458 254904 DEBUG nova.virt.libvirt.vif [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:23:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-instance-767987115',display_name='tempest-instance-767987115',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-767987115',id=12,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWGLTuvFZjkL4j1zSwZMrly3chy8nNhv7htjYEv0G4fMU266qOmBBXChmqhhQQLxUCXGceS9XI0wgfnx+yPPJFk7UfC/I5jgSmHwrgK4d3M0QxwS1FpcG3w3fuR+l3Vrg==',key_name='tempest-keypair-1194168157',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e06121cb1a114bf997558a008929f199',ramdisk_id='',reservation_id='r-4d40kbj8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-575462623',owner_user_name='tempest-VolumesBackupsTest-575462623-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:23:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='409e3a0b9ed441f2bb32f5f1fd0bb00a',uuid=2c9c5605-aec2-4e63-a5b5-49c57c3e9e17,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ed87e718-f390-41da-bc93-530936610716", "address": "fa:16:3e:25:b1:91", "network": {"id": "d3026dba-17e4-448f-b1b1-951c4cf69278", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-452915621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e06121cb1a114bf997558a008929f199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped87e718-f3", "ovs_interfaceid": "ed87e718-f390-41da-bc93-530936610716", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.458 254904 DEBUG nova.network.os_vif_util [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Converting VIF {"id": "ed87e718-f390-41da-bc93-530936610716", "address": "fa:16:3e:25:b1:91", "network": {"id": "d3026dba-17e4-448f-b1b1-951c4cf69278", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-452915621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e06121cb1a114bf997558a008929f199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped87e718-f3", "ovs_interfaceid": "ed87e718-f390-41da-bc93-530936610716", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.459 254904 DEBUG nova.network.os_vif_util [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:b1:91,bridge_name='br-int',has_traffic_filtering=True,id=ed87e718-f390-41da-bc93-530936610716,network=Network(d3026dba-17e4-448f-b1b1-951c4cf69278),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped87e718-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.460 254904 DEBUG os_vif [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:b1:91,bridge_name='br-int',has_traffic_filtering=True,id=ed87e718-f390-41da-bc93-530936610716,network=Network(d3026dba-17e4-448f-b1b1-951c4cf69278),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped87e718-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.460 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.461 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.461 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.466 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.466 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped87e718-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.467 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=taped87e718-f3, col_values=(('external_ids', {'iface-id': 'ed87e718-f390-41da-bc93-530936610716', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:25:b1:91', 'vm-uuid': '2c9c5605-aec2-4e63-a5b5-49c57c3e9e17'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.468 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.470 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:23:57 np0005542249 NetworkManager[48987]: <info>  [1764674637.4704] manager: (taped87e718-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.481 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.482 254904 INFO os_vif [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:b1:91,bridge_name='br-int',has_traffic_filtering=True,id=ed87e718-f390-41da-bc93-530936610716,network=Network(d3026dba-17e4-448f-b1b1-951c4cf69278),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped87e718-f3')#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.540 254904 DEBUG nova.virt.libvirt.driver [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.541 254904 DEBUG nova.virt.libvirt.driver [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.541 254904 DEBUG nova.virt.libvirt.driver [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.541 254904 DEBUG nova.virt.libvirt.driver [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] No VIF found with MAC fa:16:3e:25:b1:91, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.541 254904 INFO nova.virt.libvirt.driver [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Using config drive#033[00m
Dec  2 06:23:57 np0005542249 nova_compute[254900]: 2025-12-02 11:23:57.559 254904 DEBUG nova.storage.rbd_utils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] rbd image 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:23:58 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1310: 321 pgs: 321 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 7.9 KiB/s rd, 2.0 MiB/s wr, 15 op/s
Dec  2 06:23:58 np0005542249 nova_compute[254900]: 2025-12-02 11:23:58.097 254904 INFO nova.virt.libvirt.driver [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Creating config drive at /var/lib/nova/instances/2c9c5605-aec2-4e63-a5b5-49c57c3e9e17/disk.config#033[00m
Dec  2 06:23:58 np0005542249 nova_compute[254900]: 2025-12-02 11:23:58.113 254904 DEBUG oslo_concurrency.processutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2c9c5605-aec2-4e63-a5b5-49c57c3e9e17/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0ocbe3ct execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:23:58 np0005542249 nova_compute[254900]: 2025-12-02 11:23:58.145 254904 DEBUG nova.network.neutron [req-febf692e-d14d-4d5f-a70b-0a752f21449f req-cb25f094-e2da-4dae-8d82-cd77b6d83740 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Updated VIF entry in instance network info cache for port ed87e718-f390-41da-bc93-530936610716. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:23:58 np0005542249 nova_compute[254900]: 2025-12-02 11:23:58.147 254904 DEBUG nova.network.neutron [req-febf692e-d14d-4d5f-a70b-0a752f21449f req-cb25f094-e2da-4dae-8d82-cd77b6d83740 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Updating instance_info_cache with network_info: [{"id": "ed87e718-f390-41da-bc93-530936610716", "address": "fa:16:3e:25:b1:91", "network": {"id": "d3026dba-17e4-448f-b1b1-951c4cf69278", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-452915621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e06121cb1a114bf997558a008929f199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped87e718-f3", "ovs_interfaceid": "ed87e718-f390-41da-bc93-530936610716", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:23:58 np0005542249 nova_compute[254900]: 2025-12-02 11:23:58.166 254904 DEBUG oslo_concurrency.lockutils [req-febf692e-d14d-4d5f-a70b-0a752f21449f req-cb25f094-e2da-4dae-8d82-cd77b6d83740 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-2c9c5605-aec2-4e63-a5b5-49c57c3e9e17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:23:58 np0005542249 nova_compute[254900]: 2025-12-02 11:23:58.237 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:58 np0005542249 nova_compute[254900]: 2025-12-02 11:23:58.260 254904 DEBUG oslo_concurrency.processutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2c9c5605-aec2-4e63-a5b5-49c57c3e9e17/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0ocbe3ct" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:23:58 np0005542249 nova_compute[254900]: 2025-12-02 11:23:58.298 254904 DEBUG nova.storage.rbd_utils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] rbd image 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:23:58 np0005542249 nova_compute[254900]: 2025-12-02 11:23:58.304 254904 DEBUG oslo_concurrency.processutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2c9c5605-aec2-4e63-a5b5-49c57c3e9e17/disk.config 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:23:58 np0005542249 nova_compute[254900]: 2025-12-02 11:23:58.490 254904 DEBUG oslo_concurrency.processutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2c9c5605-aec2-4e63-a5b5-49c57c3e9e17/disk.config 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:23:58 np0005542249 nova_compute[254900]: 2025-12-02 11:23:58.492 254904 INFO nova.virt.libvirt.driver [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Deleting local config drive /var/lib/nova/instances/2c9c5605-aec2-4e63-a5b5-49c57c3e9e17/disk.config because it was imported into RBD.#033[00m
Dec  2 06:23:58 np0005542249 kernel: taped87e718-f3: entered promiscuous mode
Dec  2 06:23:58 np0005542249 NetworkManager[48987]: <info>  [1764674638.5753] manager: (taped87e718-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/73)
Dec  2 06:23:58 np0005542249 nova_compute[254900]: 2025-12-02 11:23:58.585 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:58 np0005542249 ovn_controller[153849]: 2025-12-02T11:23:58Z|00126|binding|INFO|Claiming lport ed87e718-f390-41da-bc93-530936610716 for this chassis.
Dec  2 06:23:58 np0005542249 ovn_controller[153849]: 2025-12-02T11:23:58Z|00127|binding|INFO|ed87e718-f390-41da-bc93-530936610716: Claiming fa:16:3e:25:b1:91 10.100.0.11
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.601 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:b1:91 10.100.0.11'], port_security=['fa:16:3e:25:b1:91 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '2c9c5605-aec2-4e63-a5b5-49c57c3e9e17', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3026dba-17e4-448f-b1b1-951c4cf69278', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e06121cb1a114bf997558a008929f199', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd625b3d1-36fb-4d71-9562-8a233f7374db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f3395d5a-df7b-4d1a-a507-470df61cdd58, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=ed87e718-f390-41da-bc93-530936610716) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.604 163757 INFO neutron.agent.ovn.metadata.agent [-] Port ed87e718-f390-41da-bc93-530936610716 in datapath d3026dba-17e4-448f-b1b1-951c4cf69278 bound to our chassis#033[00m
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.607 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d3026dba-17e4-448f-b1b1-951c4cf69278#033[00m
Dec  2 06:23:58 np0005542249 systemd-machined[216222]: New machine qemu-12-instance-0000000c.
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.628 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[b5ba2cc3-df0e-4a97-b9cd-4a6e19c6242b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.631 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd3026dba-11 in ovnmeta-d3026dba-17e4-448f-b1b1-951c4cf69278 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.633 262398 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd3026dba-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.634 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[75b38347-1a38-4baa-9cbc-b6c07c9712c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.636 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[10d7391e-231d-4118-991c-5d9f8b4b5b6e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.655 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[04a0f4fb-0313-4378-be44-3a8d29b40e45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:23:58 np0005542249 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.688 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[67dfc13d-c340-418b-84ab-c0a411ec17ba]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:23:58 np0005542249 nova_compute[254900]: 2025-12-02 11:23:58.690 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:58 np0005542249 ovn_controller[153849]: 2025-12-02T11:23:58Z|00128|binding|INFO|Setting lport ed87e718-f390-41da-bc93-530936610716 ovn-installed in OVS
Dec  2 06:23:58 np0005542249 ovn_controller[153849]: 2025-12-02T11:23:58Z|00129|binding|INFO|Setting lport ed87e718-f390-41da-bc93-530936610716 up in Southbound
Dec  2 06:23:58 np0005542249 nova_compute[254900]: 2025-12-02 11:23:58.699 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:58 np0005542249 systemd-udevd[278459]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.731 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[f9581e56-efd3-472c-a43e-a9f937f1a009]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:23:58 np0005542249 NetworkManager[48987]: <info>  [1764674638.7405] manager: (tapd3026dba-10): new Veth device (/org/freedesktop/NetworkManager/Devices/74)
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.740 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[e24130a8-8c3c-45e3-b6e6-1ccf97445bdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:23:58 np0005542249 systemd-udevd[278466]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:23:58 np0005542249 NetworkManager[48987]: <info>  [1764674638.7469] device (taped87e718-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:23:58 np0005542249 NetworkManager[48987]: <info>  [1764674638.7483] device (taped87e718-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.776 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[048d52b9-9019-4ae8-99d5-14e52afdfbcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.780 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[53a408c0-827b-4049-9218-cef40e31130e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:23:58 np0005542249 NetworkManager[48987]: <info>  [1764674638.8061] device (tapd3026dba-10): carrier: link connected
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.812 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[5d3af067-967b-489d-a3dc-7fa883d7b044]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.827 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[7dcb70a6-854e-4ad0-87d5-8855a8bdb1a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3026dba-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:21:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 483162, 'reachable_time': 34718, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278487, 'error': None, 'target': 'ovnmeta-d3026dba-17e4-448f-b1b1-951c4cf69278', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.844 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[da5b5e48-5021-43c9-85af-5f8a9547fab6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe97:21e7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 483162, 'tstamp': 483162}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278488, 'error': None, 'target': 'ovnmeta-d3026dba-17e4-448f-b1b1-951c4cf69278', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.865 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[1c699674-a37b-4362-8387-89914bafb0f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3026dba-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:21:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 483162, 'reachable_time': 34718, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278489, 'error': None, 'target': 'ovnmeta-d3026dba-17e4-448f-b1b1-951c4cf69278', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.900 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[e5203f97-4200-4207-951e-b1742dbbbd3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.969 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[55ed7356-33e6-444a-ac11-c9f1b4cbf7b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.973 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3026dba-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.973 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.974 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3026dba-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:23:58 np0005542249 nova_compute[254900]: 2025-12-02 11:23:58.977 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:58 np0005542249 kernel: tapd3026dba-10: entered promiscuous mode
Dec  2 06:23:58 np0005542249 NetworkManager[48987]: <info>  [1764674638.9784] manager: (tapd3026dba-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Dec  2 06:23:58 np0005542249 nova_compute[254900]: 2025-12-02 11:23:58.979 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.985 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd3026dba-10, col_values=(('external_ids', {'iface-id': 'e0edef65-5991-4952-ac57-244aecf03cee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:23:58 np0005542249 nova_compute[254900]: 2025-12-02 11:23:58.987 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:58 np0005542249 ovn_controller[153849]: 2025-12-02T11:23:58Z|00130|binding|INFO|Releasing lport e0edef65-5991-4952-ac57-244aecf03cee from this chassis (sb_readonly=0)
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.990 163757 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d3026dba-17e4-448f-b1b1-951c4cf69278.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d3026dba-17e4-448f-b1b1-951c4cf69278.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.991 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[6727ec9e-3ef8-4f3d-9114-f6dad8ac121e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.993 163757 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: global
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]:    log         /dev/log local0 debug
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]:    log-tag     haproxy-metadata-proxy-d3026dba-17e4-448f-b1b1-951c4cf69278
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]:    user        root
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]:    group       root
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]:    maxconn     1024
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]:    pidfile     /var/lib/neutron/external/pids/d3026dba-17e4-448f-b1b1-951c4cf69278.pid.haproxy
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]:    daemon
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: defaults
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]:    log global
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]:    mode http
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]:    option httplog
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]:    option dontlognull
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]:    option http-server-close
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]:    option forwardfor
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]:    retries                 3
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]:    timeout http-request    30s
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]:    timeout connect         30s
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]:    timeout client          32s
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]:    timeout server          32s
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]:    timeout http-keep-alive 30s
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: listen listener
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]:    bind 169.254.169.254:80
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]:    http-request add-header X-OVN-Network-ID d3026dba-17e4-448f-b1b1-951c4cf69278
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 06:23:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:23:58.994 163757 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d3026dba-17e4-448f-b1b1-951c4cf69278', 'env', 'PROCESS_TAG=haproxy-d3026dba-17e4-448f-b1b1-951c4cf69278', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d3026dba-17e4-448f-b1b1-951c4cf69278.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 06:23:59 np0005542249 nova_compute[254900]: 2025-12-02 11:23:59.007 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:23:59 np0005542249 nova_compute[254900]: 2025-12-02 11:23:59.320 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674639.3202145, 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:23:59 np0005542249 nova_compute[254900]: 2025-12-02 11:23:59.321 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] VM Started (Lifecycle Event)#033[00m
Dec  2 06:23:59 np0005542249 nova_compute[254900]: 2025-12-02 11:23:59.342 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:23:59 np0005542249 nova_compute[254900]: 2025-12-02 11:23:59.346 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674639.320324, 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:23:59 np0005542249 nova_compute[254900]: 2025-12-02 11:23:59.347 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] VM Paused (Lifecycle Event)#033[00m
Dec  2 06:23:59 np0005542249 nova_compute[254900]: 2025-12-02 11:23:59.361 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:23:59 np0005542249 nova_compute[254900]: 2025-12-02 11:23:59.364 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:23:59 np0005542249 nova_compute[254900]: 2025-12-02 11:23:59.384 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:23:59 np0005542249 podman[278583]: 2025-12-02 11:23:59.430562195 +0000 UTC m=+0.058995245 container create 03ae92d11b1b104f82d19adfb61789e1931e07c1b948985b503447f506622a21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3026dba-17e4-448f-b1b1-951c4cf69278, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:23:59 np0005542249 systemd[1]: Started libpod-conmon-03ae92d11b1b104f82d19adfb61789e1931e07c1b948985b503447f506622a21.scope.
Dec  2 06:23:59 np0005542249 podman[278583]: 2025-12-02 11:23:59.397760269 +0000 UTC m=+0.026193289 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:23:59 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:23:59 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f2726dcb5caaedaa478d8c59ec0ffd82374d3fcd5a92aade4a45a953795e5ff/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 06:23:59 np0005542249 podman[278583]: 2025-12-02 11:23:59.544250717 +0000 UTC m=+0.172683767 container init 03ae92d11b1b104f82d19adfb61789e1931e07c1b948985b503447f506622a21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3026dba-17e4-448f-b1b1-951c4cf69278, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:23:59 np0005542249 podman[278583]: 2025-12-02 11:23:59.554631988 +0000 UTC m=+0.183065008 container start 03ae92d11b1b104f82d19adfb61789e1931e07c1b948985b503447f506622a21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3026dba-17e4-448f-b1b1-951c4cf69278, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:23:59 np0005542249 neutron-haproxy-ovnmeta-d3026dba-17e4-448f-b1b1-951c4cf69278[278599]: [NOTICE]   (278603) : New worker (278605) forked
Dec  2 06:23:59 np0005542249 neutron-haproxy-ovnmeta-d3026dba-17e4-448f-b1b1-951c4cf69278[278599]: [NOTICE]   (278603) : Loading success.
Dec  2 06:24:00 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1311: 321 pgs: 321 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 22 KiB/s rd, 2.1 MiB/s wr, 35 op/s
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.240 254904 DEBUG oslo_concurrency.lockutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Acquiring lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.241 254904 DEBUG oslo_concurrency.lockutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.374 254904 DEBUG nova.compute.manager [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.461 254904 DEBUG oslo_concurrency.lockutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.462 254904 DEBUG oslo_concurrency.lockutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.472 254904 DEBUG nova.virt.hardware [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.473 254904 INFO nova.compute.claims [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.619 254904 DEBUG oslo_concurrency.processutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.714 254904 DEBUG nova.compute.manager [req-3e89d394-635f-4852-94ab-c214ed278413 req-4fab67d4-33cc-407a-8368-8eeed3794279 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Received event network-vif-plugged-ed87e718-f390-41da-bc93-530936610716 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.715 254904 DEBUG oslo_concurrency.lockutils [req-3e89d394-635f-4852-94ab-c214ed278413 req-4fab67d4-33cc-407a-8368-8eeed3794279 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "2c9c5605-aec2-4e63-a5b5-49c57c3e9e17-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.715 254904 DEBUG oslo_concurrency.lockutils [req-3e89d394-635f-4852-94ab-c214ed278413 req-4fab67d4-33cc-407a-8368-8eeed3794279 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "2c9c5605-aec2-4e63-a5b5-49c57c3e9e17-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.716 254904 DEBUG oslo_concurrency.lockutils [req-3e89d394-635f-4852-94ab-c214ed278413 req-4fab67d4-33cc-407a-8368-8eeed3794279 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "2c9c5605-aec2-4e63-a5b5-49c57c3e9e17-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.716 254904 DEBUG nova.compute.manager [req-3e89d394-635f-4852-94ab-c214ed278413 req-4fab67d4-33cc-407a-8368-8eeed3794279 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Processing event network-vif-plugged-ed87e718-f390-41da-bc93-530936610716 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.718 254904 DEBUG nova.compute.manager [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.724 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674640.7239554, 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.724 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] VM Resumed (Lifecycle Event)
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.739 254904 DEBUG nova.virt.libvirt.driver [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.745 254904 INFO nova.virt.libvirt.driver [-] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Instance spawned successfully.
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.746 254904 DEBUG nova.virt.libvirt.driver [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.753 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.757 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.782 254904 DEBUG nova.virt.libvirt.driver [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.783 254904 DEBUG nova.virt.libvirt.driver [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.784 254904 DEBUG nova.virt.libvirt.driver [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.784 254904 DEBUG nova.virt.libvirt.driver [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.785 254904 DEBUG nova.virt.libvirt.driver [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.786 254904 DEBUG nova.virt.libvirt.driver [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.795 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.857 254904 INFO nova.compute.manager [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Took 6.82 seconds to spawn the instance on the hypervisor.
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.859 254904 DEBUG nova.compute.manager [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.934 254904 INFO nova.compute.manager [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Took 9.27 seconds to build instance.
Dec  2 06:24:00 np0005542249 nova_compute[254900]: 2025-12-02 11:24:00.951 254904 DEBUG oslo_concurrency.lockutils [None req-9e1e8202-fb91-4aed-ab17-d0c4c1f216e2 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Lock "2c9c5605-aec2-4e63-a5b5-49c57c3e9e17" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.367s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:24:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:24:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2238665010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:24:01 np0005542249 nova_compute[254900]: 2025-12-02 11:24:01.133 254904 DEBUG oslo_concurrency.processutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:24:01 np0005542249 nova_compute[254900]: 2025-12-02 11:24:01.141 254904 DEBUG nova.compute.provider_tree [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  2 06:24:01 np0005542249 nova_compute[254900]: 2025-12-02 11:24:01.163 254904 DEBUG nova.scheduler.client.report [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  2 06:24:01 np0005542249 nova_compute[254900]: 2025-12-02 11:24:01.197 254904 DEBUG oslo_concurrency.lockutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:24:01 np0005542249 nova_compute[254900]: 2025-12-02 11:24:01.198 254904 DEBUG nova.compute.manager [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  2 06:24:01 np0005542249 nova_compute[254900]: 2025-12-02 11:24:01.268 254904 DEBUG nova.compute.manager [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  2 06:24:01 np0005542249 nova_compute[254900]: 2025-12-02 11:24:01.270 254904 DEBUG nova.network.neutron [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  2 06:24:01 np0005542249 nova_compute[254900]: 2025-12-02 11:24:01.302 254904 INFO nova.virt.libvirt.driver [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  2 06:24:01 np0005542249 nova_compute[254900]: 2025-12-02 11:24:01.319 254904 DEBUG nova.compute.manager [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  2 06:24:01 np0005542249 nova_compute[254900]: 2025-12-02 11:24:01.439 254904 DEBUG nova.compute.manager [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  2 06:24:01 np0005542249 nova_compute[254900]: 2025-12-02 11:24:01.440 254904 DEBUG nova.virt.libvirt.driver [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  2 06:24:01 np0005542249 nova_compute[254900]: 2025-12-02 11:24:01.441 254904 INFO nova.virt.libvirt.driver [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Creating image(s)
Dec  2 06:24:01 np0005542249 nova_compute[254900]: 2025-12-02 11:24:01.467 254904 DEBUG nova.storage.rbd_utils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] rbd image 7d55326f-52eb-4f7f-a9cd-05282ca6ca20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  2 06:24:01 np0005542249 nova_compute[254900]: 2025-12-02 11:24:01.502 254904 DEBUG nova.storage.rbd_utils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] rbd image 7d55326f-52eb-4f7f-a9cd-05282ca6ca20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  2 06:24:01 np0005542249 nova_compute[254900]: 2025-12-02 11:24:01.539 254904 DEBUG nova.storage.rbd_utils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] rbd image 7d55326f-52eb-4f7f-a9cd-05282ca6ca20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  2 06:24:01 np0005542249 nova_compute[254900]: 2025-12-02 11:24:01.545 254904 DEBUG oslo_concurrency.processutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 06:24:01 np0005542249 nova_compute[254900]: 2025-12-02 11:24:01.642 254904 DEBUG oslo_concurrency.processutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:24:01 np0005542249 nova_compute[254900]: 2025-12-02 11:24:01.644 254904 DEBUG oslo_concurrency.lockutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Acquiring lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:24:01 np0005542249 nova_compute[254900]: 2025-12-02 11:24:01.645 254904 DEBUG oslo_concurrency.lockutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:24:01 np0005542249 nova_compute[254900]: 2025-12-02 11:24:01.645 254904 DEBUG oslo_concurrency.lockutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:24:01 np0005542249 nova_compute[254900]: 2025-12-02 11:24:01.673 254904 DEBUG nova.storage.rbd_utils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] rbd image 7d55326f-52eb-4f7f-a9cd-05282ca6ca20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  2 06:24:01 np0005542249 nova_compute[254900]: 2025-12-02 11:24:01.678 254904 DEBUG oslo_concurrency.processutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 7d55326f-52eb-4f7f-a9cd-05282ca6ca20_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 06:24:01 np0005542249 nova_compute[254900]: 2025-12-02 11:24:01.713 254904 DEBUG nova.policy [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b3ecaaf4f0044a58b99879bf1c55b18e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8bda44a38b8b4f31a8b6e8f6f0548898', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  2 06:24:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:24:02 np0005542249 nova_compute[254900]: 2025-12-02 11:24:02.029 254904 DEBUG oslo_concurrency.processutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 7d55326f-52eb-4f7f-a9cd-05282ca6ca20_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:24:02 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1312: 321 pgs: 321 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 80 KiB/s rd, 2.1 MiB/s wr, 43 op/s
Dec  2 06:24:02 np0005542249 nova_compute[254900]: 2025-12-02 11:24:02.133 254904 DEBUG nova.storage.rbd_utils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] resizing rbd image 7d55326f-52eb-4f7f-a9cd-05282ca6ca20_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec  2 06:24:02 np0005542249 nova_compute[254900]: 2025-12-02 11:24:02.252 254904 DEBUG nova.objects.instance [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lazy-loading 'migration_context' on Instance uuid 7d55326f-52eb-4f7f-a9cd-05282ca6ca20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  2 06:24:02 np0005542249 nova_compute[254900]: 2025-12-02 11:24:02.269 254904 DEBUG nova.virt.libvirt.driver [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  2 06:24:02 np0005542249 nova_compute[254900]: 2025-12-02 11:24:02.270 254904 DEBUG nova.virt.libvirt.driver [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Ensure instance console log exists: /var/lib/nova/instances/7d55326f-52eb-4f7f-a9cd-05282ca6ca20/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  2 06:24:02 np0005542249 nova_compute[254900]: 2025-12-02 11:24:02.271 254904 DEBUG oslo_concurrency.lockutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:24:02 np0005542249 nova_compute[254900]: 2025-12-02 11:24:02.271 254904 DEBUG oslo_concurrency.lockutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:24:02 np0005542249 nova_compute[254900]: 2025-12-02 11:24:02.271 254904 DEBUG oslo_concurrency.lockutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:24:02 np0005542249 nova_compute[254900]: 2025-12-02 11:24:02.471 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:24:02 np0005542249 nova_compute[254900]: 2025-12-02 11:24:02.788 254904 DEBUG nova.network.neutron [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Successfully created port: e6b169f0-73a9-4a53-93e6-a0290b2f4f4a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  2 06:24:02 np0005542249 nova_compute[254900]: 2025-12-02 11:24:02.846 254904 DEBUG nova.compute.manager [req-d938dc9b-1875-4aa8-a1d2-8357d654552f req-892063a6-8b17-471b-aa6c-2e4526c21322 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Received event network-vif-plugged-ed87e718-f390-41da-bc93-530936610716 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  2 06:24:02 np0005542249 nova_compute[254900]: 2025-12-02 11:24:02.847 254904 DEBUG oslo_concurrency.lockutils [req-d938dc9b-1875-4aa8-a1d2-8357d654552f req-892063a6-8b17-471b-aa6c-2e4526c21322 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "2c9c5605-aec2-4e63-a5b5-49c57c3e9e17-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:24:02 np0005542249 nova_compute[254900]: 2025-12-02 11:24:02.848 254904 DEBUG oslo_concurrency.lockutils [req-d938dc9b-1875-4aa8-a1d2-8357d654552f req-892063a6-8b17-471b-aa6c-2e4526c21322 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "2c9c5605-aec2-4e63-a5b5-49c57c3e9e17-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:24:02 np0005542249 nova_compute[254900]: 2025-12-02 11:24:02.848 254904 DEBUG oslo_concurrency.lockutils [req-d938dc9b-1875-4aa8-a1d2-8357d654552f req-892063a6-8b17-471b-aa6c-2e4526c21322 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "2c9c5605-aec2-4e63-a5b5-49c57c3e9e17-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:24:02 np0005542249 nova_compute[254900]: 2025-12-02 11:24:02.849 254904 DEBUG nova.compute.manager [req-d938dc9b-1875-4aa8-a1d2-8357d654552f req-892063a6-8b17-471b-aa6c-2e4526c21322 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] No waiting events found dispatching network-vif-plugged-ed87e718-f390-41da-bc93-530936610716 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  2 06:24:02 np0005542249 nova_compute[254900]: 2025-12-02 11:24:02.849 254904 WARNING nova.compute.manager [req-d938dc9b-1875-4aa8-a1d2-8357d654552f req-892063a6-8b17-471b-aa6c-2e4526c21322 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Received unexpected event network-vif-plugged-ed87e718-f390-41da-bc93-530936610716 for instance with vm_state active and task_state None.
Dec  2 06:24:03 np0005542249 nova_compute[254900]: 2025-12-02 11:24:03.242 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:24:03 np0005542249 nova_compute[254900]: 2025-12-02 11:24:03.774 254904 DEBUG nova.network.neutron [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Successfully updated port: e6b169f0-73a9-4a53-93e6-a0290b2f4f4a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  2 06:24:03 np0005542249 nova_compute[254900]: 2025-12-02 11:24:03.797 254904 DEBUG oslo_concurrency.lockutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Acquiring lock "refresh_cache-7d55326f-52eb-4f7f-a9cd-05282ca6ca20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  2 06:24:03 np0005542249 nova_compute[254900]: 2025-12-02 11:24:03.798 254904 DEBUG oslo_concurrency.lockutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Acquired lock "refresh_cache-7d55326f-52eb-4f7f-a9cd-05282ca6ca20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  2 06:24:03 np0005542249 nova_compute[254900]: 2025-12-02 11:24:03.798 254904 DEBUG nova.network.neutron [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  2 06:24:03 np0005542249 nova_compute[254900]: 2025-12-02 11:24:03.940 254904 DEBUG nova.compute.manager [req-e1289b96-5067-4609-9908-0177961b9dcd req-b3146228-95cf-4e16-adb3-54a4fa9d5d95 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Received event network-changed-e6b169f0-73a9-4a53-93e6-a0290b2f4f4a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  2 06:24:03 np0005542249 nova_compute[254900]: 2025-12-02 11:24:03.941 254904 DEBUG nova.compute.manager [req-e1289b96-5067-4609-9908-0177961b9dcd req-b3146228-95cf-4e16-adb3-54a4fa9d5d95 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Refreshing instance network info cache due to event network-changed-e6b169f0-73a9-4a53-93e6-a0290b2f4f4a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  2 06:24:03 np0005542249 nova_compute[254900]: 2025-12-02 11:24:03.941 254904 DEBUG oslo_concurrency.lockutils [req-e1289b96-5067-4609-9908-0177961b9dcd req-b3146228-95cf-4e16-adb3-54a4fa9d5d95 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-7d55326f-52eb-4f7f-a9cd-05282ca6ca20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  2 06:24:04 np0005542249 nova_compute[254900]: 2025-12-02 11:24:04.038 254904 DEBUG nova.network.neutron [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  2 06:24:04 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1313: 321 pgs: 321 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.9 MiB/s wr, 87 op/s
Dec  2 06:24:04 np0005542249 NetworkManager[48987]: <info>  [1764674644.3399] manager: (patch-br-int-to-provnet-236defd8-cd9f-40fb-be9d-c3108d5fe981): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Dec  2 06:24:04 np0005542249 NetworkManager[48987]: <info>  [1764674644.3412] manager: (patch-provnet-236defd8-cd9f-40fb-be9d-c3108d5fe981-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Dec  2 06:24:04 np0005542249 nova_compute[254900]: 2025-12-02 11:24:04.341 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:24:04 np0005542249 nova_compute[254900]: 2025-12-02 11:24:04.590 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:24:04 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:04Z|00131|binding|INFO|Releasing lport e0edef65-5991-4952-ac57-244aecf03cee from this chassis (sb_readonly=0)
Dec  2 06:24:04 np0005542249 nova_compute[254900]: 2025-12-02 11:24:04.618 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:24:04 np0005542249 nova_compute[254900]: 2025-12-02 11:24:04.943 254904 DEBUG nova.compute.manager [req-6a46bd31-c386-4698-9db3-503f32bf67ad req-5e7e14b5-ba50-4124-b80a-df1cdf68b7fd 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Received event network-changed-ed87e718-f390-41da-bc93-530936610716 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  2 06:24:04 np0005542249 nova_compute[254900]: 2025-12-02 11:24:04.944 254904 DEBUG nova.compute.manager [req-6a46bd31-c386-4698-9db3-503f32bf67ad req-5e7e14b5-ba50-4124-b80a-df1cdf68b7fd 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Refreshing instance network info cache due to event network-changed-ed87e718-f390-41da-bc93-530936610716. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  2 06:24:04 np0005542249 nova_compute[254900]: 2025-12-02 11:24:04.944 254904 DEBUG oslo_concurrency.lockutils [req-6a46bd31-c386-4698-9db3-503f32bf67ad req-5e7e14b5-ba50-4124-b80a-df1cdf68b7fd 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-2c9c5605-aec2-4e63-a5b5-49c57c3e9e17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  2 06:24:04 np0005542249 nova_compute[254900]: 2025-12-02 11:24:04.944 254904 DEBUG oslo_concurrency.lockutils [req-6a46bd31-c386-4698-9db3-503f32bf67ad req-5e7e14b5-ba50-4124-b80a-df1cdf68b7fd 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-2c9c5605-aec2-4e63-a5b5-49c57c3e9e17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  2 06:24:04 np0005542249 nova_compute[254900]: 2025-12-02 11:24:04.945 254904 DEBUG nova.network.neutron [req-6a46bd31-c386-4698-9db3-503f32bf67ad req-5e7e14b5-ba50-4124-b80a-df1cdf68b7fd 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Refreshing network info cache for port ed87e718-f390-41da-bc93-530936610716 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.085 254904 DEBUG nova.network.neutron [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Updating instance_info_cache with network_info: [{"id": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "address": "fa:16:3e:02:ef:76", "network": {"id": "202d4c1b-b1c2-4564-b679-1d789b189a11", "bridge": "br-int", "label": "tempest-TestStampPattern-951670540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bda44a38b8b4f31a8b6e8f6f0548898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6b169f0-73", "ovs_interfaceid": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.117 254904 DEBUG oslo_concurrency.lockutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Releasing lock "refresh_cache-7d55326f-52eb-4f7f-a9cd-05282ca6ca20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.119 254904 DEBUG nova.compute.manager [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Instance network_info: |[{"id": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "address": "fa:16:3e:02:ef:76", "network": {"id": "202d4c1b-b1c2-4564-b679-1d789b189a11", "bridge": "br-int", "label": "tempest-TestStampPattern-951670540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bda44a38b8b4f31a8b6e8f6f0548898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6b169f0-73", "ovs_interfaceid": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.119 254904 DEBUG oslo_concurrency.lockutils [req-e1289b96-5067-4609-9908-0177961b9dcd req-b3146228-95cf-4e16-adb3-54a4fa9d5d95 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-7d55326f-52eb-4f7f-a9cd-05282ca6ca20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.120 254904 DEBUG nova.network.neutron [req-e1289b96-5067-4609-9908-0177961b9dcd req-b3146228-95cf-4e16-adb3-54a4fa9d5d95 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Refreshing network info cache for port e6b169f0-73a9-4a53-93e6-a0290b2f4f4a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.124 254904 DEBUG nova.virt.libvirt.driver [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Start _get_guest_xml network_info=[{"id": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "address": "fa:16:3e:02:ef:76", "network": {"id": "202d4c1b-b1c2-4564-b679-1d789b189a11", "bridge": "br-int", "label": "tempest-TestStampPattern-951670540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bda44a38b8b4f31a8b6e8f6f0548898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6b169f0-73", "ovs_interfaceid": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'image_id': '5a40f66c-ab43-47dd-9880-e59f9fa2c60e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.129 254904 WARNING nova.virt.libvirt.driver [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.135 254904 DEBUG nova.virt.libvirt.host [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.136 254904 DEBUG nova.virt.libvirt.host [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.151 254904 DEBUG nova.virt.libvirt.host [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.151 254904 DEBUG nova.virt.libvirt.host [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.152 254904 DEBUG nova.virt.libvirt.driver [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.152 254904 DEBUG nova.virt.hardware [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.153 254904 DEBUG nova.virt.hardware [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.153 254904 DEBUG nova.virt.hardware [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.154 254904 DEBUG nova.virt.hardware [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.154 254904 DEBUG nova.virt.hardware [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.154 254904 DEBUG nova.virt.hardware [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.155 254904 DEBUG nova.virt.hardware [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.155 254904 DEBUG nova.virt.hardware [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.156 254904 DEBUG nova.virt.hardware [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.156 254904 DEBUG nova.virt.hardware [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.156 254904 DEBUG nova.virt.hardware [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.161 254904 DEBUG oslo_concurrency.processutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.284 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:24:05 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/612106336' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.624 254904 DEBUG oslo_concurrency.processutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.656 254904 DEBUG nova.storage.rbd_utils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] rbd image 7d55326f-52eb-4f7f-a9cd-05282ca6ca20_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:24:05 np0005542249 nova_compute[254900]: 2025-12-02 11:24:05.663 254904 DEBUG oslo_concurrency.processutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:24:06 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1314: 321 pgs: 321 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.1 MiB/s wr, 107 op/s
Dec  2 06:24:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:24:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/940617450' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:24:06 np0005542249 podman[278863]: 2025-12-02 11:24:06.076364865 +0000 UTC m=+0.145094581 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.100 254904 DEBUG oslo_concurrency.processutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.104 254904 DEBUG nova.virt.libvirt.vif [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:23:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1044177341',display_name='tempest-TestStampPattern-server-1044177341',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1044177341',id=13,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK61TL5woGKETyNF39DpzDOQY/3175WIhzETkz8a48X3DCBshG53o26Fmw3NbtfN3xWBBqbdYLP1coPVuKFyoXP+AsOB02VoMViHoPOBDDnBHEpudY8V6Kx5OUC4H683Ng==',key_name='tempest-TestStampPattern-1085425870',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8bda44a38b8b4f31a8b6e8f6f0548898',ramdisk_id='',reservation_id='r-9lo6jxsl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-2114823383',owner_user_name='tempest-TestStampPattern-2114823383-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:24:01Z,user_data=None,user_id='b3ecaaf4f0044a58b99879bf1c55b18e',uuid=7d55326f-52eb-4f7f-a9cd-05282ca6ca20,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "address": "fa:16:3e:02:ef:76", "network": {"id": "202d4c1b-b1c2-4564-b679-1d789b189a11", "bridge": "br-int", "label": "tempest-TestStampPattern-951670540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bda44a38b8b4f31a8b6e8f6f0548898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6b169f0-73", "ovs_interfaceid": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.104 254904 DEBUG nova.network.os_vif_util [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Converting VIF {"id": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "address": "fa:16:3e:02:ef:76", "network": {"id": "202d4c1b-b1c2-4564-b679-1d789b189a11", "bridge": "br-int", "label": "tempest-TestStampPattern-951670540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bda44a38b8b4f31a8b6e8f6f0548898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6b169f0-73", "ovs_interfaceid": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.106 254904 DEBUG nova.network.os_vif_util [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:ef:76,bridge_name='br-int',has_traffic_filtering=True,id=e6b169f0-73a9-4a53-93e6-a0290b2f4f4a,network=Network(202d4c1b-b1c2-4564-b679-1d789b189a11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6b169f0-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.108 254904 DEBUG nova.objects.instance [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7d55326f-52eb-4f7f-a9cd-05282ca6ca20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.138 254904 DEBUG nova.virt.libvirt.driver [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:24:06 np0005542249 nova_compute[254900]:  <uuid>7d55326f-52eb-4f7f-a9cd-05282ca6ca20</uuid>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:  <name>instance-0000000d</name>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <nova:name>tempest-TestStampPattern-server-1044177341</nova:name>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:24:05</nova:creationTime>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:24:06 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:        <nova:user uuid="b3ecaaf4f0044a58b99879bf1c55b18e">tempest-TestStampPattern-2114823383-project-member</nova:user>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:        <nova:project uuid="8bda44a38b8b4f31a8b6e8f6f0548898">tempest-TestStampPattern-2114823383</nova:project>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <nova:root type="image" uuid="5a40f66c-ab43-47dd-9880-e59f9fa2c60e"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:        <nova:port uuid="e6b169f0-73a9-4a53-93e6-a0290b2f4f4a">
Dec  2 06:24:06 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <entry name="serial">7d55326f-52eb-4f7f-a9cd-05282ca6ca20</entry>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <entry name="uuid">7d55326f-52eb-4f7f-a9cd-05282ca6ca20</entry>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/7d55326f-52eb-4f7f-a9cd-05282ca6ca20_disk">
Dec  2 06:24:06 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:24:06 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/7d55326f-52eb-4f7f-a9cd-05282ca6ca20_disk.config">
Dec  2 06:24:06 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:24:06 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:02:ef:76"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <target dev="tape6b169f0-73"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/7d55326f-52eb-4f7f-a9cd-05282ca6ca20/console.log" append="off"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:24:06 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:24:06 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:24:06 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:24:06 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.141 254904 DEBUG nova.compute.manager [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Preparing to wait for external event network-vif-plugged-e6b169f0-73a9-4a53-93e6-a0290b2f4f4a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.141 254904 DEBUG oslo_concurrency.lockutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Acquiring lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.142 254904 DEBUG oslo_concurrency.lockutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.142 254904 DEBUG oslo_concurrency.lockutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.144 254904 DEBUG nova.virt.libvirt.vif [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:23:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1044177341',display_name='tempest-TestStampPattern-server-1044177341',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1044177341',id=13,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK61TL5woGKETyNF39DpzDOQY/3175WIhzETkz8a48X3DCBshG53o26Fmw3NbtfN3xWBBqbdYLP1coPVuKFyoXP+AsOB02VoMViHoPOBDDnBHEpudY8V6Kx5OUC4H683Ng==',key_name='tempest-TestStampPattern-1085425870',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8bda44a38b8b4f31a8b6e8f6f0548898',ramdisk_id='',reservation_id='r-9lo6jxsl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-2114823383',owner_user_name='tempest-TestStampPattern-2114823383-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:24:01Z,user_data=None,user_id='b3ecaaf4f0044a58b99879bf1c55b18e',uuid=7d55326f-52eb-4f7f-a9cd-05282ca6ca20,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "address": "fa:16:3e:02:ef:76", "network": {"id": "202d4c1b-b1c2-4564-b679-1d789b189a11", "bridge": "br-int", "label": "tempest-TestStampPattern-951670540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "8bda44a38b8b4f31a8b6e8f6f0548898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6b169f0-73", "ovs_interfaceid": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.144 254904 DEBUG nova.network.os_vif_util [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Converting VIF {"id": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "address": "fa:16:3e:02:ef:76", "network": {"id": "202d4c1b-b1c2-4564-b679-1d789b189a11", "bridge": "br-int", "label": "tempest-TestStampPattern-951670540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bda44a38b8b4f31a8b6e8f6f0548898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6b169f0-73", "ovs_interfaceid": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.145 254904 DEBUG nova.network.os_vif_util [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:ef:76,bridge_name='br-int',has_traffic_filtering=True,id=e6b169f0-73a9-4a53-93e6-a0290b2f4f4a,network=Network(202d4c1b-b1c2-4564-b679-1d789b189a11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6b169f0-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.146 254904 DEBUG os_vif [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:ef:76,bridge_name='br-int',has_traffic_filtering=True,id=e6b169f0-73a9-4a53-93e6-a0290b2f4f4a,network=Network(202d4c1b-b1c2-4564-b679-1d789b189a11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6b169f0-73') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.148 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.149 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.149 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.162 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.163 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape6b169f0-73, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.164 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape6b169f0-73, col_values=(('external_ids', {'iface-id': 'e6b169f0-73a9-4a53-93e6-a0290b2f4f4a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:ef:76', 'vm-uuid': '7d55326f-52eb-4f7f-a9cd-05282ca6ca20'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.166 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:06 np0005542249 NetworkManager[48987]: <info>  [1764674646.1674] manager: (tape6b169f0-73): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.169 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.174 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.177 254904 INFO os_vif [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:ef:76,bridge_name='br-int',has_traffic_filtering=True,id=e6b169f0-73a9-4a53-93e6-a0290b2f4f4a,network=Network(202d4c1b-b1c2-4564-b679-1d789b189a11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6b169f0-73')#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.262 254904 DEBUG nova.virt.libvirt.driver [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.262 254904 DEBUG nova.virt.libvirt.driver [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.262 254904 DEBUG nova.virt.libvirt.driver [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] No VIF found with MAC fa:16:3e:02:ef:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.263 254904 INFO nova.virt.libvirt.driver [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Using config drive#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.283 254904 DEBUG nova.storage.rbd_utils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] rbd image 7d55326f-52eb-4f7f-a9cd-05282ca6ca20_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.400 254904 DEBUG nova.network.neutron [req-6a46bd31-c386-4698-9db3-503f32bf67ad req-5e7e14b5-ba50-4124-b80a-df1cdf68b7fd 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Updated VIF entry in instance network info cache for port ed87e718-f390-41da-bc93-530936610716. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.400 254904 DEBUG nova.network.neutron [req-6a46bd31-c386-4698-9db3-503f32bf67ad req-5e7e14b5-ba50-4124-b80a-df1cdf68b7fd 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Updating instance_info_cache with network_info: [{"id": "ed87e718-f390-41da-bc93-530936610716", "address": "fa:16:3e:25:b1:91", "network": {"id": "d3026dba-17e4-448f-b1b1-951c4cf69278", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-452915621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e06121cb1a114bf997558a008929f199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped87e718-f3", "ovs_interfaceid": "ed87e718-f390-41da-bc93-530936610716", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.520 254904 DEBUG oslo_concurrency.lockutils [req-6a46bd31-c386-4698-9db3-503f32bf67ad req-5e7e14b5-ba50-4124-b80a-df1cdf68b7fd 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-2c9c5605-aec2-4e63-a5b5-49c57c3e9e17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.636 254904 DEBUG nova.network.neutron [req-e1289b96-5067-4609-9908-0177961b9dcd req-b3146228-95cf-4e16-adb3-54a4fa9d5d95 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Updated VIF entry in instance network info cache for port e6b169f0-73a9-4a53-93e6-a0290b2f4f4a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.637 254904 DEBUG nova.network.neutron [req-e1289b96-5067-4609-9908-0177961b9dcd req-b3146228-95cf-4e16-adb3-54a4fa9d5d95 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Updating instance_info_cache with network_info: [{"id": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "address": "fa:16:3e:02:ef:76", "network": {"id": "202d4c1b-b1c2-4564-b679-1d789b189a11", "bridge": "br-int", "label": "tempest-TestStampPattern-951670540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bda44a38b8b4f31a8b6e8f6f0548898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6b169f0-73", "ovs_interfaceid": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.652 254904 DEBUG oslo_concurrency.lockutils [req-e1289b96-5067-4609-9908-0177961b9dcd req-b3146228-95cf-4e16-adb3-54a4fa9d5d95 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-7d55326f-52eb-4f7f-a9cd-05282ca6ca20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.787 254904 INFO nova.virt.libvirt.driver [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Creating config drive at /var/lib/nova/instances/7d55326f-52eb-4f7f-a9cd-05282ca6ca20/disk.config#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.795 254904 DEBUG oslo_concurrency.processutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7d55326f-52eb-4f7f-a9cd-05282ca6ca20/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb3r5_z8l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:24:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.944 254904 DEBUG oslo_concurrency.processutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7d55326f-52eb-4f7f-a9cd-05282ca6ca20/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb3r5_z8l" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.982 254904 DEBUG nova.storage.rbd_utils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] rbd image 7d55326f-52eb-4f7f-a9cd-05282ca6ca20_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:24:06 np0005542249 nova_compute[254900]: 2025-12-02 11:24:06.987 254904 DEBUG oslo_concurrency.processutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7d55326f-52eb-4f7f-a9cd-05282ca6ca20/disk.config 7d55326f-52eb-4f7f-a9cd-05282ca6ca20_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:24:07 np0005542249 nova_compute[254900]: 2025-12-02 11:24:07.182 254904 DEBUG oslo_concurrency.processutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7d55326f-52eb-4f7f-a9cd-05282ca6ca20/disk.config 7d55326f-52eb-4f7f-a9cd-05282ca6ca20_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:24:07 np0005542249 nova_compute[254900]: 2025-12-02 11:24:07.184 254904 INFO nova.virt.libvirt.driver [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Deleting local config drive /var/lib/nova/instances/7d55326f-52eb-4f7f-a9cd-05282ca6ca20/disk.config because it was imported into RBD.#033[00m
Dec  2 06:24:07 np0005542249 kernel: tape6b169f0-73: entered promiscuous mode
Dec  2 06:24:07 np0005542249 NetworkManager[48987]: <info>  [1764674647.2529] manager: (tape6b169f0-73): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Dec  2 06:24:07 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:07Z|00132|binding|INFO|Claiming lport e6b169f0-73a9-4a53-93e6-a0290b2f4f4a for this chassis.
Dec  2 06:24:07 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:07Z|00133|binding|INFO|e6b169f0-73a9-4a53-93e6-a0290b2f4f4a: Claiming fa:16:3e:02:ef:76 10.100.0.9
Dec  2 06:24:07 np0005542249 nova_compute[254900]: 2025-12-02 11:24:07.255 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.263 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:ef:76 10.100.0.9'], port_security=['fa:16:3e:02:ef:76 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7d55326f-52eb-4f7f-a9cd-05282ca6ca20', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-202d4c1b-b1c2-4564-b679-1d789b189a11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bda44a38b8b4f31a8b6e8f6f0548898', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5dfe3f56-bbf4-4aa1-aeeb-3cc512936610', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64d73c60-6b3f-4965-8ef1-0cdac8312377, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=e6b169f0-73a9-4a53-93e6-a0290b2f4f4a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.265 163757 INFO neutron.agent.ovn.metadata.agent [-] Port e6b169f0-73a9-4a53-93e6-a0290b2f4f4a in datapath 202d4c1b-b1c2-4564-b679-1d789b189a11 bound to our chassis#033[00m
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.267 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 202d4c1b-b1c2-4564-b679-1d789b189a11#033[00m
Dec  2 06:24:07 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:07Z|00134|binding|INFO|Setting lport e6b169f0-73a9-4a53-93e6-a0290b2f4f4a ovn-installed in OVS
Dec  2 06:24:07 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:07Z|00135|binding|INFO|Setting lport e6b169f0-73a9-4a53-93e6-a0290b2f4f4a up in Southbound
Dec  2 06:24:07 np0005542249 nova_compute[254900]: 2025-12-02 11:24:07.284 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.288 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[19607083-4598-4b8d-848f-97d40ff0e6fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.289 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap202d4c1b-b1 in ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.293 262398 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap202d4c1b-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.293 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[7011da1d-3e25-4c4e-a7ea-957f1a80191b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.296 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[9db33dd8-2611-487c-8dca-f4ef7dd05a80]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:07 np0005542249 systemd-udevd[278957]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.314 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[197bd7da-7698-4a4d-b54d-1d792001677d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:07 np0005542249 systemd-machined[216222]: New machine qemu-13-instance-0000000d.
Dec  2 06:24:07 np0005542249 NetworkManager[48987]: <info>  [1764674647.3219] device (tape6b169f0-73): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:24:07 np0005542249 NetworkManager[48987]: <info>  [1764674647.3231] device (tape6b169f0-73): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:24:07 np0005542249 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.344 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[4603ebe1-9a6c-4525-b5e3-543842ed6890]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.389 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[abfdfe0a-cabe-4b92-a5dc-58e99f3e141f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:07 np0005542249 NetworkManager[48987]: <info>  [1764674647.3988] manager: (tap202d4c1b-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/80)
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.397 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[a9fd4fd1-6877-42ed-90b3-6efdc40331ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.454 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6cda8a-aecc-47ca-8d99-4210e61fb44b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.460 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb0796f-fe54-44a6-a58f-4f27a3ea9789]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:07 np0005542249 NetworkManager[48987]: <info>  [1764674647.4970] device (tap202d4c1b-b0): carrier: link connected
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.505 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[3e0d12eb-783f-4fd6-a19f-b5a6dc8c08ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.532 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[9471b2a0-bce1-421f-83ce-6438b6a80921]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap202d4c1b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:0f:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 484032, 'reachable_time': 17542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278990, 'error': None, 'target': 'ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.554 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6fc0fc-ebdc-4855-abd6-547b1ff7d311]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef5:f41'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 484032, 'tstamp': 484032}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278991, 'error': None, 'target': 'ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.593 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f21f82-fee2-41ab-8c31-ecea8699df27]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap202d4c1b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:0f:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 484032, 'reachable_time': 17542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278992, 'error': None, 'target': 'ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.637 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[e05ac8e6-0168-48bd-b41c-bdcab1aa135d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.737 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[6ece8ec0-1b77-4508-ad74-914e8635dec1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.740 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap202d4c1b-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.740 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.741 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap202d4c1b-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:24:07 np0005542249 nova_compute[254900]: 2025-12-02 11:24:07.797 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:07 np0005542249 NetworkManager[48987]: <info>  [1764674647.7979] manager: (tap202d4c1b-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Dec  2 06:24:07 np0005542249 kernel: tap202d4c1b-b0: entered promiscuous mode
Dec  2 06:24:07 np0005542249 nova_compute[254900]: 2025-12-02 11:24:07.804 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.806 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap202d4c1b-b0, col_values=(('external_ids', {'iface-id': '3e075ddd-b183-4a84-8a21-ad15a1122c7d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:24:07 np0005542249 nova_compute[254900]: 2025-12-02 11:24:07.808 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:07 np0005542249 nova_compute[254900]: 2025-12-02 11:24:07.810 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.811 163757 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/202d4c1b-b1c2-4564-b679-1d789b189a11.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/202d4c1b-b1c2-4564-b679-1d789b189a11.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.813 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[e874a623-5100-4575-a236-5c4bfb96a584]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:07 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:07Z|00136|binding|INFO|Releasing lport 3e075ddd-b183-4a84-8a21-ad15a1122c7d from this chassis (sb_readonly=0)
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.814 163757 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: global
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]:    log         /dev/log local0 debug
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]:    log-tag     haproxy-metadata-proxy-202d4c1b-b1c2-4564-b679-1d789b189a11
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]:    user        root
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]:    group       root
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]:    maxconn     1024
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]:    pidfile     /var/lib/neutron/external/pids/202d4c1b-b1c2-4564-b679-1d789b189a11.pid.haproxy
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]:    daemon
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: defaults
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]:    log global
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]:    mode http
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]:    option httplog
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]:    option dontlognull
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]:    option http-server-close
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]:    option forwardfor
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]:    retries                 3
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]:    timeout http-request    30s
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]:    timeout connect         30s
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]:    timeout client          32s
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]:    timeout server          32s
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]:    timeout http-keep-alive 30s
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: listen listener
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]:    bind 169.254.169.254:80
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]:    http-request add-header X-OVN-Network-ID 202d4c1b-b1c2-4564-b679-1d789b189a11
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 06:24:07 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:07.815 163757 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11', 'env', 'PROCESS_TAG=haproxy-202d4c1b-b1c2-4564-b679-1d789b189a11', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/202d4c1b-b1c2-4564-b679-1d789b189a11.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 06:24:07 np0005542249 nova_compute[254900]: 2025-12-02 11:24:07.847 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:08 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1315: 321 pgs: 321 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 134 op/s
Dec  2 06:24:08 np0005542249 nova_compute[254900]: 2025-12-02 11:24:08.069 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674648.0682943, 7d55326f-52eb-4f7f-a9cd-05282ca6ca20 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:24:08 np0005542249 nova_compute[254900]: 2025-12-02 11:24:08.069 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] VM Started (Lifecycle Event)#033[00m
Dec  2 06:24:08 np0005542249 nova_compute[254900]: 2025-12-02 11:24:08.097 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:24:08 np0005542249 nova_compute[254900]: 2025-12-02 11:24:08.101 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674648.0686405, 7d55326f-52eb-4f7f-a9cd-05282ca6ca20 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:24:08 np0005542249 nova_compute[254900]: 2025-12-02 11:24:08.102 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] VM Paused (Lifecycle Event)#033[00m
Dec  2 06:24:08 np0005542249 nova_compute[254900]: 2025-12-02 11:24:08.121 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:24:08 np0005542249 nova_compute[254900]: 2025-12-02 11:24:08.126 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:24:08 np0005542249 nova_compute[254900]: 2025-12-02 11:24:08.151 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:24:08 np0005542249 nova_compute[254900]: 2025-12-02 11:24:08.243 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:08 np0005542249 podman[279066]: 2025-12-02 11:24:08.307801422 +0000 UTC m=+0.080309101 container create edd9211b774a10b8a03207a8a2ae6c558d7143a33ba55e8b3ea7273d7fba6a4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec  2 06:24:08 np0005542249 podman[279066]: 2025-12-02 11:24:08.262403646 +0000 UTC m=+0.034911325 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:24:08 np0005542249 systemd[1]: Started libpod-conmon-edd9211b774a10b8a03207a8a2ae6c558d7143a33ba55e8b3ea7273d7fba6a4a.scope.
Dec  2 06:24:08 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:24:08 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7456695519acebaf7b25892e7aab2f1867a04bec0b6409570bebc013aee17da/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 06:24:08 np0005542249 podman[279066]: 2025-12-02 11:24:08.441399823 +0000 UTC m=+0.213907552 container init edd9211b774a10b8a03207a8a2ae6c558d7143a33ba55e8b3ea7273d7fba6a4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec  2 06:24:08 np0005542249 podman[279066]: 2025-12-02 11:24:08.452164613 +0000 UTC m=+0.224672282 container start edd9211b774a10b8a03207a8a2ae6c558d7143a33ba55e8b3ea7273d7fba6a4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  2 06:24:08 np0005542249 neutron-haproxy-ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11[279081]: [NOTICE]   (279085) : New worker (279087) forked
Dec  2 06:24:08 np0005542249 neutron-haproxy-ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11[279081]: [NOTICE]   (279085) : Loading success.
Dec  2 06:24:10 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1316: 321 pgs: 321 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.9 MiB/s wr, 131 op/s
Dec  2 06:24:10 np0005542249 nova_compute[254900]: 2025-12-02 11:24:10.198 254904 DEBUG nova.compute.manager [req-e7585ebf-9629-40f0-8924-0542a36b4664 req-903c12b8-c956-4f9d-918c-af7878e6a76c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Received event network-vif-plugged-e6b169f0-73a9-4a53-93e6-a0290b2f4f4a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:24:10 np0005542249 nova_compute[254900]: 2025-12-02 11:24:10.199 254904 DEBUG oslo_concurrency.lockutils [req-e7585ebf-9629-40f0-8924-0542a36b4664 req-903c12b8-c956-4f9d-918c-af7878e6a76c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:10 np0005542249 nova_compute[254900]: 2025-12-02 11:24:10.199 254904 DEBUG oslo_concurrency.lockutils [req-e7585ebf-9629-40f0-8924-0542a36b4664 req-903c12b8-c956-4f9d-918c-af7878e6a76c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:10 np0005542249 nova_compute[254900]: 2025-12-02 11:24:10.200 254904 DEBUG oslo_concurrency.lockutils [req-e7585ebf-9629-40f0-8924-0542a36b4664 req-903c12b8-c956-4f9d-918c-af7878e6a76c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:24:10 np0005542249 nova_compute[254900]: 2025-12-02 11:24:10.200 254904 DEBUG nova.compute.manager [req-e7585ebf-9629-40f0-8924-0542a36b4664 req-903c12b8-c956-4f9d-918c-af7878e6a76c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Processing event network-vif-plugged-e6b169f0-73a9-4a53-93e6-a0290b2f4f4a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 06:24:10 np0005542249 nova_compute[254900]: 2025-12-02 11:24:10.201 254904 DEBUG nova.compute.manager [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 06:24:10 np0005542249 nova_compute[254900]: 2025-12-02 11:24:10.210 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674650.2098646, 7d55326f-52eb-4f7f-a9cd-05282ca6ca20 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:24:10 np0005542249 nova_compute[254900]: 2025-12-02 11:24:10.210 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] VM Resumed (Lifecycle Event)#033[00m
Dec  2 06:24:10 np0005542249 nova_compute[254900]: 2025-12-02 11:24:10.213 254904 DEBUG nova.virt.libvirt.driver [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 06:24:10 np0005542249 nova_compute[254900]: 2025-12-02 11:24:10.219 254904 INFO nova.virt.libvirt.driver [-] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Instance spawned successfully.#033[00m
Dec  2 06:24:10 np0005542249 nova_compute[254900]: 2025-12-02 11:24:10.220 254904 DEBUG nova.virt.libvirt.driver [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 06:24:10 np0005542249 nova_compute[254900]: 2025-12-02 11:24:10.246 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:24:10 np0005542249 nova_compute[254900]: 2025-12-02 11:24:10.258 254904 DEBUG nova.virt.libvirt.driver [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:24:10 np0005542249 nova_compute[254900]: 2025-12-02 11:24:10.259 254904 DEBUG nova.virt.libvirt.driver [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:24:10 np0005542249 nova_compute[254900]: 2025-12-02 11:24:10.261 254904 DEBUG nova.virt.libvirt.driver [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:24:10 np0005542249 nova_compute[254900]: 2025-12-02 11:24:10.262 254904 DEBUG nova.virt.libvirt.driver [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:24:10 np0005542249 nova_compute[254900]: 2025-12-02 11:24:10.263 254904 DEBUG nova.virt.libvirt.driver [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:24:10 np0005542249 nova_compute[254900]: 2025-12-02 11:24:10.264 254904 DEBUG nova.virt.libvirt.driver [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:24:10 np0005542249 nova_compute[254900]: 2025-12-02 11:24:10.269 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:24:10 np0005542249 nova_compute[254900]: 2025-12-02 11:24:10.316 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:24:10 np0005542249 nova_compute[254900]: 2025-12-02 11:24:10.365 254904 INFO nova.compute.manager [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Took 8.93 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 06:24:10 np0005542249 nova_compute[254900]: 2025-12-02 11:24:10.367 254904 DEBUG nova.compute.manager [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:24:10 np0005542249 nova_compute[254900]: 2025-12-02 11:24:10.444 254904 INFO nova.compute.manager [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Took 10.02 seconds to build instance.#033[00m
Dec  2 06:24:10 np0005542249 nova_compute[254900]: 2025-12-02 11:24:10.463 254904 DEBUG oslo_concurrency.lockutils [None req-ab0fb3a8-44e3-4928-98e4-4efcddb286ad b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:24:11 np0005542249 nova_compute[254900]: 2025-12-02 11:24:11.166 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:24:12 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1317: 321 pgs: 321 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Dec  2 06:24:12 np0005542249 nova_compute[254900]: 2025-12-02 11:24:12.309 254904 DEBUG nova.compute.manager [req-c42368a3-808a-4a4d-a46e-6d4e066a0c53 req-325d60a3-f1fa-4b91-a9c8-727e457e4b3d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Received event network-vif-plugged-e6b169f0-73a9-4a53-93e6-a0290b2f4f4a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:24:12 np0005542249 nova_compute[254900]: 2025-12-02 11:24:12.310 254904 DEBUG oslo_concurrency.lockutils [req-c42368a3-808a-4a4d-a46e-6d4e066a0c53 req-325d60a3-f1fa-4b91-a9c8-727e457e4b3d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:12 np0005542249 nova_compute[254900]: 2025-12-02 11:24:12.311 254904 DEBUG oslo_concurrency.lockutils [req-c42368a3-808a-4a4d-a46e-6d4e066a0c53 req-325d60a3-f1fa-4b91-a9c8-727e457e4b3d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:12 np0005542249 nova_compute[254900]: 2025-12-02 11:24:12.311 254904 DEBUG oslo_concurrency.lockutils [req-c42368a3-808a-4a4d-a46e-6d4e066a0c53 req-325d60a3-f1fa-4b91-a9c8-727e457e4b3d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:24:12 np0005542249 nova_compute[254900]: 2025-12-02 11:24:12.311 254904 DEBUG nova.compute.manager [req-c42368a3-808a-4a4d-a46e-6d4e066a0c53 req-325d60a3-f1fa-4b91-a9c8-727e457e4b3d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] No waiting events found dispatching network-vif-plugged-e6b169f0-73a9-4a53-93e6-a0290b2f4f4a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:24:12 np0005542249 nova_compute[254900]: 2025-12-02 11:24:12.312 254904 WARNING nova.compute.manager [req-c42368a3-808a-4a4d-a46e-6d4e066a0c53 req-325d60a3-f1fa-4b91-a9c8-727e457e4b3d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Received unexpected event network-vif-plugged-e6b169f0-73a9-4a53-93e6-a0290b2f4f4a for instance with vm_state active and task_state None.#033[00m
Dec  2 06:24:13 np0005542249 nova_compute[254900]: 2025-12-02 11:24:13.307 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:14 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1318: 321 pgs: 321 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 5.3 MiB/s rd, 1.8 MiB/s wr, 181 op/s
Dec  2 06:24:14 np0005542249 podman[279096]: 2025-12-02 11:24:14.048542006 +0000 UTC m=+0.114770172 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  2 06:24:14 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:14Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:25:b1:91 10.100.0.11
Dec  2 06:24:14 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:14Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:25:b1:91 10.100.0.11
Dec  2 06:24:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:24:15 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1424252155' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:24:15 np0005542249 nova_compute[254900]: 2025-12-02 11:24:15.152 254904 DEBUG nova.compute.manager [req-b5b3880f-bd90-429e-ad35-391838ef6a0f req-1c63b71a-2769-4fd2-9cf3-558eb07ad15d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Received event network-changed-e6b169f0-73a9-4a53-93e6-a0290b2f4f4a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:24:15 np0005542249 nova_compute[254900]: 2025-12-02 11:24:15.153 254904 DEBUG nova.compute.manager [req-b5b3880f-bd90-429e-ad35-391838ef6a0f req-1c63b71a-2769-4fd2-9cf3-558eb07ad15d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Refreshing instance network info cache due to event network-changed-e6b169f0-73a9-4a53-93e6-a0290b2f4f4a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:24:15 np0005542249 nova_compute[254900]: 2025-12-02 11:24:15.153 254904 DEBUG oslo_concurrency.lockutils [req-b5b3880f-bd90-429e-ad35-391838ef6a0f req-1c63b71a-2769-4fd2-9cf3-558eb07ad15d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-7d55326f-52eb-4f7f-a9cd-05282ca6ca20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:24:15 np0005542249 nova_compute[254900]: 2025-12-02 11:24:15.153 254904 DEBUG oslo_concurrency.lockutils [req-b5b3880f-bd90-429e-ad35-391838ef6a0f req-1c63b71a-2769-4fd2-9cf3-558eb07ad15d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-7d55326f-52eb-4f7f-a9cd-05282ca6ca20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:24:15 np0005542249 nova_compute[254900]: 2025-12-02 11:24:15.153 254904 DEBUG nova.network.neutron [req-b5b3880f-bd90-429e-ad35-391838ef6a0f req-1c63b71a-2769-4fd2-9cf3-558eb07ad15d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Refreshing network info cache for port e6b169f0-73a9-4a53-93e6-a0290b2f4f4a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:24:16 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1319: 321 pgs: 321 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 4.5 MiB/s rd, 2.1 MiB/s wr, 174 op/s
Dec  2 06:24:16 np0005542249 nova_compute[254900]: 2025-12-02 11:24:16.169 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Dec  2 06:24:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Dec  2 06:24:16 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Dec  2 06:24:16 np0005542249 nova_compute[254900]: 2025-12-02 11:24:16.807 254904 DEBUG nova.network.neutron [req-b5b3880f-bd90-429e-ad35-391838ef6a0f req-1c63b71a-2769-4fd2-9cf3-558eb07ad15d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Updated VIF entry in instance network info cache for port e6b169f0-73a9-4a53-93e6-a0290b2f4f4a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:24:16 np0005542249 nova_compute[254900]: 2025-12-02 11:24:16.808 254904 DEBUG nova.network.neutron [req-b5b3880f-bd90-429e-ad35-391838ef6a0f req-1c63b71a-2769-4fd2-9cf3-558eb07ad15d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Updating instance_info_cache with network_info: [{"id": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "address": "fa:16:3e:02:ef:76", "network": {"id": "202d4c1b-b1c2-4564-b679-1d789b189a11", "bridge": "br-int", "label": "tempest-TestStampPattern-951670540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bda44a38b8b4f31a8b6e8f6f0548898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6b169f0-73", "ovs_interfaceid": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:24:16 np0005542249 nova_compute[254900]: 2025-12-02 11:24:16.832 254904 DEBUG oslo_concurrency.lockutils [req-b5b3880f-bd90-429e-ad35-391838ef6a0f req-1c63b71a-2769-4fd2-9cf3-558eb07ad15d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-7d55326f-52eb-4f7f-a9cd-05282ca6ca20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:24:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:24:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Dec  2 06:24:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Dec  2 06:24:17 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Dec  2 06:24:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:24:17 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:24:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:24:17 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:24:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:24:17 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:24:17 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev fa6b6a26-7544-4745-b828-c6319dc5c077 does not exist
Dec  2 06:24:17 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev de4359c4-67cd-4b4c-ac1c-f2194ad18bda does not exist
Dec  2 06:24:17 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev f15a26c3-52a9-4868-a466-cb9538a05eb0 does not exist
Dec  2 06:24:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:24:17 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:24:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:24:17 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:24:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:24:17 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:24:17 np0005542249 podman[279278]: 2025-12-02 11:24:17.950204675 +0000 UTC m=+0.095288755 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  2 06:24:18 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1322: 321 pgs: 321 active+clean; 2.4 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 4.7 MiB/s rd, 8.1 MiB/s wr, 295 op/s
Dec  2 06:24:18 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:24:18 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:24:18 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:24:18 np0005542249 nova_compute[254900]: 2025-12-02 11:24:18.309 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:18 np0005542249 podman[279411]: 2025-12-02 11:24:18.529238021 +0000 UTC m=+0.068717038 container create 23f070dae28c161783467b5f8650a484a55d57c2da680fac0aa82205fea3d3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_elgamal, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:24:18 np0005542249 systemd[1]: Started libpod-conmon-23f070dae28c161783467b5f8650a484a55d57c2da680fac0aa82205fea3d3d7.scope.
Dec  2 06:24:18 np0005542249 podman[279411]: 2025-12-02 11:24:18.499817906 +0000 UTC m=+0.039296963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:24:18 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:24:18 np0005542249 podman[279411]: 2025-12-02 11:24:18.648259347 +0000 UTC m=+0.187738414 container init 23f070dae28c161783467b5f8650a484a55d57c2da680fac0aa82205fea3d3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:24:18 np0005542249 podman[279411]: 2025-12-02 11:24:18.661766703 +0000 UTC m=+0.201245720 container start 23f070dae28c161783467b5f8650a484a55d57c2da680fac0aa82205fea3d3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_elgamal, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:24:18 np0005542249 podman[279411]: 2025-12-02 11:24:18.665864313 +0000 UTC m=+0.205343300 container attach 23f070dae28c161783467b5f8650a484a55d57c2da680fac0aa82205fea3d3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_elgamal, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:24:18 np0005542249 friendly_elgamal[279428]: 167 167
Dec  2 06:24:18 np0005542249 systemd[1]: libpod-23f070dae28c161783467b5f8650a484a55d57c2da680fac0aa82205fea3d3d7.scope: Deactivated successfully.
Dec  2 06:24:18 np0005542249 podman[279411]: 2025-12-02 11:24:18.67386442 +0000 UTC m=+0.213343437 container died 23f070dae28c161783467b5f8650a484a55d57c2da680fac0aa82205fea3d3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  2 06:24:18 np0005542249 systemd[1]: var-lib-containers-storage-overlay-825f1afa4bcee4072175432cab43550309eee0838f260114409d04c7c99f806e-merged.mount: Deactivated successfully.
Dec  2 06:24:18 np0005542249 podman[279411]: 2025-12-02 11:24:18.728517637 +0000 UTC m=+0.267996604 container remove 23f070dae28c161783467b5f8650a484a55d57c2da680fac0aa82205fea3d3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_elgamal, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:24:18 np0005542249 systemd[1]: libpod-conmon-23f070dae28c161783467b5f8650a484a55d57c2da680fac0aa82205fea3d3d7.scope: Deactivated successfully.
Dec  2 06:24:18 np0005542249 podman[279451]: 2025-12-02 11:24:18.990538977 +0000 UTC m=+0.068292497 container create e44d3c90a70129af761d2ee65f0825a597fe8eb576c53f8380c5d84d352fa32f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_babbage, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Dec  2 06:24:19 np0005542249 podman[279451]: 2025-12-02 11:24:18.957133464 +0000 UTC m=+0.034887054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:24:19 np0005542249 systemd[1]: Started libpod-conmon-e44d3c90a70129af761d2ee65f0825a597fe8eb576c53f8380c5d84d352fa32f.scope.
Dec  2 06:24:19 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:24:19 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5d54dd0d5496e78a41aac0b94b7e507569b9157d2cd85b34dca07800d38918a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:24:19 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5d54dd0d5496e78a41aac0b94b7e507569b9157d2cd85b34dca07800d38918a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:24:19 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5d54dd0d5496e78a41aac0b94b7e507569b9157d2cd85b34dca07800d38918a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:24:19 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5d54dd0d5496e78a41aac0b94b7e507569b9157d2cd85b34dca07800d38918a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:24:19 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5d54dd0d5496e78a41aac0b94b7e507569b9157d2cd85b34dca07800d38918a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:24:19 np0005542249 podman[279451]: 2025-12-02 11:24:19.137264542 +0000 UTC m=+0.215018112 container init e44d3c90a70129af761d2ee65f0825a597fe8eb576c53f8380c5d84d352fa32f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_babbage, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:24:19 np0005542249 podman[279451]: 2025-12-02 11:24:19.150714525 +0000 UTC m=+0.228468045 container start e44d3c90a70129af761d2ee65f0825a597fe8eb576c53f8380c5d84d352fa32f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:24:19 np0005542249 podman[279451]: 2025-12-02 11:24:19.164923818 +0000 UTC m=+0.242677388 container attach e44d3c90a70129af761d2ee65f0825a597fe8eb576c53f8380c5d84d352fa32f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:24:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:24:19 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2406762571' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:24:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:19.840 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:19.842 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:19.844 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:24:20 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1323: 321 pgs: 321 active+clean; 2.4 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 6.0 MiB/s rd, 8.5 MiB/s wr, 303 op/s
Dec  2 06:24:20 np0005542249 fervent_babbage[279467]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:24:20 np0005542249 fervent_babbage[279467]: --> relative data size: 1.0
Dec  2 06:24:20 np0005542249 fervent_babbage[279467]: --> All data devices are unavailable
Dec  2 06:24:20 np0005542249 systemd[1]: libpod-e44d3c90a70129af761d2ee65f0825a597fe8eb576c53f8380c5d84d352fa32f.scope: Deactivated successfully.
Dec  2 06:24:20 np0005542249 systemd[1]: libpod-e44d3c90a70129af761d2ee65f0825a597fe8eb576c53f8380c5d84d352fa32f.scope: Consumed 1.101s CPU time.
Dec  2 06:24:20 np0005542249 conmon[279467]: conmon e44d3c90a70129af761d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e44d3c90a70129af761d2ee65f0825a597fe8eb576c53f8380c5d84d352fa32f.scope/container/memory.events
Dec  2 06:24:20 np0005542249 podman[279451]: 2025-12-02 11:24:20.328243073 +0000 UTC m=+1.405996573 container died e44d3c90a70129af761d2ee65f0825a597fe8eb576c53f8380c5d84d352fa32f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_babbage, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  2 06:24:20 np0005542249 systemd[1]: var-lib-containers-storage-overlay-e5d54dd0d5496e78a41aac0b94b7e507569b9157d2cd85b34dca07800d38918a-merged.mount: Deactivated successfully.
Dec  2 06:24:20 np0005542249 podman[279451]: 2025-12-02 11:24:20.396309093 +0000 UTC m=+1.474062583 container remove e44d3c90a70129af761d2ee65f0825a597fe8eb576c53f8380c5d84d352fa32f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_babbage, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Dec  2 06:24:20 np0005542249 systemd[1]: libpod-conmon-e44d3c90a70129af761d2ee65f0825a597fe8eb576c53f8380c5d84d352fa32f.scope: Deactivated successfully.
Dec  2 06:24:21 np0005542249 nova_compute[254900]: 2025-12-02 11:24:21.172 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Dec  2 06:24:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Dec  2 06:24:21 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Dec  2 06:24:21 np0005542249 podman[279649]: 2025-12-02 11:24:21.341271936 +0000 UTC m=+0.114343061 container create c44796cef27fc2fec227b92b2cc64fad39a8a9cfdfeb9235ad60cbe4d1b38597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:24:21 np0005542249 podman[279649]: 2025-12-02 11:24:21.309728104 +0000 UTC m=+0.082799249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:24:21 np0005542249 systemd[1]: Started libpod-conmon-c44796cef27fc2fec227b92b2cc64fad39a8a9cfdfeb9235ad60cbe4d1b38597.scope.
Dec  2 06:24:21 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:24:21 np0005542249 podman[279649]: 2025-12-02 11:24:21.466469129 +0000 UTC m=+0.239540254 container init c44796cef27fc2fec227b92b2cc64fad39a8a9cfdfeb9235ad60cbe4d1b38597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_rhodes, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:24:21 np0005542249 podman[279649]: 2025-12-02 11:24:21.475399681 +0000 UTC m=+0.248470776 container start c44796cef27fc2fec227b92b2cc64fad39a8a9cfdfeb9235ad60cbe4d1b38597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:24:21 np0005542249 podman[279649]: 2025-12-02 11:24:21.480122238 +0000 UTC m=+0.253193333 container attach c44796cef27fc2fec227b92b2cc64fad39a8a9cfdfeb9235ad60cbe4d1b38597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_rhodes, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  2 06:24:21 np0005542249 lucid_rhodes[279665]: 167 167
Dec  2 06:24:21 np0005542249 systemd[1]: libpod-c44796cef27fc2fec227b92b2cc64fad39a8a9cfdfeb9235ad60cbe4d1b38597.scope: Deactivated successfully.
Dec  2 06:24:21 np0005542249 podman[279649]: 2025-12-02 11:24:21.48647071 +0000 UTC m=+0.259541805 container died c44796cef27fc2fec227b92b2cc64fad39a8a9cfdfeb9235ad60cbe4d1b38597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_rhodes, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:24:21 np0005542249 systemd[1]: var-lib-containers-storage-overlay-9eccb6e7f47482461b2029c2925ec0d12ad39a88295aeab94949be4b48137d8d-merged.mount: Deactivated successfully.
Dec  2 06:24:21 np0005542249 podman[279649]: 2025-12-02 11:24:21.530810977 +0000 UTC m=+0.303882072 container remove c44796cef27fc2fec227b92b2cc64fad39a8a9cfdfeb9235ad60cbe4d1b38597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Dec  2 06:24:21 np0005542249 systemd[1]: libpod-conmon-c44796cef27fc2fec227b92b2cc64fad39a8a9cfdfeb9235ad60cbe4d1b38597.scope: Deactivated successfully.
Dec  2 06:24:21 np0005542249 podman[279691]: 2025-12-02 11:24:21.774510362 +0000 UTC m=+0.046074256 container create e4040a668223d207106a7615c674d5a2982ef3ffb47473b8b88b7a7f40f8073c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  2 06:24:21 np0005542249 systemd[1]: Started libpod-conmon-e4040a668223d207106a7615c674d5a2982ef3ffb47473b8b88b7a7f40f8073c.scope.
Dec  2 06:24:21 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:24:21 np0005542249 podman[279691]: 2025-12-02 11:24:21.755445628 +0000 UTC m=+0.027009552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:24:21 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b5127f5117088af339e92578c470740941461533cbfb1bce4314d54d0944bbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:24:21 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b5127f5117088af339e92578c470740941461533cbfb1bce4314d54d0944bbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:24:21 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b5127f5117088af339e92578c470740941461533cbfb1bce4314d54d0944bbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:24:21 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b5127f5117088af339e92578c470740941461533cbfb1bce4314d54d0944bbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:24:21 np0005542249 podman[279691]: 2025-12-02 11:24:21.871890534 +0000 UTC m=+0.143454498 container init e4040a668223d207106a7615c674d5a2982ef3ffb47473b8b88b7a7f40f8073c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sutherland, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:24:21 np0005542249 podman[279691]: 2025-12-02 11:24:21.883151928 +0000 UTC m=+0.154715872 container start e4040a668223d207106a7615c674d5a2982ef3ffb47473b8b88b7a7f40f8073c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sutherland, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:24:21 np0005542249 podman[279691]: 2025-12-02 11:24:21.887756422 +0000 UTC m=+0.159320366 container attach e4040a668223d207106a7615c674d5a2982ef3ffb47473b8b88b7a7f40f8073c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sutherland, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  2 06:24:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:24:22 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1325: 321 pgs: 321 active+clean; 2.4 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 6.5 MiB/s rd, 10 MiB/s wr, 232 op/s
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]: {
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:    "0": [
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:        {
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "devices": [
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "/dev/loop3"
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            ],
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "lv_name": "ceph_lv0",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "lv_size": "21470642176",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "name": "ceph_lv0",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "tags": {
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.cluster_name": "ceph",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.crush_device_class": "",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.encrypted": "0",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.osd_id": "0",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.type": "block",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.vdo": "0"
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            },
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "type": "block",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "vg_name": "ceph_vg0"
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:        }
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:    ],
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:    "1": [
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:        {
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "devices": [
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "/dev/loop4"
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            ],
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "lv_name": "ceph_lv1",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "lv_size": "21470642176",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "name": "ceph_lv1",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "tags": {
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.cluster_name": "ceph",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.crush_device_class": "",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.encrypted": "0",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.osd_id": "1",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.type": "block",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.vdo": "0"
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            },
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "type": "block",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "vg_name": "ceph_vg1"
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:        }
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:    ],
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:    "2": [
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:        {
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "devices": [
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "/dev/loop5"
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            ],
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "lv_name": "ceph_lv2",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "lv_size": "21470642176",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "name": "ceph_lv2",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "tags": {
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.cluster_name": "ceph",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.crush_device_class": "",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.encrypted": "0",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.osd_id": "2",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.type": "block",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:                "ceph.vdo": "0"
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            },
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "type": "block",
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:            "vg_name": "ceph_vg2"
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:        }
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]:    ]
Dec  2 06:24:22 np0005542249 busy_sutherland[279707]: }
Dec  2 06:24:22 np0005542249 systemd[1]: libpod-e4040a668223d207106a7615c674d5a2982ef3ffb47473b8b88b7a7f40f8073c.scope: Deactivated successfully.
Dec  2 06:24:22 np0005542249 podman[279691]: 2025-12-02 11:24:22.799555049 +0000 UTC m=+1.071119033 container died e4040a668223d207106a7615c674d5a2982ef3ffb47473b8b88b7a7f40f8073c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sutherland, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Dec  2 06:24:22 np0005542249 systemd[1]: var-lib-containers-storage-overlay-2b5127f5117088af339e92578c470740941461533cbfb1bce4314d54d0944bbb-merged.mount: Deactivated successfully.
Dec  2 06:24:22 np0005542249 podman[279691]: 2025-12-02 11:24:22.87323065 +0000 UTC m=+1.144794565 container remove e4040a668223d207106a7615c674d5a2982ef3ffb47473b8b88b7a7f40f8073c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:24:22 np0005542249 systemd[1]: libpod-conmon-e4040a668223d207106a7615c674d5a2982ef3ffb47473b8b88b7a7f40f8073c.scope: Deactivated successfully.
Dec  2 06:24:23 np0005542249 nova_compute[254900]: 2025-12-02 11:24:23.313 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:23 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:23Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:02:ef:76 10.100.0.9
Dec  2 06:24:23 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:23Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:ef:76 10.100.0.9
Dec  2 06:24:23 np0005542249 podman[279870]: 2025-12-02 11:24:23.854191348 +0000 UTC m=+0.080846606 container create 88eaf26b8825c9fec8cfe4018144effc3c57c59eea9214867bc4645308f8579c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kare, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:24:23 np0005542249 podman[279870]: 2025-12-02 11:24:23.820927039 +0000 UTC m=+0.047582367 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:24:23 np0005542249 systemd[1]: Started libpod-conmon-88eaf26b8825c9fec8cfe4018144effc3c57c59eea9214867bc4645308f8579c.scope.
Dec  2 06:24:23 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:24:23 np0005542249 podman[279870]: 2025-12-02 11:24:23.973337287 +0000 UTC m=+0.199992605 container init 88eaf26b8825c9fec8cfe4018144effc3c57c59eea9214867bc4645308f8579c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kare, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  2 06:24:23 np0005542249 podman[279870]: 2025-12-02 11:24:23.98712607 +0000 UTC m=+0.213781338 container start 88eaf26b8825c9fec8cfe4018144effc3c57c59eea9214867bc4645308f8579c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:24:23 np0005542249 podman[279870]: 2025-12-02 11:24:23.992252328 +0000 UTC m=+0.218907636 container attach 88eaf26b8825c9fec8cfe4018144effc3c57c59eea9214867bc4645308f8579c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kare, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  2 06:24:23 np0005542249 quirky_kare[279886]: 167 167
Dec  2 06:24:23 np0005542249 systemd[1]: libpod-88eaf26b8825c9fec8cfe4018144effc3c57c59eea9214867bc4645308f8579c.scope: Deactivated successfully.
Dec  2 06:24:23 np0005542249 podman[279870]: 2025-12-02 11:24:23.997443888 +0000 UTC m=+0.224099136 container died 88eaf26b8825c9fec8cfe4018144effc3c57c59eea9214867bc4645308f8579c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kare, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  2 06:24:24 np0005542249 systemd[1]: var-lib-containers-storage-overlay-25edb407894ae487a451e57705ec11335aa39a5ec3956c2c1966402c3ff220d0-merged.mount: Deactivated successfully.
Dec  2 06:24:24 np0005542249 podman[279870]: 2025-12-02 11:24:24.044928121 +0000 UTC m=+0.271583339 container remove 88eaf26b8825c9fec8cfe4018144effc3c57c59eea9214867bc4645308f8579c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kare, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:24:24 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1326: 321 pgs: 321 active+clean; 2.4 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 5.7 MiB/s rd, 11 MiB/s wr, 210 op/s
Dec  2 06:24:24 np0005542249 systemd[1]: libpod-conmon-88eaf26b8825c9fec8cfe4018144effc3c57c59eea9214867bc4645308f8579c.scope: Deactivated successfully.
Dec  2 06:24:24 np0005542249 podman[279910]: 2025-12-02 11:24:24.274480164 +0000 UTC m=+0.069960061 container create 2dfb1009d8a5656d3a357e4648ae424969d2674e528aa9727033d9b19f268d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_torvalds, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:24:24 np0005542249 systemd[1]: Started libpod-conmon-2dfb1009d8a5656d3a357e4648ae424969d2674e528aa9727033d9b19f268d82.scope.
Dec  2 06:24:24 np0005542249 podman[279910]: 2025-12-02 11:24:24.253092617 +0000 UTC m=+0.048572504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:24:24 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:24:24 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29873e443ead75f681e207a6fff94cee939bf211a136c69b6bdfb6cc6f1adb32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:24:24 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29873e443ead75f681e207a6fff94cee939bf211a136c69b6bdfb6cc6f1adb32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:24:24 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29873e443ead75f681e207a6fff94cee939bf211a136c69b6bdfb6cc6f1adb32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:24:24 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29873e443ead75f681e207a6fff94cee939bf211a136c69b6bdfb6cc6f1adb32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:24:24 np0005542249 podman[279910]: 2025-12-02 11:24:24.405526525 +0000 UTC m=+0.201006432 container init 2dfb1009d8a5656d3a357e4648ae424969d2674e528aa9727033d9b19f268d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:24:24 np0005542249 podman[279910]: 2025-12-02 11:24:24.415130984 +0000 UTC m=+0.210610861 container start 2dfb1009d8a5656d3a357e4648ae424969d2674e528aa9727033d9b19f268d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  2 06:24:24 np0005542249 podman[279910]: 2025-12-02 11:24:24.419227225 +0000 UTC m=+0.214707142 container attach 2dfb1009d8a5656d3a357e4648ae424969d2674e528aa9727033d9b19f268d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_torvalds, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:24:25 np0005542249 sweet_torvalds[279927]: {
Dec  2 06:24:25 np0005542249 sweet_torvalds[279927]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:24:25 np0005542249 sweet_torvalds[279927]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:24:25 np0005542249 sweet_torvalds[279927]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:24:25 np0005542249 sweet_torvalds[279927]:        "osd_id": 0,
Dec  2 06:24:25 np0005542249 sweet_torvalds[279927]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:24:25 np0005542249 sweet_torvalds[279927]:        "type": "bluestore"
Dec  2 06:24:25 np0005542249 sweet_torvalds[279927]:    },
Dec  2 06:24:25 np0005542249 sweet_torvalds[279927]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:24:25 np0005542249 sweet_torvalds[279927]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:24:25 np0005542249 sweet_torvalds[279927]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:24:25 np0005542249 sweet_torvalds[279927]:        "osd_id": 2,
Dec  2 06:24:25 np0005542249 sweet_torvalds[279927]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:24:25 np0005542249 sweet_torvalds[279927]:        "type": "bluestore"
Dec  2 06:24:25 np0005542249 sweet_torvalds[279927]:    },
Dec  2 06:24:25 np0005542249 sweet_torvalds[279927]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:24:25 np0005542249 sweet_torvalds[279927]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:24:25 np0005542249 sweet_torvalds[279927]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:24:25 np0005542249 sweet_torvalds[279927]:        "osd_id": 1,
Dec  2 06:24:25 np0005542249 sweet_torvalds[279927]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:24:25 np0005542249 sweet_torvalds[279927]:        "type": "bluestore"
Dec  2 06:24:25 np0005542249 sweet_torvalds[279927]:    }
Dec  2 06:24:25 np0005542249 sweet_torvalds[279927]: }
Dec  2 06:24:25 np0005542249 systemd[1]: libpod-2dfb1009d8a5656d3a357e4648ae424969d2674e528aa9727033d9b19f268d82.scope: Deactivated successfully.
Dec  2 06:24:25 np0005542249 podman[279910]: 2025-12-02 11:24:25.60879783 +0000 UTC m=+1.404277727 container died 2dfb1009d8a5656d3a357e4648ae424969d2674e528aa9727033d9b19f268d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_torvalds, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:24:25 np0005542249 systemd[1]: libpod-2dfb1009d8a5656d3a357e4648ae424969d2674e528aa9727033d9b19f268d82.scope: Consumed 1.204s CPU time.
Dec  2 06:24:25 np0005542249 systemd[1]: var-lib-containers-storage-overlay-29873e443ead75f681e207a6fff94cee939bf211a136c69b6bdfb6cc6f1adb32-merged.mount: Deactivated successfully.
Dec  2 06:24:25 np0005542249 podman[279910]: 2025-12-02 11:24:25.698720959 +0000 UTC m=+1.494200836 container remove 2dfb1009d8a5656d3a357e4648ae424969d2674e528aa9727033d9b19f268d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:24:25 np0005542249 systemd[1]: libpod-conmon-2dfb1009d8a5656d3a357e4648ae424969d2674e528aa9727033d9b19f268d82.scope: Deactivated successfully.
Dec  2 06:24:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:24:25 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:24:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:24:25 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:24:25 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 93857422-9bbb-44d7-b069-171b623252e9 does not exist
Dec  2 06:24:25 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 9270a9e8-f3eb-4b11-acd4-857cd29c3fdd does not exist
Dec  2 06:24:26 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1327: 321 pgs: 321 active+clean; 2.5 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 4.7 MiB/s rd, 7.0 MiB/s wr, 187 op/s
Dec  2 06:24:26 np0005542249 nova_compute[254900]: 2025-12-02 11:24:26.175 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:24:26
Dec  2 06:24:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:24:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:24:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.meta', 'vms', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'default.rgw.control']
Dec  2 06:24:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:24:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:24:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:24:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:24:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:24:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:24:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:24:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:24:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:24:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:24:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:24:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:24:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:24:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:24:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:24:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:24:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:24:26 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:24:26 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:24:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:24:28 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1328: 321 pgs: 321 active+clean; 2.5 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 4.1 MiB/s rd, 5.0 MiB/s wr, 146 op/s
Dec  2 06:24:28 np0005542249 nova_compute[254900]: 2025-12-02 11:24:28.316 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:30 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1329: 321 pgs: 321 active+clean; 2.5 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.7 MiB/s wr, 120 op/s
Dec  2 06:24:30 np0005542249 nova_compute[254900]: 2025-12-02 11:24:30.892 254904 DEBUG oslo_concurrency.lockutils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Acquiring lock "c8ef6338-699a-4f16-8f0b-39f2bb67ee45" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:30 np0005542249 nova_compute[254900]: 2025-12-02 11:24:30.893 254904 DEBUG oslo_concurrency.lockutils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Lock "c8ef6338-699a-4f16-8f0b-39f2bb67ee45" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:30 np0005542249 nova_compute[254900]: 2025-12-02 11:24:30.925 254904 DEBUG nova.compute.manager [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 06:24:31 np0005542249 nova_compute[254900]: 2025-12-02 11:24:31.032 254904 DEBUG oslo_concurrency.lockutils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:31 np0005542249 nova_compute[254900]: 2025-12-02 11:24:31.032 254904 DEBUG oslo_concurrency.lockutils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:31 np0005542249 nova_compute[254900]: 2025-12-02 11:24:31.044 254904 DEBUG nova.virt.hardware [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 06:24:31 np0005542249 nova_compute[254900]: 2025-12-02 11:24:31.044 254904 INFO nova.compute.claims [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 06:24:31 np0005542249 nova_compute[254900]: 2025-12-02 11:24:31.180 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:31 np0005542249 nova_compute[254900]: 2025-12-02 11:24:31.237 254904 DEBUG oslo_concurrency.processutils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:24:31 np0005542249 nova_compute[254900]: 2025-12-02 11:24:31.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:24:31 np0005542249 nova_compute[254900]: 2025-12-02 11:24:31.383 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  2 06:24:31 np0005542249 nova_compute[254900]: 2025-12-02 11:24:31.411 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  2 06:24:31 np0005542249 nova_compute[254900]: 2025-12-02 11:24:31.428 254904 DEBUG oslo_concurrency.lockutils [None req-a126e791-3c7b-49a2-b5a9-34e0fb94df54 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Acquiring lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:31 np0005542249 nova_compute[254900]: 2025-12-02 11:24:31.430 254904 DEBUG oslo_concurrency.lockutils [None req-a126e791-3c7b-49a2-b5a9-34e0fb94df54 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:31 np0005542249 nova_compute[254900]: 2025-12-02 11:24:31.459 254904 DEBUG nova.objects.instance [None req-a126e791-3c7b-49a2-b5a9-34e0fb94df54 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lazy-loading 'flavor' on Instance uuid 7d55326f-52eb-4f7f-a9cd-05282ca6ca20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:24:31 np0005542249 nova_compute[254900]: 2025-12-02 11:24:31.522 254904 DEBUG oslo_concurrency.lockutils [None req-a126e791-3c7b-49a2-b5a9-34e0fb94df54 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:24:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:24:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2867116747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:24:31 np0005542249 nova_compute[254900]: 2025-12-02 11:24:31.842 254904 DEBUG oslo_concurrency.processutils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:24:31 np0005542249 nova_compute[254900]: 2025-12-02 11:24:31.850 254904 DEBUG nova.compute.provider_tree [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:24:31 np0005542249 nova_compute[254900]: 2025-12-02 11:24:31.857 254904 DEBUG oslo_concurrency.lockutils [None req-a126e791-3c7b-49a2-b5a9-34e0fb94df54 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Acquiring lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:31 np0005542249 nova_compute[254900]: 2025-12-02 11:24:31.857 254904 DEBUG oslo_concurrency.lockutils [None req-a126e791-3c7b-49a2-b5a9-34e0fb94df54 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:31 np0005542249 nova_compute[254900]: 2025-12-02 11:24:31.857 254904 INFO nova.compute.manager [None req-a126e791-3c7b-49a2-b5a9-34e0fb94df54 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Attaching volume 2fa3b583-c9ba-40ef-a4ef-c5bc4eb1601a to /dev/vdb#033[00m
Dec  2 06:24:31 np0005542249 nova_compute[254900]: 2025-12-02 11:24:31.868 254904 DEBUG nova.scheduler.client.report [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:24:31 np0005542249 nova_compute[254900]: 2025-12-02 11:24:31.908 254904 DEBUG oslo_concurrency.lockutils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:24:31 np0005542249 nova_compute[254900]: 2025-12-02 11:24:31.909 254904 DEBUG nova.compute.manager [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 06:24:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:24:31 np0005542249 nova_compute[254900]: 2025-12-02 11:24:31.953 254904 DEBUG nova.compute.manager [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 06:24:31 np0005542249 nova_compute[254900]: 2025-12-02 11:24:31.954 254904 DEBUG nova.network.neutron [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 06:24:31 np0005542249 nova_compute[254900]: 2025-12-02 11:24:31.982 254904 INFO nova.virt.libvirt.driver [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 06:24:31 np0005542249 nova_compute[254900]: 2025-12-02 11:24:31.998 254904 DEBUG nova.compute.manager [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.043 254904 DEBUG os_brick.utils [None req-a126e791-3c7b-49a2-b5a9-34e0fb94df54 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.047 254904 INFO nova.virt.block_device [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Booting with volume cb1b3701-a571-4a54-8162-4cc9b695c6f4 at /dev/vda#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.047 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:24:32 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1330: 321 pgs: 321 active+clean; 2.5 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.4 MiB/s wr, 112 op/s
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.069 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.069 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[a8d61ca4-f55e-430f-9b08-bdca4f711802]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.074 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.111 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.111 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[1108f835-a63f-4143-a7ed-c392e94ac9cb]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.114 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.122 254904 DEBUG nova.policy [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1e7562e263fa4d47ae69c1891e4b61ff', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674916d2c2d94b239e86c66b1f0af922', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.130 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.131 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0c671c-9511-4111-adf4-c0cbacf6e201]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.132 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[864e0f65-0dce-44ac-a3ec-52f45c1f9e00]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.133 254904 DEBUG oslo_concurrency.processutils [None req-a126e791-3c7b-49a2-b5a9-34e0fb94df54 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.172 254904 DEBUG oslo_concurrency.processutils [None req-a126e791-3c7b-49a2-b5a9-34e0fb94df54 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] CMD "nvme version" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.176 254904 DEBUG os_brick.initiator.connectors.lightos [None req-a126e791-3c7b-49a2-b5a9-34e0fb94df54 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.177 254904 DEBUG os_brick.initiator.connectors.lightos [None req-a126e791-3c7b-49a2-b5a9-34e0fb94df54 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.177 254904 DEBUG os_brick.initiator.connectors.lightos [None req-a126e791-3c7b-49a2-b5a9-34e0fb94df54 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.178 254904 DEBUG os_brick.utils [None req-a126e791-3c7b-49a2-b5a9-34e0fb94df54 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] <== get_connector_properties: return (133ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.179 254904 DEBUG nova.virt.block_device [None req-a126e791-3c7b-49a2-b5a9-34e0fb94df54 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Updating existing volume attachment record: d8c790d1-a843-4de2-9f5d-0c490fb1ec14 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.248 254904 DEBUG os_brick.utils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.250 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.270 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.270 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[9d11e0bb-756a-40d0-83c4-ee2764cc1ad1]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.272 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.288 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.288 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[09cc010b-9f93-44c0-b7ef-b329ef14f0d6]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.291 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.308 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.309 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[6b586770-b0d3-4c8b-a172-c8e70fd5d740]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.310 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d92ae3-9fd8-4121-a65a-488d4e06b45f]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.311 254904 DEBUG oslo_concurrency.processutils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.347 254904 DEBUG oslo_concurrency.processutils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] CMD "nvme version" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.350 254904 DEBUG os_brick.initiator.connectors.lightos [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.351 254904 DEBUG os_brick.initiator.connectors.lightos [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.351 254904 DEBUG os_brick.initiator.connectors.lightos [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.352 254904 DEBUG os_brick.utils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] <== get_connector_properties: return (102ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.352 254904 DEBUG nova.virt.block_device [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Updating existing volume attachment record: c8552908-a5d0-4e40-9bea-69622a564e6e _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:24:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:24:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2822042311' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.905 254904 DEBUG nova.network.neutron [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Successfully created port: 677e9f6e-c8b4-450a-a8fa-47c99433fedf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.947 254904 DEBUG nova.objects.instance [None req-a126e791-3c7b-49a2-b5a9-34e0fb94df54 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lazy-loading 'flavor' on Instance uuid 7d55326f-52eb-4f7f-a9cd-05282ca6ca20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.988 254904 DEBUG nova.virt.libvirt.driver [None req-a126e791-3c7b-49a2-b5a9-34e0fb94df54 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Attempting to attach volume 2fa3b583-c9ba-40ef-a4ef-c5bc4eb1601a with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Dec  2 06:24:32 np0005542249 nova_compute[254900]: 2025-12-02 11:24:32.992 254904 DEBUG nova.virt.libvirt.guest [None req-a126e791-3c7b-49a2-b5a9-34e0fb94df54 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] attach device xml: <disk type="network" device="disk">
Dec  2 06:24:32 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:24:32 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-2fa3b583-c9ba-40ef-a4ef-c5bc4eb1601a">
Dec  2 06:24:32 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:24:32 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:24:32 np0005542249 nova_compute[254900]:  <auth username="openstack">
Dec  2 06:24:32 np0005542249 nova_compute[254900]:    <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:24:32 np0005542249 nova_compute[254900]:  </auth>
Dec  2 06:24:32 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:24:32 np0005542249 nova_compute[254900]:  <serial>2fa3b583-c9ba-40ef-a4ef-c5bc4eb1601a</serial>
Dec  2 06:24:32 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:24:32 np0005542249 nova_compute[254900]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  2 06:24:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:24:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1473389314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:24:33 np0005542249 nova_compute[254900]: 2025-12-02 11:24:33.148 254904 DEBUG nova.virt.libvirt.driver [None req-a126e791-3c7b-49a2-b5a9-34e0fb94df54 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:24:33 np0005542249 nova_compute[254900]: 2025-12-02 11:24:33.149 254904 DEBUG nova.virt.libvirt.driver [None req-a126e791-3c7b-49a2-b5a9-34e0fb94df54 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:24:33 np0005542249 nova_compute[254900]: 2025-12-02 11:24:33.149 254904 DEBUG nova.virt.libvirt.driver [None req-a126e791-3c7b-49a2-b5a9-34e0fb94df54 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:24:33 np0005542249 nova_compute[254900]: 2025-12-02 11:24:33.150 254904 DEBUG nova.virt.libvirt.driver [None req-a126e791-3c7b-49a2-b5a9-34e0fb94df54 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] No VIF found with MAC fa:16:3e:02:ef:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:24:33 np0005542249 nova_compute[254900]: 2025-12-02 11:24:33.320 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:33 np0005542249 nova_compute[254900]: 2025-12-02 11:24:33.390 254904 DEBUG oslo_concurrency.lockutils [None req-a126e791-3c7b-49a2-b5a9-34e0fb94df54 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:24:33 np0005542249 nova_compute[254900]: 2025-12-02 11:24:33.431 254904 DEBUG nova.compute.manager [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 06:24:33 np0005542249 nova_compute[254900]: 2025-12-02 11:24:33.433 254904 DEBUG nova.virt.libvirt.driver [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 06:24:33 np0005542249 nova_compute[254900]: 2025-12-02 11:24:33.434 254904 INFO nova.virt.libvirt.driver [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Creating image(s)#033[00m
Dec  2 06:24:33 np0005542249 nova_compute[254900]: 2025-12-02 11:24:33.435 254904 DEBUG nova.virt.libvirt.driver [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  2 06:24:33 np0005542249 nova_compute[254900]: 2025-12-02 11:24:33.435 254904 DEBUG nova.virt.libvirt.driver [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Ensure instance console log exists: /var/lib/nova/instances/c8ef6338-699a-4f16-8f0b-39f2bb67ee45/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 06:24:33 np0005542249 nova_compute[254900]: 2025-12-02 11:24:33.437 254904 DEBUG oslo_concurrency.lockutils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:33 np0005542249 nova_compute[254900]: 2025-12-02 11:24:33.438 254904 DEBUG oslo_concurrency.lockutils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:33 np0005542249 nova_compute[254900]: 2025-12-02 11:24:33.438 254904 DEBUG oslo_concurrency.lockutils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:24:33 np0005542249 nova_compute[254900]: 2025-12-02 11:24:33.625 254904 DEBUG nova.network.neutron [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Successfully updated port: 677e9f6e-c8b4-450a-a8fa-47c99433fedf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 06:24:33 np0005542249 nova_compute[254900]: 2025-12-02 11:24:33.651 254904 DEBUG oslo_concurrency.lockutils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Acquiring lock "refresh_cache-c8ef6338-699a-4f16-8f0b-39f2bb67ee45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:24:33 np0005542249 nova_compute[254900]: 2025-12-02 11:24:33.652 254904 DEBUG oslo_concurrency.lockutils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Acquired lock "refresh_cache-c8ef6338-699a-4f16-8f0b-39f2bb67ee45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:24:33 np0005542249 nova_compute[254900]: 2025-12-02 11:24:33.652 254904 DEBUG nova.network.neutron [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 06:24:33 np0005542249 nova_compute[254900]: 2025-12-02 11:24:33.753 254904 DEBUG nova.compute.manager [req-a6877371-7a45-434b-be73-d24b3f964edb req-f4ee6abd-f904-459a-844e-03773780a436 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Received event network-changed-677e9f6e-c8b4-450a-a8fa-47c99433fedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:24:33 np0005542249 nova_compute[254900]: 2025-12-02 11:24:33.754 254904 DEBUG nova.compute.manager [req-a6877371-7a45-434b-be73-d24b3f964edb req-f4ee6abd-f904-459a-844e-03773780a436 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Refreshing instance network info cache due to event network-changed-677e9f6e-c8b4-450a-a8fa-47c99433fedf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:24:33 np0005542249 nova_compute[254900]: 2025-12-02 11:24:33.754 254904 DEBUG oslo_concurrency.lockutils [req-a6877371-7a45-434b-be73-d24b3f964edb req-f4ee6abd-f904-459a-844e-03773780a436 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-c8ef6338-699a-4f16-8f0b-39f2bb67ee45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:24:34 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1331: 321 pgs: 321 active+clean; 2.5 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 984 KiB/s rd, 3.0 MiB/s wr, 86 op/s
Dec  2 06:24:34 np0005542249 nova_compute[254900]: 2025-12-02 11:24:34.249 254904 DEBUG nova.network.neutron [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 06:24:35 np0005542249 nova_compute[254900]: 2025-12-02 11:24:35.561 254904 DEBUG nova.network.neutron [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Updating instance_info_cache with network_info: [{"id": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "address": "fa:16:3e:f5:ea:49", "network": {"id": "e8562a40-d165-4b23-84c8-7c8f664e882e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1724334586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674916d2c2d94b239e86c66b1f0af922", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677e9f6e-c8", "ovs_interfaceid": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:24:35 np0005542249 nova_compute[254900]: 2025-12-02 11:24:35.859 254904 DEBUG oslo_concurrency.lockutils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Releasing lock "refresh_cache-c8ef6338-699a-4f16-8f0b-39f2bb67ee45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:24:35 np0005542249 nova_compute[254900]: 2025-12-02 11:24:35.860 254904 DEBUG nova.compute.manager [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Instance network_info: |[{"id": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "address": "fa:16:3e:f5:ea:49", "network": {"id": "e8562a40-d165-4b23-84c8-7c8f664e882e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1724334586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674916d2c2d94b239e86c66b1f0af922", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677e9f6e-c8", "ovs_interfaceid": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 06:24:35 np0005542249 nova_compute[254900]: 2025-12-02 11:24:35.861 254904 DEBUG oslo_concurrency.lockutils [req-a6877371-7a45-434b-be73-d24b3f964edb req-f4ee6abd-f904-459a-844e-03773780a436 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-c8ef6338-699a-4f16-8f0b-39f2bb67ee45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:24:35 np0005542249 nova_compute[254900]: 2025-12-02 11:24:35.861 254904 DEBUG nova.network.neutron [req-a6877371-7a45-434b-be73-d24b3f964edb req-f4ee6abd-f904-459a-844e-03773780a436 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Refreshing network info cache for port 677e9f6e-c8b4-450a-a8fa-47c99433fedf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:24:35 np0005542249 nova_compute[254900]: 2025-12-02 11:24:35.865 254904 DEBUG nova.virt.libvirt.driver [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Start _get_guest_xml network_info=[{"id": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "address": "fa:16:3e:f5:ea:49", "network": {"id": "e8562a40-d165-4b23-84c8-7c8f664e882e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1724334586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674916d2c2d94b239e86c66b1f0af922", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677e9f6e-c8", "ovs_interfaceid": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-cb1b3701-a571-4a54-8162-4cc9b695c6f4', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'cb1b3701-a571-4a54-8162-4cc9b695c6f4', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'c8ef6338-699a-4f16-8f0b-39f2bb67ee45', 'attached_at': '', 'detached_at': '', 'volume_id': 'cb1b3701-a571-4a54-8162-4cc9b695c6f4', 'serial': 'cb1b3701-a571-4a54-8162-4cc9b695c6f4'}, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'attachment_id': 'c8552908-a5d0-4e40-9bea-69622a564e6e', 'delete_on_termination': False, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:24:35 np0005542249 nova_compute[254900]: 2025-12-02 11:24:35.871 254904 WARNING nova.virt.libvirt.driver [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:24:35 np0005542249 nova_compute[254900]: 2025-12-02 11:24:35.877 254904 DEBUG nova.virt.libvirt.host [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:24:35 np0005542249 nova_compute[254900]: 2025-12-02 11:24:35.877 254904 DEBUG nova.virt.libvirt.host [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:24:35 np0005542249 nova_compute[254900]: 2025-12-02 11:24:35.881 254904 DEBUG nova.virt.libvirt.host [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:24:35 np0005542249 nova_compute[254900]: 2025-12-02 11:24:35.882 254904 DEBUG nova.virt.libvirt.host [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:24:35 np0005542249 nova_compute[254900]: 2025-12-02 11:24:35.882 254904 DEBUG nova.virt.libvirt.driver [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:24:35 np0005542249 nova_compute[254900]: 2025-12-02 11:24:35.882 254904 DEBUG nova.virt.hardware [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:24:35 np0005542249 nova_compute[254900]: 2025-12-02 11:24:35.883 254904 DEBUG nova.virt.hardware [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:24:35 np0005542249 nova_compute[254900]: 2025-12-02 11:24:35.883 254904 DEBUG nova.virt.hardware [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:24:35 np0005542249 nova_compute[254900]: 2025-12-02 11:24:35.884 254904 DEBUG nova.virt.hardware [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:24:35 np0005542249 nova_compute[254900]: 2025-12-02 11:24:35.884 254904 DEBUG nova.virt.hardware [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:24:35 np0005542249 nova_compute[254900]: 2025-12-02 11:24:35.884 254904 DEBUG nova.virt.hardware [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:24:35 np0005542249 nova_compute[254900]: 2025-12-02 11:24:35.885 254904 DEBUG nova.virt.hardware [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:24:35 np0005542249 nova_compute[254900]: 2025-12-02 11:24:35.885 254904 DEBUG nova.virt.hardware [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:24:35 np0005542249 nova_compute[254900]: 2025-12-02 11:24:35.885 254904 DEBUG nova.virt.hardware [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:24:35 np0005542249 nova_compute[254900]: 2025-12-02 11:24:35.885 254904 DEBUG nova.virt.hardware [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:24:35 np0005542249 nova_compute[254900]: 2025-12-02 11:24:35.886 254904 DEBUG nova.virt.hardware [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:24:35 np0005542249 nova_compute[254900]: 2025-12-02 11:24:35.918 254904 DEBUG nova.storage.rbd_utils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] rbd image c8ef6338-699a-4f16-8f0b-39f2bb67ee45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:24:35 np0005542249 nova_compute[254900]: 2025-12-02 11:24:35.924 254904 DEBUG oslo_concurrency.processutils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:24:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:24:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:24:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:24:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:24:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011076865395259492 of space, bias 1.0, pg target 0.33230596185778477 quantized to 32 (current 32)
Dec  2 06:24:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:24:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.03513799946685551 of space, bias 1.0, pg target 10.541399840056652 quantized to 32 (current 32)
Dec  2 06:24:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:24:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0006921848546265546 of space, bias 1.0, pg target 0.20073360784170083 quantized to 32 (current 32)
Dec  2 06:24:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:24:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19309890746076708 quantized to 32 (current 32)
Dec  2 06:24:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:24:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0005901217685745913 quantized to 16 (current 32)
Dec  2 06:24:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:24:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:24:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:24:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.376522107182392e-05 quantized to 32 (current 32)
Dec  2 06:24:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:24:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006270043791105033 quantized to 32 (current 32)
Dec  2 06:24:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:24:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:24:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:24:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00014753044214364783 quantized to 32 (current 32)
Dec  2 06:24:36 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1332: 321 pgs: 321 active+clean; 2.5 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 524 KiB/s rd, 1.2 MiB/s wr, 67 op/s
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.182 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.413 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:24:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:24:36 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3067183311' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.442 254904 DEBUG oslo_concurrency.processutils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.584 254904 DEBUG nova.virt.libvirt.vif [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:24:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-281089066',display_name='tempest-TestVolumeBackupRestore-server-281089066',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-281089066',id=14,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPmzgD+QuHBI397ADK7R+dwgAgosB0Pxeuf5Epg8BSAVtmZeTahW9i6smNPnTbbnmlfUZAuk8gV1dS8px8NP9mt2RK8KUn93g8VdaLWmF6oVZZQJHwqVchLLycXOy7OqYQ==',key_name='tempest-TestVolumeBackupRestore-111150897',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674916d2c2d94b239e86c66b1f0af922',ramdisk_id='',reservation_id='r-0gsqleit',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-1579005259',owner_user_name='tempest-TestVolumeBackupRestore-1579005259-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:24:32Z,user_data=None,user_id='1e7562e263fa4d47ae69c1891e4b61ff',uuid=c8ef6338-699a-4f16-8f0b-39f2bb67ee45,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "address": "fa:16:3e:f5:ea:49", "network": {"id": "e8562a40-d165-4b23-84c8-7c8f664e882e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1724334586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"674916d2c2d94b239e86c66b1f0af922", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677e9f6e-c8", "ovs_interfaceid": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.585 254904 DEBUG nova.network.os_vif_util [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Converting VIF {"id": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "address": "fa:16:3e:f5:ea:49", "network": {"id": "e8562a40-d165-4b23-84c8-7c8f664e882e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1724334586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674916d2c2d94b239e86c66b1f0af922", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677e9f6e-c8", "ovs_interfaceid": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.587 254904 DEBUG nova.network.os_vif_util [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:ea:49,bridge_name='br-int',has_traffic_filtering=True,id=677e9f6e-c8b4-450a-a8fa-47c99433fedf,network=Network(e8562a40-d165-4b23-84c8-7c8f664e882e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap677e9f6e-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.590 254904 DEBUG nova.objects.instance [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Lazy-loading 'pci_devices' on Instance uuid c8ef6338-699a-4f16-8f0b-39f2bb67ee45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.608 254904 DEBUG nova.virt.libvirt.driver [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  <uuid>c8ef6338-699a-4f16-8f0b-39f2bb67ee45</uuid>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  <name>instance-0000000e</name>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <nova:name>tempest-TestVolumeBackupRestore-server-281089066</nova:name>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:24:35</nova:creationTime>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:24:36 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:        <nova:user uuid="1e7562e263fa4d47ae69c1891e4b61ff">tempest-TestVolumeBackupRestore-1579005259-project-member</nova:user>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:        <nova:project uuid="674916d2c2d94b239e86c66b1f0af922">tempest-TestVolumeBackupRestore-1579005259</nova:project>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:        <nova:port uuid="677e9f6e-c8b4-450a-a8fa-47c99433fedf">
Dec  2 06:24:36 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <entry name="serial">c8ef6338-699a-4f16-8f0b-39f2bb67ee45</entry>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <entry name="uuid">c8ef6338-699a-4f16-8f0b-39f2bb67ee45</entry>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/c8ef6338-699a-4f16-8f0b-39f2bb67ee45_disk.config">
Dec  2 06:24:36 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:24:36 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="volumes/volume-cb1b3701-a571-4a54-8162-4cc9b695c6f4">
Dec  2 06:24:36 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:24:36 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <serial>cb1b3701-a571-4a54-8162-4cc9b695c6f4</serial>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:f5:ea:49"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <target dev="tap677e9f6e-c8"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/c8ef6338-699a-4f16-8f0b-39f2bb67ee45/console.log" append="off"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:24:36 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:24:36 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:24:36 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.608 254904 DEBUG nova.compute.manager [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Preparing to wait for external event network-vif-plugged-677e9f6e-c8b4-450a-a8fa-47c99433fedf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.609 254904 DEBUG oslo_concurrency.lockutils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Acquiring lock "c8ef6338-699a-4f16-8f0b-39f2bb67ee45-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.609 254904 DEBUG oslo_concurrency.lockutils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Lock "c8ef6338-699a-4f16-8f0b-39f2bb67ee45-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.609 254904 DEBUG oslo_concurrency.lockutils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Lock "c8ef6338-699a-4f16-8f0b-39f2bb67ee45-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.610 254904 DEBUG nova.virt.libvirt.vif [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:24:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-281089066',display_name='tempest-TestVolumeBackupRestore-server-281089066',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-281089066',id=14,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPmzgD+QuHBI397ADK7R+dwgAgosB0Pxeuf5Epg8BSAVtmZeTahW9i6smNPnTbbnmlfUZAuk8gV1dS8px8NP9mt2RK8KUn93g8VdaLWmF6oVZZQJHwqVchLLycXOy7OqYQ==',key_name='tempest-TestVolumeBackupRestore-111150897',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674916d2c2d94b239e86c66b1f0af922',ramdisk_id='',reservation_id='r-0gsqleit',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-1579005259',owner_user_name='tempest-TestVolumeBackupRestore-1579005259-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:24:32Z,user_data=None,user_id='1e7562e263fa4d47ae69c1891e4b61ff',uuid=c8ef6338-699a-4f16-8f0b-39f2bb67ee45,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "address": "fa:16:3e:f5:ea:49", "network": {"id": "e8562a40-d165-4b23-84c8-7c8f664e882e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1724334586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"674916d2c2d94b239e86c66b1f0af922", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677e9f6e-c8", "ovs_interfaceid": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.610 254904 DEBUG nova.network.os_vif_util [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Converting VIF {"id": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "address": "fa:16:3e:f5:ea:49", "network": {"id": "e8562a40-d165-4b23-84c8-7c8f664e882e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1724334586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674916d2c2d94b239e86c66b1f0af922", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677e9f6e-c8", "ovs_interfaceid": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.611 254904 DEBUG nova.network.os_vif_util [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:ea:49,bridge_name='br-int',has_traffic_filtering=True,id=677e9f6e-c8b4-450a-a8fa-47c99433fedf,network=Network(e8562a40-d165-4b23-84c8-7c8f664e882e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap677e9f6e-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.612 254904 DEBUG os_vif [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:ea:49,bridge_name='br-int',has_traffic_filtering=True,id=677e9f6e-c8b4-450a-a8fa-47c99433fedf,network=Network(e8562a40-d165-4b23-84c8-7c8f664e882e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap677e9f6e-c8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.613 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.613 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.613 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.617 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.617 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap677e9f6e-c8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.618 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap677e9f6e-c8, col_values=(('external_ids', {'iface-id': '677e9f6e-c8b4-450a-a8fa-47c99433fedf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f5:ea:49', 'vm-uuid': 'c8ef6338-699a-4f16-8f0b-39f2bb67ee45'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.620 254904 DEBUG oslo_concurrency.lockutils [None req-a6f1dcf3-ba26-4303-94d3-d2937a3b26e4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Acquiring lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.620 254904 DEBUG oslo_concurrency.lockutils [None req-a6f1dcf3-ba26-4303-94d3-d2937a3b26e4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.632 254904 INFO nova.compute.manager [None req-a6f1dcf3-ba26-4303-94d3-d2937a3b26e4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Detaching volume 2fa3b583-c9ba-40ef-a4ef-c5bc4eb1601a#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.654 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:36 np0005542249 NetworkManager[48987]: <info>  [1764674676.6559] manager: (tap677e9f6e-c8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.657 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.666 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.668 254904 INFO os_vif [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:ea:49,bridge_name='br-int',has_traffic_filtering=True,id=677e9f6e-c8b4-450a-a8fa-47c99433fedf,network=Network(e8562a40-d165-4b23-84c8-7c8f664e882e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap677e9f6e-c8')#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.761 254904 DEBUG nova.virt.libvirt.driver [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.761 254904 DEBUG nova.virt.libvirt.driver [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.761 254904 DEBUG nova.virt.libvirt.driver [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] No VIF found with MAC fa:16:3e:f5:ea:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.762 254904 INFO nova.virt.libvirt.driver [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Using config drive#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.794 254904 DEBUG nova.storage.rbd_utils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] rbd image c8ef6338-699a-4f16-8f0b-39f2bb67ee45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.808 254904 INFO nova.virt.block_device [None req-a6f1dcf3-ba26-4303-94d3-d2937a3b26e4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Attempting to driver detach volume 2fa3b583-c9ba-40ef-a4ef-c5bc4eb1601a from mountpoint /dev/vdb#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.811 254904 DEBUG oslo_concurrency.lockutils [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Acquiring lock "2c9c5605-aec2-4e63-a5b5-49c57c3e9e17" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.812 254904 DEBUG oslo_concurrency.lockutils [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Lock "2c9c5605-aec2-4e63-a5b5-49c57c3e9e17" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.812 254904 DEBUG oslo_concurrency.lockutils [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Acquiring lock "2c9c5605-aec2-4e63-a5b5-49c57c3e9e17-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.813 254904 DEBUG oslo_concurrency.lockutils [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Lock "2c9c5605-aec2-4e63-a5b5-49c57c3e9e17-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.813 254904 DEBUG oslo_concurrency.lockutils [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Lock "2c9c5605-aec2-4e63-a5b5-49c57c3e9e17-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.815 254904 INFO nova.compute.manager [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Terminating instance#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.817 254904 DEBUG nova.compute.manager [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:24:36 np0005542249 podman[280122]: 2025-12-02 11:24:36.838682636 +0000 UTC m=+0.108039371 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.859 254904 DEBUG nova.virt.libvirt.driver [None req-a6f1dcf3-ba26-4303-94d3-d2937a3b26e4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Attempting to detach device vdb from instance 7d55326f-52eb-4f7f-a9cd-05282ca6ca20 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.860 254904 DEBUG nova.virt.libvirt.guest [None req-a6f1dcf3-ba26-4303-94d3-d2937a3b26e4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] detach device xml: <disk type="network" device="disk">
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-2fa3b583-c9ba-40ef-a4ef-c5bc4eb1601a">
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  <serial>2fa3b583-c9ba-40ef-a4ef-c5bc4eb1601a</serial>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:24:36 np0005542249 nova_compute[254900]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.870 254904 INFO nova.virt.libvirt.driver [None req-a6f1dcf3-ba26-4303-94d3-d2937a3b26e4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Successfully detached device vdb from instance 7d55326f-52eb-4f7f-a9cd-05282ca6ca20 from the persistent domain config.#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.870 254904 DEBUG nova.virt.libvirt.driver [None req-a6f1dcf3-ba26-4303-94d3-d2937a3b26e4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 7d55326f-52eb-4f7f-a9cd-05282ca6ca20 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.871 254904 DEBUG nova.virt.libvirt.guest [None req-a6f1dcf3-ba26-4303-94d3-d2937a3b26e4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] detach device xml: <disk type="network" device="disk">
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-2fa3b583-c9ba-40ef-a4ef-c5bc4eb1601a">
Dec  2 06:24:36 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  <serial>2fa3b583-c9ba-40ef-a4ef-c5bc4eb1601a</serial>
Dec  2 06:24:36 np0005542249 nova_compute[254900]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  2 06:24:36 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:24:36 np0005542249 nova_compute[254900]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  2 06:24:36 np0005542249 kernel: taped87e718-f3 (unregistering): left promiscuous mode
Dec  2 06:24:36 np0005542249 NetworkManager[48987]: <info>  [1764674676.9067] device (taped87e718-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.928 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:36 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:36Z|00137|binding|INFO|Releasing lport ed87e718-f390-41da-bc93-530936610716 from this chassis (sb_readonly=0)
Dec  2 06:24:36 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:36Z|00138|binding|INFO|Setting lport ed87e718-f390-41da-bc93-530936610716 down in Southbound
Dec  2 06:24:36 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:36Z|00139|binding|INFO|Removing iface taped87e718-f3 ovn-installed in OVS
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.933 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:36.939 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:b1:91 10.100.0.11'], port_security=['fa:16:3e:25:b1:91 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '2c9c5605-aec2-4e63-a5b5-49c57c3e9e17', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3026dba-17e4-448f-b1b1-951c4cf69278', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e06121cb1a114bf997558a008929f199', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd625b3d1-36fb-4d71-9562-8a233f7374db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f3395d5a-df7b-4d1a-a507-470df61cdd58, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=ed87e718-f390-41da-bc93-530936610716) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:24:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:36.941 163757 INFO neutron.agent.ovn.metadata.agent [-] Port ed87e718-f390-41da-bc93-530936610716 in datapath d3026dba-17e4-448f-b1b1-951c4cf69278 unbound from our chassis#033[00m
Dec  2 06:24:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:36.944 163757 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d3026dba-17e4-448f-b1b1-951c4cf69278, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 06:24:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:36.947 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[b0fdab0f-5969-4beb-9e7c-2c3af1ca334b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:36.948 163757 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d3026dba-17e4-448f-b1b1-951c4cf69278 namespace which is not needed anymore#033[00m
Dec  2 06:24:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:24:36 np0005542249 nova_compute[254900]: 2025-12-02 11:24:36.963 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:36 np0005542249 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Dec  2 06:24:36 np0005542249 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 15.649s CPU time.
Dec  2 06:24:36 np0005542249 systemd-machined[216222]: Machine qemu-12-instance-0000000c terminated.
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.014 254904 DEBUG nova.virt.libvirt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Received event <DeviceRemovedEvent: 1764674677.0143445, 7d55326f-52eb-4f7f-a9cd-05282ca6ca20 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.017 254904 DEBUG nova.virt.libvirt.driver [None req-a6f1dcf3-ba26-4303-94d3-d2937a3b26e4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 7d55326f-52eb-4f7f-a9cd-05282ca6ca20 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.020 254904 INFO nova.virt.libvirt.driver [None req-a6f1dcf3-ba26-4303-94d3-d2937a3b26e4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Successfully detached device vdb from instance 7d55326f-52eb-4f7f-a9cd-05282ca6ca20 from the live domain config.#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.063 254904 INFO nova.virt.libvirt.driver [-] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Instance destroyed successfully.#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.064 254904 DEBUG nova.objects.instance [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Lazy-loading 'resources' on Instance uuid 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.088 254904 DEBUG nova.virt.libvirt.vif [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:23:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-instance-767987115',display_name='tempest-instance-767987115',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-767987115',id=12,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWGLTuvFZjkL4j1zSwZMrly3chy8nNhv7htjYEv0G4fMU266qOmBBXChmqhhQQLxUCXGceS9XI0wgfnx+yPPJFk7UfC/I5jgSmHwrgK4d3M0QxwS1FpcG3w3fuR+l3Vrg==',key_name='tempest-keypair-1194168157',keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:24:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e06121cb1a114bf997558a008929f199',ramdisk_id='',reservation_id='r-4d40kbj8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bu
s='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-575462623',owner_user_name='tempest-VolumesBackupsTest-575462623-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:24:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='409e3a0b9ed441f2bb32f5f1fd0bb00a',uuid=2c9c5605-aec2-4e63-a5b5-49c57c3e9e17,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ed87e718-f390-41da-bc93-530936610716", "address": "fa:16:3e:25:b1:91", "network": {"id": "d3026dba-17e4-448f-b1b1-951c4cf69278", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-452915621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e06121cb1a114bf997558a008929f199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped87e718-f3", "ovs_interfaceid": "ed87e718-f390-41da-bc93-530936610716", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.089 254904 DEBUG nova.network.os_vif_util [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Converting VIF {"id": "ed87e718-f390-41da-bc93-530936610716", "address": "fa:16:3e:25:b1:91", "network": {"id": "d3026dba-17e4-448f-b1b1-951c4cf69278", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-452915621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e06121cb1a114bf997558a008929f199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped87e718-f3", "ovs_interfaceid": "ed87e718-f390-41da-bc93-530936610716", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.090 254904 DEBUG nova.network.os_vif_util [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:25:b1:91,bridge_name='br-int',has_traffic_filtering=True,id=ed87e718-f390-41da-bc93-530936610716,network=Network(d3026dba-17e4-448f-b1b1-951c4cf69278),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped87e718-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.091 254904 DEBUG os_vif [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:25:b1:91,bridge_name='br-int',has_traffic_filtering=True,id=ed87e718-f390-41da-bc93-530936610716,network=Network(d3026dba-17e4-448f-b1b1-951c4cf69278),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped87e718-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.094 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.094 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped87e718-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.098 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.101 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.103 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.105 254904 INFO os_vif [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:25:b1:91,bridge_name='br-int',has_traffic_filtering=True,id=ed87e718-f390-41da-bc93-530936610716,network=Network(d3026dba-17e4-448f-b1b1-951c4cf69278),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped87e718-f3')#033[00m
Dec  2 06:24:37 np0005542249 neutron-haproxy-ovnmeta-d3026dba-17e4-448f-b1b1-951c4cf69278[278599]: [NOTICE]   (278603) : haproxy version is 2.8.14-c23fe91
Dec  2 06:24:37 np0005542249 neutron-haproxy-ovnmeta-d3026dba-17e4-448f-b1b1-951c4cf69278[278599]: [NOTICE]   (278603) : path to executable is /usr/sbin/haproxy
Dec  2 06:24:37 np0005542249 neutron-haproxy-ovnmeta-d3026dba-17e4-448f-b1b1-951c4cf69278[278599]: [WARNING]  (278603) : Exiting Master process...
Dec  2 06:24:37 np0005542249 neutron-haproxy-ovnmeta-d3026dba-17e4-448f-b1b1-951c4cf69278[278599]: [ALERT]    (278603) : Current worker (278605) exited with code 143 (Terminated)
Dec  2 06:24:37 np0005542249 neutron-haproxy-ovnmeta-d3026dba-17e4-448f-b1b1-951c4cf69278[278599]: [WARNING]  (278603) : All workers exited. Exiting... (0)
Dec  2 06:24:37 np0005542249 systemd[1]: libpod-03ae92d11b1b104f82d19adfb61789e1931e07c1b948985b503447f506622a21.scope: Deactivated successfully.
Dec  2 06:24:37 np0005542249 conmon[278599]: conmon 03ae92d11b1b104f82d1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-03ae92d11b1b104f82d19adfb61789e1931e07c1b948985b503447f506622a21.scope/container/memory.events
Dec  2 06:24:37 np0005542249 podman[280197]: 2025-12-02 11:24:37.165801075 +0000 UTC m=+0.083524288 container died 03ae92d11b1b104f82d19adfb61789e1931e07c1b948985b503447f506622a21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3026dba-17e4-448f-b1b1-951c4cf69278, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:24:37 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-03ae92d11b1b104f82d19adfb61789e1931e07c1b948985b503447f506622a21-userdata-shm.mount: Deactivated successfully.
Dec  2 06:24:37 np0005542249 systemd[1]: var-lib-containers-storage-overlay-0f2726dcb5caaedaa478d8c59ec0ffd82374d3fcd5a92aade4a45a953795e5ff-merged.mount: Deactivated successfully.
Dec  2 06:24:37 np0005542249 podman[280197]: 2025-12-02 11:24:37.210224236 +0000 UTC m=+0.127947439 container cleanup 03ae92d11b1b104f82d19adfb61789e1931e07c1b948985b503447f506622a21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3026dba-17e4-448f-b1b1-951c4cf69278, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  2 06:24:37 np0005542249 systemd[1]: libpod-conmon-03ae92d11b1b104f82d19adfb61789e1931e07c1b948985b503447f506622a21.scope: Deactivated successfully.
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.271 254904 DEBUG nova.objects.instance [None req-a6f1dcf3-ba26-4303-94d3-d2937a3b26e4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lazy-loading 'flavor' on Instance uuid 7d55326f-52eb-4f7f-a9cd-05282ca6ca20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.297 254904 INFO nova.virt.libvirt.driver [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Creating config drive at /var/lib/nova/instances/c8ef6338-699a-4f16-8f0b-39f2bb67ee45/disk.config#033[00m
Dec  2 06:24:37 np0005542249 podman[280249]: 2025-12-02 11:24:37.302941721 +0000 UTC m=+0.061116673 container remove 03ae92d11b1b104f82d19adfb61789e1931e07c1b948985b503447f506622a21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3026dba-17e4-448f-b1b1-951c4cf69278, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec  2 06:24:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:37.310 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[6aa89117-7b29-4c06-8aae-5bc111bbb252]: (4, ('Tue Dec  2 11:24:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d3026dba-17e4-448f-b1b1-951c4cf69278 (03ae92d11b1b104f82d19adfb61789e1931e07c1b948985b503447f506622a21)\n03ae92d11b1b104f82d19adfb61789e1931e07c1b948985b503447f506622a21\nTue Dec  2 11:24:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d3026dba-17e4-448f-b1b1-951c4cf69278 (03ae92d11b1b104f82d19adfb61789e1931e07c1b948985b503447f506622a21)\n03ae92d11b1b104f82d19adfb61789e1931e07c1b948985b503447f506622a21\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.313 254904 DEBUG oslo_concurrency.processutils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c8ef6338-699a-4f16-8f0b-39f2bb67ee45/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp3966xmd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:24:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:37.313 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[87100761-b5c2-4ad2-af1c-b6ee82d25e4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:37.315 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3026dba-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:24:37 np0005542249 kernel: tapd3026dba-10: left promiscuous mode
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.349 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.355 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.356 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:37.359 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[44f71a95-b8f3-4ac1-9c51-000bf7a81a70]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:37.382 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[513c1257-2ead-4d67-bacb-7e9a58d487fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:37.384 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[4f51aabe-befc-4cb5-8fd3-9e6742e3cd96]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.395 254904 DEBUG nova.compute.manager [req-1b969101-c411-4788-a099-fc63ef773904 req-6ed4c683-176a-4583-9b45-645589e60366 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Received event network-vif-unplugged-ed87e718-f390-41da-bc93-530936610716 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.395 254904 DEBUG oslo_concurrency.lockutils [req-1b969101-c411-4788-a099-fc63ef773904 req-6ed4c683-176a-4583-9b45-645589e60366 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "2c9c5605-aec2-4e63-a5b5-49c57c3e9e17-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.396 254904 DEBUG oslo_concurrency.lockutils [req-1b969101-c411-4788-a099-fc63ef773904 req-6ed4c683-176a-4583-9b45-645589e60366 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "2c9c5605-aec2-4e63-a5b5-49c57c3e9e17-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.396 254904 DEBUG oslo_concurrency.lockutils [req-1b969101-c411-4788-a099-fc63ef773904 req-6ed4c683-176a-4583-9b45-645589e60366 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "2c9c5605-aec2-4e63-a5b5-49c57c3e9e17-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.396 254904 DEBUG nova.compute.manager [req-1b969101-c411-4788-a099-fc63ef773904 req-6ed4c683-176a-4583-9b45-645589e60366 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] No waiting events found dispatching network-vif-unplugged-ed87e718-f390-41da-bc93-530936610716 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.396 254904 DEBUG nova.compute.manager [req-1b969101-c411-4788-a099-fc63ef773904 req-6ed4c683-176a-4583-9b45-645589e60366 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Received event network-vif-unplugged-ed87e718-f390-41da-bc93-530936610716 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 06:24:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:37.413 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[514d738b-e245-4ead-aadc-033ddf8a667d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 483154, 'reachable_time': 29987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280269, 'error': None, 'target': 'ovnmeta-d3026dba-17e4-448f-b1b1-951c4cf69278', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.415 254904 DEBUG oslo_concurrency.lockutils [None req-a6f1dcf3-ba26-4303-94d3-d2937a3b26e4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:24:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:37.417 164036 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d3026dba-17e4-448f-b1b1-951c4cf69278 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 06:24:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:37.417 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[f3b007bb-c7ef-43a5-af08-3d79d8bf2e80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:37 np0005542249 systemd[1]: run-netns-ovnmeta\x2dd3026dba\x2d17e4\x2d448f\x2db1b1\x2d951c4cf69278.mount: Deactivated successfully.
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.462 254904 DEBUG oslo_concurrency.processutils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c8ef6338-699a-4f16-8f0b-39f2bb67ee45/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp3966xmd" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.505 254904 DEBUG nova.storage.rbd_utils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] rbd image c8ef6338-699a-4f16-8f0b-39f2bb67ee45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.511 254904 DEBUG oslo_concurrency.processutils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c8ef6338-699a-4f16-8f0b-39f2bb67ee45/disk.config c8ef6338-699a-4f16-8f0b-39f2bb67ee45_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.557 254904 INFO nova.virt.libvirt.driver [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Deleting instance files /var/lib/nova/instances/2c9c5605-aec2-4e63-a5b5-49c57c3e9e17_del#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.560 254904 INFO nova.virt.libvirt.driver [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Deletion of /var/lib/nova/instances/2c9c5605-aec2-4e63-a5b5-49c57c3e9e17_del complete#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.637 254904 INFO nova.compute.manager [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.638 254904 DEBUG oslo.service.loopingcall [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.638 254904 DEBUG nova.compute.manager [-] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.639 254904 DEBUG nova.network.neutron [-] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.706 254904 DEBUG oslo_concurrency.processutils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c8ef6338-699a-4f16-8f0b-39f2bb67ee45/disk.config c8ef6338-699a-4f16-8f0b-39f2bb67ee45_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.707 254904 INFO nova.virt.libvirt.driver [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Deleting local config drive /var/lib/nova/instances/c8ef6338-699a-4f16-8f0b-39f2bb67ee45/disk.config because it was imported into RBD.#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.755 254904 DEBUG nova.network.neutron [req-a6877371-7a45-434b-be73-d24b3f964edb req-f4ee6abd-f904-459a-844e-03773780a436 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Updated VIF entry in instance network info cache for port 677e9f6e-c8b4-450a-a8fa-47c99433fedf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.755 254904 DEBUG nova.network.neutron [req-a6877371-7a45-434b-be73-d24b3f964edb req-f4ee6abd-f904-459a-844e-03773780a436 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Updating instance_info_cache with network_info: [{"id": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "address": "fa:16:3e:f5:ea:49", "network": {"id": "e8562a40-d165-4b23-84c8-7c8f664e882e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1724334586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674916d2c2d94b239e86c66b1f0af922", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677e9f6e-c8", "ovs_interfaceid": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.780 254904 DEBUG oslo_concurrency.lockutils [req-a6877371-7a45-434b-be73-d24b3f964edb req-f4ee6abd-f904-459a-844e-03773780a436 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-c8ef6338-699a-4f16-8f0b-39f2bb67ee45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:24:37 np0005542249 kernel: tap677e9f6e-c8: entered promiscuous mode
Dec  2 06:24:37 np0005542249 systemd-udevd[280164]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:24:37 np0005542249 NetworkManager[48987]: <info>  [1764674677.7918] manager: (tap677e9f6e-c8): new Tun device (/org/freedesktop/NetworkManager/Devices/83)
Dec  2 06:24:37 np0005542249 NetworkManager[48987]: <info>  [1764674677.8160] device (tap677e9f6e-c8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:24:37 np0005542249 NetworkManager[48987]: <info>  [1764674677.8173] device (tap677e9f6e-c8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.834 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:37 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:37Z|00140|binding|INFO|Claiming lport 677e9f6e-c8b4-450a-a8fa-47c99433fedf for this chassis.
Dec  2 06:24:37 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:37Z|00141|binding|INFO|677e9f6e-c8b4-450a-a8fa-47c99433fedf: Claiming fa:16:3e:f5:ea:49 10.100.0.4
Dec  2 06:24:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:37.844 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:ea:49 10.100.0.4'], port_security=['fa:16:3e:f5:ea:49 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c8ef6338-699a-4f16-8f0b-39f2bb67ee45', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e8562a40-d165-4b23-84c8-7c8f664e882e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674916d2c2d94b239e86c66b1f0af922', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'faee16a0-fea4-495b-80d5-1feb61394d54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c8c90f5e-f1cc-4d3e-8b23-74c26cb1484e, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=677e9f6e-c8b4-450a-a8fa-47c99433fedf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:24:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:37.847 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 677e9f6e-c8b4-450a-a8fa-47c99433fedf in datapath e8562a40-d165-4b23-84c8-7c8f664e882e bound to our chassis#033[00m
Dec  2 06:24:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:37.850 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e8562a40-d165-4b23-84c8-7c8f664e882e#033[00m
Dec  2 06:24:37 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:37Z|00142|binding|INFO|Setting lport 677e9f6e-c8b4-450a-a8fa-47c99433fedf ovn-installed in OVS
Dec  2 06:24:37 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:37Z|00143|binding|INFO|Setting lport 677e9f6e-c8b4-450a-a8fa-47c99433fedf up in Southbound
Dec  2 06:24:37 np0005542249 systemd-machined[216222]: New machine qemu-14-instance-0000000e.
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.868 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:37.866 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[ae6b883d-eaba-48aa-afe0-df31a5210917]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:37.869 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape8562a40-d1 in ovnmeta-e8562a40-d165-4b23-84c8-7c8f664e882e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 06:24:37 np0005542249 nova_compute[254900]: 2025-12-02 11:24:37.871 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:37.873 262398 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape8562a40-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 06:24:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:37.874 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[e445cf5e-d3f5-47bc-9680-f881aff701ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:37.875 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[6d205370-885a-460c-bf16-43c194ea4aa2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:37 np0005542249 systemd[1]: Started Virtual Machine qemu-14-instance-0000000e.
Dec  2 06:24:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:37.902 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[059f0775-2228-4ed1-b12d-28e2725381dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:37.939 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[e847c437-fd6b-4050-8748-228b92c64739]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:37.979 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[0cc218ca-6c1e-4033-b47b-dbcebeeb3708]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:37 np0005542249 NetworkManager[48987]: <info>  [1764674677.9898] manager: (tape8562a40-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/84)
Dec  2 06:24:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:37.988 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[3f260967-50f1-4cb5-87b1-71302ed273ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:38.045 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[38a1124d-d083-4ddb-8387-9ecb1ab89ab7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:38.052 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[5cde3866-066a-4ec6-acae-9087a26063e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:38 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1333: 321 pgs: 321 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 185 KiB/s rd, 373 KiB/s wr, 33 op/s
Dec  2 06:24:38 np0005542249 NetworkManager[48987]: <info>  [1764674678.0951] device (tape8562a40-d0): carrier: link connected
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:38.105 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[09ae74ae-95f8-4b26-9364-6220d89e8e9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:38.138 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[321c2da5-e383-45e0-87ac-f4b5fa29fa0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape8562a40-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:45:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487091, 'reachable_time': 24839, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280351, 'error': None, 'target': 'ovnmeta-e8562a40-d165-4b23-84c8-7c8f664e882e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:38.162 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e691d1-42fa-4634-a4a4-95c10c3b5607]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe40:45e9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 487091, 'tstamp': 487091}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280352, 'error': None, 'target': 'ovnmeta-e8562a40-d165-4b23-84c8-7c8f664e882e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:38.187 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[e34a6a8b-9d00-4ef8-b247-b332e8b959dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape8562a40-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:45:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487091, 'reachable_time': 24839, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 280353, 'error': None, 'target': 'ovnmeta-e8562a40-d165-4b23-84c8-7c8f664e882e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.200 254904 DEBUG nova.compute.manager [req-81022d83-293f-414f-8df9-139272b3799c req-95e84d47-33be-4a70-81a7-cfb02c9021d0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Received event network-vif-plugged-677e9f6e-c8b4-450a-a8fa-47c99433fedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.201 254904 DEBUG oslo_concurrency.lockutils [req-81022d83-293f-414f-8df9-139272b3799c req-95e84d47-33be-4a70-81a7-cfb02c9021d0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "c8ef6338-699a-4f16-8f0b-39f2bb67ee45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.201 254904 DEBUG oslo_concurrency.lockutils [req-81022d83-293f-414f-8df9-139272b3799c req-95e84d47-33be-4a70-81a7-cfb02c9021d0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "c8ef6338-699a-4f16-8f0b-39f2bb67ee45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.201 254904 DEBUG oslo_concurrency.lockutils [req-81022d83-293f-414f-8df9-139272b3799c req-95e84d47-33be-4a70-81a7-cfb02c9021d0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "c8ef6338-699a-4f16-8f0b-39f2bb67ee45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.201 254904 DEBUG nova.compute.manager [req-81022d83-293f-414f-8df9-139272b3799c req-95e84d47-33be-4a70-81a7-cfb02c9021d0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Processing event network-vif-plugged-677e9f6e-c8b4-450a-a8fa-47c99433fedf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:38.227 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[77cc5af7-1699-4294-82c7-3edd3a17a6d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.323 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:38.342 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[2f76f5c2-f442-471e-ae7b-30014975980d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:38.344 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8562a40-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:38.344 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:38.345 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape8562a40-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:24:38 np0005542249 NetworkManager[48987]: <info>  [1764674678.3496] manager: (tape8562a40-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Dec  2 06:24:38 np0005542249 kernel: tape8562a40-d0: entered promiscuous mode
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.348 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:38.353 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape8562a40-d0, col_values=(('external_ids', {'iface-id': '7778fba4-c484-4957-be60-d6154255df3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:24:38 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:38Z|00144|binding|INFO|Releasing lport 7778fba4-c484-4957-be60-d6154255df3b from this chassis (sb_readonly=0)
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.355 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:24:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.383 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.383 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.386 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:38.387 163757 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e8562a40-d165-4b23-84c8-7c8f664e882e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e8562a40-d165-4b23-84c8-7c8f664e882e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:38.388 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[403f25ed-0465-40c4-a4a9-75090c839a96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:38.389 163757 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]: global
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]:    log         /dev/log local0 debug
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]:    log-tag     haproxy-metadata-proxy-e8562a40-d165-4b23-84c8-7c8f664e882e
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]:    user        root
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]:    group       root
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]:    maxconn     1024
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]:    pidfile     /var/lib/neutron/external/pids/e8562a40-d165-4b23-84c8-7c8f664e882e.pid.haproxy
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]:    daemon
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]: defaults
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]:    log global
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]:    mode http
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]:    option httplog
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]:    option dontlognull
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]:    option http-server-close
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]:    option forwardfor
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]:    retries                 3
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]:    timeout http-request    30s
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]:    timeout connect         30s
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]:    timeout client          32s
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]:    timeout server          32s
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]:    timeout http-keep-alive 30s
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]: listen listener
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]:    bind 169.254.169.254:80
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]:    http-request add-header X-OVN-Network-ID e8562a40-d165-4b23-84c8-7c8f664e882e
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 06:24:38 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:38.390 163757 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e8562a40-d165-4b23-84c8-7c8f664e882e', 'env', 'PROCESS_TAG=haproxy-e8562a40-d165-4b23-84c8-7c8f664e882e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e8562a40-d165-4b23-84c8-7c8f664e882e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 06:24:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Dec  2 06:24:38 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.413 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.413 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.529 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674678.5289297, c8ef6338-699a-4f16-8f0b-39f2bb67ee45 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.530 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] VM Started (Lifecycle Event)#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.532 254904 DEBUG nova.compute.manager [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.540 254904 DEBUG nova.virt.libvirt.driver [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.546 254904 INFO nova.virt.libvirt.driver [-] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Instance spawned successfully.#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.547 254904 DEBUG nova.virt.libvirt.driver [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.558 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.561 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "refresh_cache-7d55326f-52eb-4f7f-a9cd-05282ca6ca20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.561 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquired lock "refresh_cache-7d55326f-52eb-4f7f-a9cd-05282ca6ca20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.562 254904 DEBUG nova.network.neutron [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.562 254904 DEBUG nova.objects.instance [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7d55326f-52eb-4f7f-a9cd-05282ca6ca20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.573 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.581 254904 DEBUG nova.virt.libvirt.driver [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.582 254904 DEBUG nova.virt.libvirt.driver [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.583 254904 DEBUG nova.virt.libvirt.driver [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.584 254904 DEBUG nova.virt.libvirt.driver [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.585 254904 DEBUG nova.virt.libvirt.driver [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.586 254904 DEBUG nova.virt.libvirt.driver [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.596 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.597 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674678.5302227, c8ef6338-699a-4f16-8f0b-39f2bb67ee45 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.598 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] VM Paused (Lifecycle Event)#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.643 254904 INFO nova.compute.manager [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Took 5.21 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.645 254904 DEBUG nova.compute.manager [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.685 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.686 254904 DEBUG nova.network.neutron [-] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.692 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674678.5365584, c8ef6338-699a-4f16-8f0b-39f2bb67ee45 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.692 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] VM Resumed (Lifecycle Event)#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.716 254904 INFO nova.compute.manager [-] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Took 1.08 seconds to deallocate network for instance.#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.721 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.731 254904 INFO nova.compute.manager [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Took 7.74 seconds to build instance.#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.736 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.761 254904 DEBUG oslo_concurrency.lockutils [None req-9471a56a-47f7-4039-a710-6d360804f966 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Lock "c8ef6338-699a-4f16-8f0b-39f2bb67ee45" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:24:38 np0005542249 podman[280427]: 2025-12-02 11:24:38.872619486 +0000 UTC m=+0.077986528 container create 707d4db4a000ebdb59448ebd131dc084e27367fc55debcab4a62690513370678 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8562a40-d165-4b23-84c8-7c8f664e882e, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.910 254904 INFO nova.compute.manager [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Took 0.19 seconds to detach 1 volumes for instance.#033[00m
Dec  2 06:24:38 np0005542249 systemd[1]: Started libpod-conmon-707d4db4a000ebdb59448ebd131dc084e27367fc55debcab4a62690513370678.scope.
Dec  2 06:24:38 np0005542249 podman[280427]: 2025-12-02 11:24:38.834159196 +0000 UTC m=+0.039526248 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:24:38 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.961 254904 DEBUG oslo_concurrency.lockutils [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:38 np0005542249 nova_compute[254900]: 2025-12-02 11:24:38.962 254904 DEBUG oslo_concurrency.lockutils [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:38 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df3cbd203ff7be9459bd9ff09ceb108e87360f20e22cafdf099ad4ee0830e36c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 06:24:38 np0005542249 podman[280427]: 2025-12-02 11:24:38.986330389 +0000 UTC m=+0.191697501 container init 707d4db4a000ebdb59448ebd131dc084e27367fc55debcab4a62690513370678 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8562a40-d165-4b23-84c8-7c8f664e882e, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec  2 06:24:38 np0005542249 podman[280427]: 2025-12-02 11:24:38.99193125 +0000 UTC m=+0.197298302 container start 707d4db4a000ebdb59448ebd131dc084e27367fc55debcab4a62690513370678 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8562a40-d165-4b23-84c8-7c8f664e882e, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  2 06:24:39 np0005542249 neutron-haproxy-ovnmeta-e8562a40-d165-4b23-84c8-7c8f664e882e[280442]: [NOTICE]   (280446) : New worker (280448) forked
Dec  2 06:24:39 np0005542249 neutron-haproxy-ovnmeta-e8562a40-d165-4b23-84c8-7c8f664e882e[280442]: [NOTICE]   (280446) : Loading success.
Dec  2 06:24:39 np0005542249 nova_compute[254900]: 2025-12-02 11:24:39.317 254904 DEBUG oslo_concurrency.processutils [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:24:39 np0005542249 nova_compute[254900]: 2025-12-02 11:24:39.477 254904 DEBUG nova.compute.manager [req-188e1865-a50b-424d-be2d-3aba27efde86 req-a231f32f-c1c4-458e-844f-2bef8f2028cf 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Received event network-vif-plugged-ed87e718-f390-41da-bc93-530936610716 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:24:39 np0005542249 nova_compute[254900]: 2025-12-02 11:24:39.480 254904 DEBUG oslo_concurrency.lockutils [req-188e1865-a50b-424d-be2d-3aba27efde86 req-a231f32f-c1c4-458e-844f-2bef8f2028cf 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "2c9c5605-aec2-4e63-a5b5-49c57c3e9e17-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:39 np0005542249 nova_compute[254900]: 2025-12-02 11:24:39.481 254904 DEBUG oslo_concurrency.lockutils [req-188e1865-a50b-424d-be2d-3aba27efde86 req-a231f32f-c1c4-458e-844f-2bef8f2028cf 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "2c9c5605-aec2-4e63-a5b5-49c57c3e9e17-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:39 np0005542249 nova_compute[254900]: 2025-12-02 11:24:39.482 254904 DEBUG oslo_concurrency.lockutils [req-188e1865-a50b-424d-be2d-3aba27efde86 req-a231f32f-c1c4-458e-844f-2bef8f2028cf 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "2c9c5605-aec2-4e63-a5b5-49c57c3e9e17-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:24:39 np0005542249 nova_compute[254900]: 2025-12-02 11:24:39.483 254904 DEBUG nova.compute.manager [req-188e1865-a50b-424d-be2d-3aba27efde86 req-a231f32f-c1c4-458e-844f-2bef8f2028cf 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] No waiting events found dispatching network-vif-plugged-ed87e718-f390-41da-bc93-530936610716 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:24:39 np0005542249 nova_compute[254900]: 2025-12-02 11:24:39.484 254904 WARNING nova.compute.manager [req-188e1865-a50b-424d-be2d-3aba27efde86 req-a231f32f-c1c4-458e-844f-2bef8f2028cf 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Received unexpected event network-vif-plugged-ed87e718-f390-41da-bc93-530936610716 for instance with vm_state deleted and task_state None.#033[00m
Dec  2 06:24:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:24:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1807402928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:24:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:24:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/404711419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:24:39 np0005542249 nova_compute[254900]: 2025-12-02 11:24:39.809 254904 DEBUG oslo_concurrency.processutils [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:24:39 np0005542249 nova_compute[254900]: 2025-12-02 11:24:39.818 254904 DEBUG nova.compute.provider_tree [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:24:39 np0005542249 nova_compute[254900]: 2025-12-02 11:24:39.842 254904 DEBUG nova.scheduler.client.report [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:24:39 np0005542249 nova_compute[254900]: 2025-12-02 11:24:39.876 254904 DEBUG oslo_concurrency.lockutils [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:24:39 np0005542249 nova_compute[254900]: 2025-12-02 11:24:39.917 254904 INFO nova.scheduler.client.report [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Deleted allocations for instance 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17#033[00m
Dec  2 06:24:40 np0005542249 nova_compute[254900]: 2025-12-02 11:24:40.047 254904 DEBUG nova.compute.manager [None req-613f74b4-53ef-4184-aab8-3a5f3e66c2ff b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:24:40 np0005542249 nova_compute[254900]: 2025-12-02 11:24:40.053 254904 DEBUG oslo_concurrency.lockutils [None req-b1e2715c-fcdb-4fd1-8fb6-e37d0715824f 409e3a0b9ed441f2bb32f5f1fd0bb00a e06121cb1a114bf997558a008929f199 - - default default] Lock "2c9c5605-aec2-4e63-a5b5-49c57c3e9e17" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:24:40 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1335: 321 pgs: 321 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 32 KiB/s rd, 249 KiB/s wr, 44 op/s
Dec  2 06:24:40 np0005542249 nova_compute[254900]: 2025-12-02 11:24:40.110 254904 INFO nova.compute.manager [None req-613f74b4-53ef-4184-aab8-3a5f3e66c2ff b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] instance snapshotting#033[00m
Dec  2 06:24:40 np0005542249 nova_compute[254900]: 2025-12-02 11:24:40.392 254904 DEBUG nova.compute.manager [req-6daf9858-9e7f-4092-9e9b-e5d99de92aeb req-390863c9-ba76-4aaf-81af-d954e085dc06 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Received event network-vif-plugged-677e9f6e-c8b4-450a-a8fa-47c99433fedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:24:40 np0005542249 nova_compute[254900]: 2025-12-02 11:24:40.393 254904 DEBUG oslo_concurrency.lockutils [req-6daf9858-9e7f-4092-9e9b-e5d99de92aeb req-390863c9-ba76-4aaf-81af-d954e085dc06 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "c8ef6338-699a-4f16-8f0b-39f2bb67ee45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:40 np0005542249 nova_compute[254900]: 2025-12-02 11:24:40.394 254904 DEBUG oslo_concurrency.lockutils [req-6daf9858-9e7f-4092-9e9b-e5d99de92aeb req-390863c9-ba76-4aaf-81af-d954e085dc06 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "c8ef6338-699a-4f16-8f0b-39f2bb67ee45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:40 np0005542249 nova_compute[254900]: 2025-12-02 11:24:40.395 254904 DEBUG oslo_concurrency.lockutils [req-6daf9858-9e7f-4092-9e9b-e5d99de92aeb req-390863c9-ba76-4aaf-81af-d954e085dc06 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "c8ef6338-699a-4f16-8f0b-39f2bb67ee45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:24:40 np0005542249 nova_compute[254900]: 2025-12-02 11:24:40.395 254904 DEBUG nova.compute.manager [req-6daf9858-9e7f-4092-9e9b-e5d99de92aeb req-390863c9-ba76-4aaf-81af-d954e085dc06 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] No waiting events found dispatching network-vif-plugged-677e9f6e-c8b4-450a-a8fa-47c99433fedf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:24:40 np0005542249 nova_compute[254900]: 2025-12-02 11:24:40.396 254904 WARNING nova.compute.manager [req-6daf9858-9e7f-4092-9e9b-e5d99de92aeb req-390863c9-ba76-4aaf-81af-d954e085dc06 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Received unexpected event network-vif-plugged-677e9f6e-c8b4-450a-a8fa-47c99433fedf for instance with vm_state active and task_state None.#033[00m
Dec  2 06:24:40 np0005542249 nova_compute[254900]: 2025-12-02 11:24:40.396 254904 DEBUG nova.compute.manager [req-6daf9858-9e7f-4092-9e9b-e5d99de92aeb req-390863c9-ba76-4aaf-81af-d954e085dc06 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Received event network-vif-deleted-ed87e718-f390-41da-bc93-530936610716 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:24:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Dec  2 06:24:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Dec  2 06:24:40 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Dec  2 06:24:40 np0005542249 nova_compute[254900]: 2025-12-02 11:24:40.454 254904 INFO nova.virt.libvirt.driver [None req-613f74b4-53ef-4184-aab8-3a5f3e66c2ff b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Beginning live snapshot process#033[00m
Dec  2 06:24:40 np0005542249 nova_compute[254900]: 2025-12-02 11:24:40.743 254904 DEBUG nova.network.neutron [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Updating instance_info_cache with network_info: [{"id": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "address": "fa:16:3e:02:ef:76", "network": {"id": "202d4c1b-b1c2-4564-b679-1d789b189a11", "bridge": "br-int", "label": "tempest-TestStampPattern-951670540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bda44a38b8b4f31a8b6e8f6f0548898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6b169f0-73", "ovs_interfaceid": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:24:40 np0005542249 nova_compute[254900]: 2025-12-02 11:24:40.753 254904 DEBUG nova.virt.libvirt.imagebackend [None req-613f74b4-53ef-4184-aab8-3a5f3e66c2ff b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] No parent info for 5a40f66c-ab43-47dd-9880-e59f9fa2c60e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Dec  2 06:24:40 np0005542249 nova_compute[254900]: 2025-12-02 11:24:40.769 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Releasing lock "refresh_cache-7d55326f-52eb-4f7f-a9cd-05282ca6ca20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:24:40 np0005542249 nova_compute[254900]: 2025-12-02 11:24:40.770 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 06:24:40 np0005542249 nova_compute[254900]: 2025-12-02 11:24:40.771 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:24:40 np0005542249 nova_compute[254900]: 2025-12-02 11:24:40.771 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 06:24:41 np0005542249 nova_compute[254900]: 2025-12-02 11:24:41.004 254904 DEBUG nova.storage.rbd_utils [None req-613f74b4-53ef-4184-aab8-3a5f3e66c2ff b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] creating snapshot(8698cb8e17c3403c8ef3a54230d1d388) on rbd image(7d55326f-52eb-4f7f-a9cd-05282ca6ca20_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec  2 06:24:41 np0005542249 nova_compute[254900]: 2025-12-02 11:24:41.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:24:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Dec  2 06:24:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Dec  2 06:24:41 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Dec  2 06:24:41 np0005542249 nova_compute[254900]: 2025-12-02 11:24:41.505 254904 DEBUG nova.storage.rbd_utils [None req-613f74b4-53ef-4184-aab8-3a5f3e66c2ff b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] cloning vms/7d55326f-52eb-4f7f-a9cd-05282ca6ca20_disk@8698cb8e17c3403c8ef3a54230d1d388 to images/673eea99-788c-44cc-a8b0-716ae3b6bc5c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec  2 06:24:41 np0005542249 nova_compute[254900]: 2025-12-02 11:24:41.680 254904 DEBUG nova.storage.rbd_utils [None req-613f74b4-53ef-4184-aab8-3a5f3e66c2ff b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] flattening images/673eea99-788c-44cc-a8b0-716ae3b6bc5c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec  2 06:24:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:24:42 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1338: 321 pgs: 321 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 4.4 MiB/s rd, 781 KiB/s wr, 191 op/s
Dec  2 06:24:42 np0005542249 ceph-osd[89966]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec  2 06:24:42 np0005542249 nova_compute[254900]: 2025-12-02 11:24:42.148 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:42 np0005542249 nova_compute[254900]: 2025-12-02 11:24:42.267 254904 DEBUG nova.storage.rbd_utils [None req-613f74b4-53ef-4184-aab8-3a5f3e66c2ff b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] removing snapshot(8698cb8e17c3403c8ef3a54230d1d388) on rbd image(7d55326f-52eb-4f7f-a9cd-05282ca6ca20_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec  2 06:24:42 np0005542249 nova_compute[254900]: 2025-12-02 11:24:42.321 254904 DEBUG nova.compute.manager [req-5f0318e0-6d95-495d-9cc9-213616f1351a req-1815e8e1-f6f6-4b66-b335-068974799c34 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Received event network-changed-677e9f6e-c8b4-450a-a8fa-47c99433fedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:24:42 np0005542249 nova_compute[254900]: 2025-12-02 11:24:42.321 254904 DEBUG nova.compute.manager [req-5f0318e0-6d95-495d-9cc9-213616f1351a req-1815e8e1-f6f6-4b66-b335-068974799c34 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Refreshing instance network info cache due to event network-changed-677e9f6e-c8b4-450a-a8fa-47c99433fedf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:24:42 np0005542249 nova_compute[254900]: 2025-12-02 11:24:42.322 254904 DEBUG oslo_concurrency.lockutils [req-5f0318e0-6d95-495d-9cc9-213616f1351a req-1815e8e1-f6f6-4b66-b335-068974799c34 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-c8ef6338-699a-4f16-8f0b-39f2bb67ee45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:24:42 np0005542249 nova_compute[254900]: 2025-12-02 11:24:42.322 254904 DEBUG oslo_concurrency.lockutils [req-5f0318e0-6d95-495d-9cc9-213616f1351a req-1815e8e1-f6f6-4b66-b335-068974799c34 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-c8ef6338-699a-4f16-8f0b-39f2bb67ee45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:24:42 np0005542249 nova_compute[254900]: 2025-12-02 11:24:42.323 254904 DEBUG nova.network.neutron [req-5f0318e0-6d95-495d-9cc9-213616f1351a req-1815e8e1-f6f6-4b66-b335-068974799c34 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Refreshing network info cache for port 677e9f6e-c8b4-450a-a8fa-47c99433fedf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:24:42 np0005542249 nova_compute[254900]: 2025-12-02 11:24:42.377 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:24:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Dec  2 06:24:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Dec  2 06:24:42 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Dec  2 06:24:42 np0005542249 nova_compute[254900]: 2025-12-02 11:24:42.492 254904 DEBUG nova.storage.rbd_utils [None req-613f74b4-53ef-4184-aab8-3a5f3e66c2ff b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] creating snapshot(snap) on rbd image(673eea99-788c-44cc-a8b0-716ae3b6bc5c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec  2 06:24:43 np0005542249 nova_compute[254900]: 2025-12-02 11:24:43.357 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Dec  2 06:24:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Dec  2 06:24:43 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Dec  2 06:24:43 np0005542249 nova_compute[254900]: 2025-12-02 11:24:43.639 254904 DEBUG nova.compute.manager [req-5d45a0f2-9a98-4f2d-b73b-641e8e4451c2 req-b9ca1b9a-db1c-4cad-844a-78e4bb4c96a0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Received event network-changed-677e9f6e-c8b4-450a-a8fa-47c99433fedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:24:43 np0005542249 nova_compute[254900]: 2025-12-02 11:24:43.640 254904 DEBUG nova.compute.manager [req-5d45a0f2-9a98-4f2d-b73b-641e8e4451c2 req-b9ca1b9a-db1c-4cad-844a-78e4bb4c96a0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Refreshing instance network info cache due to event network-changed-677e9f6e-c8b4-450a-a8fa-47c99433fedf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:24:43 np0005542249 nova_compute[254900]: 2025-12-02 11:24:43.640 254904 DEBUG oslo_concurrency.lockutils [req-5d45a0f2-9a98-4f2d-b73b-641e8e4451c2 req-b9ca1b9a-db1c-4cad-844a-78e4bb4c96a0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-c8ef6338-699a-4f16-8f0b-39f2bb67ee45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:24:43 np0005542249 nova_compute[254900]: 2025-12-02 11:24:43.860 254904 DEBUG nova.network.neutron [req-5f0318e0-6d95-495d-9cc9-213616f1351a req-1815e8e1-f6f6-4b66-b335-068974799c34 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Updated VIF entry in instance network info cache for port 677e9f6e-c8b4-450a-a8fa-47c99433fedf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:24:43 np0005542249 nova_compute[254900]: 2025-12-02 11:24:43.861 254904 DEBUG nova.network.neutron [req-5f0318e0-6d95-495d-9cc9-213616f1351a req-1815e8e1-f6f6-4b66-b335-068974799c34 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Updating instance_info_cache with network_info: [{"id": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "address": "fa:16:3e:f5:ea:49", "network": {"id": "e8562a40-d165-4b23-84c8-7c8f664e882e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1724334586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674916d2c2d94b239e86c66b1f0af922", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677e9f6e-c8", "ovs_interfaceid": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:24:43 np0005542249 nova_compute[254900]: 2025-12-02 11:24:43.885 254904 DEBUG oslo_concurrency.lockutils [req-5f0318e0-6d95-495d-9cc9-213616f1351a req-1815e8e1-f6f6-4b66-b335-068974799c34 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-c8ef6338-699a-4f16-8f0b-39f2bb67ee45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:24:43 np0005542249 nova_compute[254900]: 2025-12-02 11:24:43.885 254904 DEBUG oslo_concurrency.lockutils [req-5d45a0f2-9a98-4f2d-b73b-641e8e4451c2 req-b9ca1b9a-db1c-4cad-844a-78e4bb4c96a0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-c8ef6338-699a-4f16-8f0b-39f2bb67ee45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:24:43 np0005542249 nova_compute[254900]: 2025-12-02 11:24:43.886 254904 DEBUG nova.network.neutron [req-5d45a0f2-9a98-4f2d-b73b-641e8e4451c2 req-b9ca1b9a-db1c-4cad-844a-78e4bb4c96a0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Refreshing network info cache for port 677e9f6e-c8b4-450a-a8fa-47c99433fedf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:24:44 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1341: 321 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 314 active+clean; 2.5 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 18 MiB/s rd, 5.8 MiB/s wr, 448 op/s
Dec  2 06:24:44 np0005542249 nova_compute[254900]: 2025-12-02 11:24:44.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:24:44 np0005542249 nova_compute[254900]: 2025-12-02 11:24:44.387 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:24:44 np0005542249 nova_compute[254900]: 2025-12-02 11:24:44.387 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  2 06:24:45 np0005542249 podman[280620]: 2025-12-02 11:24:45.070141773 +0000 UTC m=+0.134749512 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  2 06:24:45 np0005542249 nova_compute[254900]: 2025-12-02 11:24:45.417 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:24:45 np0005542249 nova_compute[254900]: 2025-12-02 11:24:45.419 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:24:45 np0005542249 nova_compute[254900]: 2025-12-02 11:24:45.420 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:24:45 np0005542249 nova_compute[254900]: 2025-12-02 11:24:45.454 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:45 np0005542249 nova_compute[254900]: 2025-12-02 11:24:45.455 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:45 np0005542249 nova_compute[254900]: 2025-12-02 11:24:45.456 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:24:45 np0005542249 nova_compute[254900]: 2025-12-02 11:24:45.457 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:24:45 np0005542249 nova_compute[254900]: 2025-12-02 11:24:45.457 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:24:45 np0005542249 nova_compute[254900]: 2025-12-02 11:24:45.623 254904 INFO nova.virt.libvirt.driver [None req-613f74b4-53ef-4184-aab8-3a5f3e66c2ff b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Snapshot image upload complete#033[00m
Dec  2 06:24:45 np0005542249 nova_compute[254900]: 2025-12-02 11:24:45.624 254904 INFO nova.compute.manager [None req-613f74b4-53ef-4184-aab8-3a5f3e66c2ff b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Took 5.51 seconds to snapshot the instance on the hypervisor.#033[00m
Dec  2 06:24:45 np0005542249 nova_compute[254900]: 2025-12-02 11:24:45.715 254904 DEBUG nova.compute.manager [req-2ff8fcd8-c05f-4f2e-940d-906bf84a2812 req-4f6d7a79-2e3f-4aa6-8982-8ec236134293 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Received event network-changed-677e9f6e-c8b4-450a-a8fa-47c99433fedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:24:45 np0005542249 nova_compute[254900]: 2025-12-02 11:24:45.716 254904 DEBUG nova.compute.manager [req-2ff8fcd8-c05f-4f2e-940d-906bf84a2812 req-4f6d7a79-2e3f-4aa6-8982-8ec236134293 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Refreshing instance network info cache due to event network-changed-677e9f6e-c8b4-450a-a8fa-47c99433fedf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:24:45 np0005542249 nova_compute[254900]: 2025-12-02 11:24:45.717 254904 DEBUG oslo_concurrency.lockutils [req-2ff8fcd8-c05f-4f2e-940d-906bf84a2812 req-4f6d7a79-2e3f-4aa6-8982-8ec236134293 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-c8ef6338-699a-4f16-8f0b-39f2bb67ee45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:24:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:24:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/936396252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:24:45 np0005542249 nova_compute[254900]: 2025-12-02 11:24:45.959 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:24:46 np0005542249 nova_compute[254900]: 2025-12-02 11:24:46.058 254904 DEBUG nova.network.neutron [req-5d45a0f2-9a98-4f2d-b73b-641e8e4451c2 req-b9ca1b9a-db1c-4cad-844a-78e4bb4c96a0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Updated VIF entry in instance network info cache for port 677e9f6e-c8b4-450a-a8fa-47c99433fedf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:24:46 np0005542249 nova_compute[254900]: 2025-12-02 11:24:46.060 254904 DEBUG nova.network.neutron [req-5d45a0f2-9a98-4f2d-b73b-641e8e4451c2 req-b9ca1b9a-db1c-4cad-844a-78e4bb4c96a0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Updating instance_info_cache with network_info: [{"id": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "address": "fa:16:3e:f5:ea:49", "network": {"id": "e8562a40-d165-4b23-84c8-7c8f664e882e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1724334586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674916d2c2d94b239e86c66b1f0af922", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677e9f6e-c8", "ovs_interfaceid": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:24:46 np0005542249 nova_compute[254900]: 2025-12-02 11:24:46.064 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:24:46 np0005542249 nova_compute[254900]: 2025-12-02 11:24:46.065 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:24:46 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1342: 321 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 314 active+clean; 2.5 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 15 MiB/s rd, 8.4 MiB/s wr, 449 op/s
Dec  2 06:24:46 np0005542249 nova_compute[254900]: 2025-12-02 11:24:46.069 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:24:46 np0005542249 nova_compute[254900]: 2025-12-02 11:24:46.069 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:24:46 np0005542249 nova_compute[254900]: 2025-12-02 11:24:46.076 254904 DEBUG oslo_concurrency.lockutils [req-5d45a0f2-9a98-4f2d-b73b-641e8e4451c2 req-b9ca1b9a-db1c-4cad-844a-78e4bb4c96a0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-c8ef6338-699a-4f16-8f0b-39f2bb67ee45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:24:46 np0005542249 nova_compute[254900]: 2025-12-02 11:24:46.077 254904 DEBUG oslo_concurrency.lockutils [req-2ff8fcd8-c05f-4f2e-940d-906bf84a2812 req-4f6d7a79-2e3f-4aa6-8982-8ec236134293 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-c8ef6338-699a-4f16-8f0b-39f2bb67ee45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:24:46 np0005542249 nova_compute[254900]: 2025-12-02 11:24:46.077 254904 DEBUG nova.network.neutron [req-2ff8fcd8-c05f-4f2e-940d-906bf84a2812 req-4f6d7a79-2e3f-4aa6-8982-8ec236134293 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Refreshing network info cache for port 677e9f6e-c8b4-450a-a8fa-47c99433fedf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:24:46 np0005542249 nova_compute[254900]: 2025-12-02 11:24:46.284 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:24:46 np0005542249 nova_compute[254900]: 2025-12-02 11:24:46.286 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4117MB free_disk=59.94249725341797GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:24:46 np0005542249 nova_compute[254900]: 2025-12-02 11:24:46.286 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:46 np0005542249 nova_compute[254900]: 2025-12-02 11:24:46.287 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:46 np0005542249 nova_compute[254900]: 2025-12-02 11:24:46.360 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Instance 7d55326f-52eb-4f7f-a9cd-05282ca6ca20 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 06:24:46 np0005542249 nova_compute[254900]: 2025-12-02 11:24:46.360 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Instance c8ef6338-699a-4f16-8f0b-39f2bb67ee45 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 06:24:46 np0005542249 nova_compute[254900]: 2025-12-02 11:24:46.361 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:24:46 np0005542249 nova_compute[254900]: 2025-12-02 11:24:46.361 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:24:46 np0005542249 nova_compute[254900]: 2025-12-02 11:24:46.431 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:24:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:24:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:24:46 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2327182309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:24:46 np0005542249 nova_compute[254900]: 2025-12-02 11:24:46.980 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:24:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Dec  2 06:24:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Dec  2 06:24:46 np0005542249 nova_compute[254900]: 2025-12-02 11:24:46.992 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  2 06:24:46 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Dec  2 06:24:47 np0005542249 nova_compute[254900]: 2025-12-02 11:24:47.038 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  2 06:24:47 np0005542249 nova_compute[254900]: 2025-12-02 11:24:47.085 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  2 06:24:47 np0005542249 nova_compute[254900]: 2025-12-02 11:24:47.087 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:24:47 np0005542249 nova_compute[254900]: 2025-12-02 11:24:47.152 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:24:47 np0005542249 nova_compute[254900]: 2025-12-02 11:24:47.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 06:24:48 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1344: 321 pgs: 321 active+clean; 2.5 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 12 MiB/s rd, 12 MiB/s wr, 335 op/s
Dec  2 06:24:48 np0005542249 nova_compute[254900]: 2025-12-02 11:24:48.349 254904 DEBUG nova.network.neutron [req-2ff8fcd8-c05f-4f2e-940d-906bf84a2812 req-4f6d7a79-2e3f-4aa6-8982-8ec236134293 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Updated VIF entry in instance network info cache for port 677e9f6e-c8b4-450a-a8fa-47c99433fedf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  2 06:24:48 np0005542249 nova_compute[254900]: 2025-12-02 11:24:48.351 254904 DEBUG nova.network.neutron [req-2ff8fcd8-c05f-4f2e-940d-906bf84a2812 req-4f6d7a79-2e3f-4aa6-8982-8ec236134293 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Updating instance_info_cache with network_info: [{"id": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "address": "fa:16:3e:f5:ea:49", "network": {"id": "e8562a40-d165-4b23-84c8-7c8f664e882e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1724334586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674916d2c2d94b239e86c66b1f0af922", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677e9f6e-c8", "ovs_interfaceid": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  2 06:24:48 np0005542249 nova_compute[254900]: 2025-12-02 11:24:48.361 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:24:48 np0005542249 nova_compute[254900]: 2025-12-02 11:24:48.376 254904 DEBUG oslo_concurrency.lockutils [req-2ff8fcd8-c05f-4f2e-940d-906bf84a2812 req-4f6d7a79-2e3f-4aa6-8982-8ec236134293 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-c8ef6338-699a-4f16-8f0b-39f2bb67ee45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  2 06:24:49 np0005542249 podman[280689]: 2025-12-02 11:24:49.039410319 +0000 UTC m=+0.111531474 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  2 06:24:49 np0005542249 nova_compute[254900]: 2025-12-02 11:24:49.866 254904 DEBUG oslo_concurrency.lockutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Acquiring lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:24:49 np0005542249 nova_compute[254900]: 2025-12-02 11:24:49.868 254904 DEBUG oslo_concurrency.lockutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:24:49 np0005542249 nova_compute[254900]: 2025-12-02 11:24:49.883 254904 DEBUG nova.compute.manager [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  2 06:24:49 np0005542249 nova_compute[254900]: 2025-12-02 11:24:49.983 254904 DEBUG oslo_concurrency.lockutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:24:49 np0005542249 nova_compute[254900]: 2025-12-02 11:24:49.984 254904 DEBUG oslo_concurrency.lockutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:24:49 np0005542249 nova_compute[254900]: 2025-12-02 11:24:49.991 254904 DEBUG nova.virt.hardware [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  2 06:24:49 np0005542249 nova_compute[254900]: 2025-12-02 11:24:49.991 254904 INFO nova.compute.claims [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Claim successful on node compute-0.ctlplane.example.com
Dec  2 06:24:50 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1345: 321 pgs: 321 active+clean; 2.5 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 9.2 MiB/s rd, 9.2 MiB/s wr, 322 op/s
Dec  2 06:24:50 np0005542249 nova_compute[254900]: 2025-12-02 11:24:50.130 254904 DEBUG oslo_concurrency.processutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 06:24:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:24:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1978279238' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:24:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:24:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1978279238' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:24:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:24:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/749809005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:24:50 np0005542249 nova_compute[254900]: 2025-12-02 11:24:50.618 254904 DEBUG oslo_concurrency.processutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:24:50 np0005542249 nova_compute[254900]: 2025-12-02 11:24:50.626 254904 DEBUG nova.compute.provider_tree [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  2 06:24:50 np0005542249 nova_compute[254900]: 2025-12-02 11:24:50.645 254904 DEBUG nova.scheduler.client.report [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  2 06:24:50 np0005542249 nova_compute[254900]: 2025-12-02 11:24:50.683 254904 DEBUG oslo_concurrency.lockutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:24:50 np0005542249 nova_compute[254900]: 2025-12-02 11:24:50.684 254904 DEBUG nova.compute.manager [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  2 06:24:50 np0005542249 nova_compute[254900]: 2025-12-02 11:24:50.790 254904 DEBUG nova.compute.manager [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  2 06:24:50 np0005542249 nova_compute[254900]: 2025-12-02 11:24:50.791 254904 DEBUG nova.network.neutron [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  2 06:24:50 np0005542249 nova_compute[254900]: 2025-12-02 11:24:50.812 254904 INFO nova.virt.libvirt.driver [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  2 06:24:50 np0005542249 nova_compute[254900]: 2025-12-02 11:24:50.829 254904 DEBUG nova.compute.manager [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  2 06:24:50 np0005542249 nova_compute[254900]: 2025-12-02 11:24:50.956 254904 DEBUG nova.policy [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b3ecaaf4f0044a58b99879bf1c55b18e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8bda44a38b8b4f31a8b6e8f6f0548898', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  2 06:24:50 np0005542249 nova_compute[254900]: 2025-12-02 11:24:50.964 254904 DEBUG nova.compute.manager [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  2 06:24:50 np0005542249 nova_compute[254900]: 2025-12-02 11:24:50.966 254904 DEBUG nova.virt.libvirt.driver [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  2 06:24:50 np0005542249 nova_compute[254900]: 2025-12-02 11:24:50.966 254904 INFO nova.virt.libvirt.driver [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Creating image(s)
Dec  2 06:24:50 np0005542249 nova_compute[254900]: 2025-12-02 11:24:50.985 254904 DEBUG nova.storage.rbd_utils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] rbd image b5c1d606-86ed-453a-b2e0-ee74a8e24c46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  2 06:24:51 np0005542249 nova_compute[254900]: 2025-12-02 11:24:51.007 254904 DEBUG nova.storage.rbd_utils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] rbd image b5c1d606-86ed-453a-b2e0-ee74a8e24c46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  2 06:24:51 np0005542249 nova_compute[254900]: 2025-12-02 11:24:51.025 254904 DEBUG nova.storage.rbd_utils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] rbd image b5c1d606-86ed-453a-b2e0-ee74a8e24c46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  2 06:24:51 np0005542249 nova_compute[254900]: 2025-12-02 11:24:51.029 254904 DEBUG oslo_concurrency.lockutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Acquiring lock "2128595c266ecf03a2b5995b85bbf92ca9694819" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:24:51 np0005542249 nova_compute[254900]: 2025-12-02 11:24:51.030 254904 DEBUG oslo_concurrency.lockutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "2128595c266ecf03a2b5995b85bbf92ca9694819" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:24:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Dec  2 06:24:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Dec  2 06:24:51 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Dec  2 06:24:51 np0005542249 nova_compute[254900]: 2025-12-02 11:24:51.241 254904 DEBUG nova.virt.libvirt.imagebackend [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Image locations are: [{'url': 'rbd://95bc4eaa-1a14-59bf-acf2-4b3da055547d/images/673eea99-788c-44cc-a8b0-716ae3b6bc5c/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://95bc4eaa-1a14-59bf-acf2-4b3da055547d/images/673eea99-788c-44cc-a8b0-716ae3b6bc5c/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec  2 06:24:51 np0005542249 nova_compute[254900]: 2025-12-02 11:24:51.344 254904 DEBUG nova.virt.libvirt.imagebackend [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Selected location: {'url': 'rbd://95bc4eaa-1a14-59bf-acf2-4b3da055547d/images/673eea99-788c-44cc-a8b0-716ae3b6bc5c/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Dec  2 06:24:51 np0005542249 nova_compute[254900]: 2025-12-02 11:24:51.345 254904 DEBUG nova.storage.rbd_utils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] cloning images/673eea99-788c-44cc-a8b0-716ae3b6bc5c@snap to None/b5c1d606-86ed-453a-b2e0-ee74a8e24c46_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec  2 06:24:51 np0005542249 nova_compute[254900]: 2025-12-02 11:24:51.533 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:24:51 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:51.533 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:23:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:25:d0:a1:75:ed'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  2 06:24:51 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:51.535 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  2 06:24:51 np0005542249 nova_compute[254900]: 2025-12-02 11:24:51.640 254904 DEBUG oslo_concurrency.lockutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "2128595c266ecf03a2b5995b85bbf92ca9694819" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:24:51 np0005542249 nova_compute[254900]: 2025-12-02 11:24:51.692 254904 DEBUG nova.network.neutron [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Successfully created port: a908869b-6b7e-4872-bd60-5397511b102a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  2 06:24:51 np0005542249 nova_compute[254900]: 2025-12-02 11:24:51.796 254904 DEBUG nova.objects.instance [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lazy-loading 'migration_context' on Instance uuid b5c1d606-86ed-453a-b2e0-ee74a8e24c46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  2 06:24:51 np0005542249 nova_compute[254900]: 2025-12-02 11:24:51.815 254904 DEBUG nova.virt.libvirt.driver [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  2 06:24:51 np0005542249 nova_compute[254900]: 2025-12-02 11:24:51.816 254904 DEBUG nova.virt.libvirt.driver [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Ensure instance console log exists: /var/lib/nova/instances/b5c1d606-86ed-453a-b2e0-ee74a8e24c46/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  2 06:24:51 np0005542249 nova_compute[254900]: 2025-12-02 11:24:51.816 254904 DEBUG oslo_concurrency.lockutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:24:51 np0005542249 nova_compute[254900]: 2025-12-02 11:24:51.817 254904 DEBUG oslo_concurrency.lockutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:24:51 np0005542249 nova_compute[254900]: 2025-12-02 11:24:51.817 254904 DEBUG oslo_concurrency.lockutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:24:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:24:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Dec  2 06:24:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Dec  2 06:24:51 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Dec  2 06:24:52 np0005542249 nova_compute[254900]: 2025-12-02 11:24:52.060 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764674677.0593402, 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  2 06:24:52 np0005542249 nova_compute[254900]: 2025-12-02 11:24:52.061 254904 INFO nova.compute.manager [-] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] VM Stopped (Lifecycle Event)
Dec  2 06:24:52 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1348: 321 pgs: 321 active+clean; 2.5 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.5 MiB/s wr, 156 op/s
Dec  2 06:24:52 np0005542249 nova_compute[254900]: 2025-12-02 11:24:52.089 254904 DEBUG nova.compute.manager [None req-13ea3728-1276-4899-a948-916d90f8ccc3 - - - - - -] [instance: 2c9c5605-aec2-4e63-a5b5-49c57c3e9e17] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  2 06:24:52 np0005542249 nova_compute[254900]: 2025-12-02 11:24:52.187 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:24:52 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:52Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f5:ea:49 10.100.0.4
Dec  2 06:24:52 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:52Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f5:ea:49 10.100.0.4
Dec  2 06:24:52 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:52.537 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  2 06:24:53 np0005542249 nova_compute[254900]: 2025-12-02 11:24:53.073 254904 DEBUG nova.network.neutron [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Successfully updated port: a908869b-6b7e-4872-bd60-5397511b102a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  2 06:24:53 np0005542249 nova_compute[254900]: 2025-12-02 11:24:53.097 254904 DEBUG oslo_concurrency.lockutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Acquiring lock "refresh_cache-b5c1d606-86ed-453a-b2e0-ee74a8e24c46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  2 06:24:53 np0005542249 nova_compute[254900]: 2025-12-02 11:24:53.097 254904 DEBUG oslo_concurrency.lockutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Acquired lock "refresh_cache-b5c1d606-86ed-453a-b2e0-ee74a8e24c46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  2 06:24:53 np0005542249 nova_compute[254900]: 2025-12-02 11:24:53.098 254904 DEBUG nova.network.neutron [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  2 06:24:53 np0005542249 nova_compute[254900]: 2025-12-02 11:24:53.194 254904 DEBUG nova.compute.manager [req-05f341e2-d4c7-479e-b8d8-19c57a03e69f req-02bc825a-183d-4d37-8ae5-669459b18dc2 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Received event network-changed-a908869b-6b7e-4872-bd60-5397511b102a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  2 06:24:53 np0005542249 nova_compute[254900]: 2025-12-02 11:24:53.195 254904 DEBUG nova.compute.manager [req-05f341e2-d4c7-479e-b8d8-19c57a03e69f req-02bc825a-183d-4d37-8ae5-669459b18dc2 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Refreshing instance network info cache due to event network-changed-a908869b-6b7e-4872-bd60-5397511b102a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:24:53 np0005542249 nova_compute[254900]: 2025-12-02 11:24:53.196 254904 DEBUG oslo_concurrency.lockutils [req-05f341e2-d4c7-479e-b8d8-19c57a03e69f req-02bc825a-183d-4d37-8ae5-669459b18dc2 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-b5c1d606-86ed-453a-b2e0-ee74a8e24c46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:24:53 np0005542249 nova_compute[254900]: 2025-12-02 11:24:53.326 254904 DEBUG nova.network.neutron [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 06:24:53 np0005542249 nova_compute[254900]: 2025-12-02 11:24:53.395 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:24:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2583013386' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:24:54 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1349: 321 pgs: 321 active+clean; 2.5 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 336 KiB/s rd, 2.8 MiB/s wr, 163 op/s
Dec  2 06:24:54 np0005542249 nova_compute[254900]: 2025-12-02 11:24:54.655 254904 DEBUG nova.network.neutron [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Updating instance_info_cache with network_info: [{"id": "a908869b-6b7e-4872-bd60-5397511b102a", "address": "fa:16:3e:12:29:28", "network": {"id": "202d4c1b-b1c2-4564-b679-1d789b189a11", "bridge": "br-int", "label": "tempest-TestStampPattern-951670540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bda44a38b8b4f31a8b6e8f6f0548898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa908869b-6b", "ovs_interfaceid": "a908869b-6b7e-4872-bd60-5397511b102a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:24:54 np0005542249 nova_compute[254900]: 2025-12-02 11:24:54.673 254904 DEBUG oslo_concurrency.lockutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Releasing lock "refresh_cache-b5c1d606-86ed-453a-b2e0-ee74a8e24c46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:24:54 np0005542249 nova_compute[254900]: 2025-12-02 11:24:54.674 254904 DEBUG nova.compute.manager [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Instance network_info: |[{"id": "a908869b-6b7e-4872-bd60-5397511b102a", "address": "fa:16:3e:12:29:28", "network": {"id": "202d4c1b-b1c2-4564-b679-1d789b189a11", "bridge": "br-int", "label": "tempest-TestStampPattern-951670540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bda44a38b8b4f31a8b6e8f6f0548898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa908869b-6b", "ovs_interfaceid": "a908869b-6b7e-4872-bd60-5397511b102a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 06:24:54 np0005542249 nova_compute[254900]: 2025-12-02 11:24:54.674 254904 DEBUG oslo_concurrency.lockutils [req-05f341e2-d4c7-479e-b8d8-19c57a03e69f req-02bc825a-183d-4d37-8ae5-669459b18dc2 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-b5c1d606-86ed-453a-b2e0-ee74a8e24c46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:24:54 np0005542249 nova_compute[254900]: 2025-12-02 11:24:54.674 254904 DEBUG nova.network.neutron [req-05f341e2-d4c7-479e-b8d8-19c57a03e69f req-02bc825a-183d-4d37-8ae5-669459b18dc2 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Refreshing network info cache for port a908869b-6b7e-4872-bd60-5397511b102a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:24:54 np0005542249 nova_compute[254900]: 2025-12-02 11:24:54.677 254904 DEBUG nova.virt.libvirt.driver [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Start _get_guest_xml network_info=[{"id": "a908869b-6b7e-4872-bd60-5397511b102a", "address": "fa:16:3e:12:29:28", "network": {"id": "202d4c1b-b1c2-4564-b679-1d789b189a11", "bridge": "br-int", "label": "tempest-TestStampPattern-951670540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bda44a38b8b4f31a8b6e8f6f0548898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa908869b-6b", "ovs_interfaceid": "a908869b-6b7e-4872-bd60-5397511b102a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-02T11:24:39Z,direct_url=<?>,disk_format='raw',id=673eea99-788c-44cc-a8b0-716ae3b6bc5c,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-118643116',owner='8bda44a38b8b4f31a8b6e8f6f0548898',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-02T11:24:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'image_id': '673eea99-788c-44cc-a8b0-716ae3b6bc5c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:24:54 np0005542249 nova_compute[254900]: 2025-12-02 11:24:54.683 254904 WARNING nova.virt.libvirt.driver [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:24:54 np0005542249 nova_compute[254900]: 2025-12-02 11:24:54.688 254904 DEBUG nova.virt.libvirt.host [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:24:54 np0005542249 nova_compute[254900]: 2025-12-02 11:24:54.688 254904 DEBUG nova.virt.libvirt.host [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:24:54 np0005542249 nova_compute[254900]: 2025-12-02 11:24:54.696 254904 DEBUG nova.virt.libvirt.host [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:24:54 np0005542249 nova_compute[254900]: 2025-12-02 11:24:54.697 254904 DEBUG nova.virt.libvirt.host [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:24:54 np0005542249 nova_compute[254900]: 2025-12-02 11:24:54.697 254904 DEBUG nova.virt.libvirt.driver [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:24:54 np0005542249 nova_compute[254900]: 2025-12-02 11:24:54.698 254904 DEBUG nova.virt.hardware [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-02T11:24:39Z,direct_url=<?>,disk_format='raw',id=673eea99-788c-44cc-a8b0-716ae3b6bc5c,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-118643116',owner='8bda44a38b8b4f31a8b6e8f6f0548898',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-02T11:24:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:24:54 np0005542249 nova_compute[254900]: 2025-12-02 11:24:54.698 254904 DEBUG nova.virt.hardware [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:24:54 np0005542249 nova_compute[254900]: 2025-12-02 11:24:54.698 254904 DEBUG nova.virt.hardware [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:24:54 np0005542249 nova_compute[254900]: 2025-12-02 11:24:54.699 254904 DEBUG nova.virt.hardware [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:24:54 np0005542249 nova_compute[254900]: 2025-12-02 11:24:54.699 254904 DEBUG nova.virt.hardware [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:24:54 np0005542249 nova_compute[254900]: 2025-12-02 11:24:54.699 254904 DEBUG nova.virt.hardware [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:24:54 np0005542249 nova_compute[254900]: 2025-12-02 11:24:54.699 254904 DEBUG nova.virt.hardware [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:24:54 np0005542249 nova_compute[254900]: 2025-12-02 11:24:54.700 254904 DEBUG nova.virt.hardware [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:24:54 np0005542249 nova_compute[254900]: 2025-12-02 11:24:54.700 254904 DEBUG nova.virt.hardware [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:24:54 np0005542249 nova_compute[254900]: 2025-12-02 11:24:54.700 254904 DEBUG nova.virt.hardware [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:24:54 np0005542249 nova_compute[254900]: 2025-12-02 11:24:54.700 254904 DEBUG nova.virt.hardware [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:24:54 np0005542249 nova_compute[254900]: 2025-12-02 11:24:54.703 254904 DEBUG oslo_concurrency.processutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:24:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:24:55 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/192252138' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.217 254904 DEBUG oslo_concurrency.processutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.256 254904 DEBUG nova.storage.rbd_utils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] rbd image b5c1d606-86ed-453a-b2e0-ee74a8e24c46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.261 254904 DEBUG oslo_concurrency.processutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:24:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:24:55 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1636538126' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.782 254904 DEBUG oslo_concurrency.processutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.786 254904 DEBUG nova.virt.libvirt.vif [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:24:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-867719560',display_name='tempest-TestStampPattern-server-867719560',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-867719560',id=15,image_ref='673eea99-788c-44cc-a8b0-716ae3b6bc5c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK61TL5woGKETyNF39DpzDOQY/3175WIhzETkz8a48X3DCBshG53o26Fmw3NbtfN3xWBBqbdYLP1coPVuKFyoXP+AsOB02VoMViHoPOBDDnBHEpudY8V6Kx5OUC4H683Ng==',key_name='tempest-TestStampPattern-1085425870',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8bda44a38b8b4f31a8b6e8f6f0548898',ramdisk_id='',reservation_id='r-uc8s2t0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='7d55326f-52eb-4f7f-a9cd-05282ca6ca20',image_min_disk='1',image_min_ram='0',image_owner_id='8bda44a38b8b4f31a8b6e8f6f0548898',image_owner_project_name='tempest-TestStampPattern-2114823383',image_owner_user_name='tempest-TestStampPattern-2114823383-project-member',image_user_id='b3ecaaf4f0044a58b99879bf1c55b18e',network_allocated='True',owner_project_name='tempest-TestStampPattern-2114823383',owner_user_name='tempest-TestStampPattern-2114823383-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:24:50Z,user_data=None,user_id='b3ecaaf4f0044a58b99879bf1c55b18e',uuid=b5c1d606-86ed-453a-b2e0-ee74a8e24c46,vcpu_
model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a908869b-6b7e-4872-bd60-5397511b102a", "address": "fa:16:3e:12:29:28", "network": {"id": "202d4c1b-b1c2-4564-b679-1d789b189a11", "bridge": "br-int", "label": "tempest-TestStampPattern-951670540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bda44a38b8b4f31a8b6e8f6f0548898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa908869b-6b", "ovs_interfaceid": "a908869b-6b7e-4872-bd60-5397511b102a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.786 254904 DEBUG nova.network.os_vif_util [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Converting VIF {"id": "a908869b-6b7e-4872-bd60-5397511b102a", "address": "fa:16:3e:12:29:28", "network": {"id": "202d4c1b-b1c2-4564-b679-1d789b189a11", "bridge": "br-int", "label": "tempest-TestStampPattern-951670540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bda44a38b8b4f31a8b6e8f6f0548898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa908869b-6b", "ovs_interfaceid": "a908869b-6b7e-4872-bd60-5397511b102a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.788 254904 DEBUG nova.network.os_vif_util [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:29:28,bridge_name='br-int',has_traffic_filtering=True,id=a908869b-6b7e-4872-bd60-5397511b102a,network=Network(202d4c1b-b1c2-4564-b679-1d789b189a11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa908869b-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.791 254904 DEBUG nova.objects.instance [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lazy-loading 'pci_devices' on Instance uuid b5c1d606-86ed-453a-b2e0-ee74a8e24c46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.821 254904 DEBUG nova.virt.libvirt.driver [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:24:55 np0005542249 nova_compute[254900]:  <uuid>b5c1d606-86ed-453a-b2e0-ee74a8e24c46</uuid>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:  <name>instance-0000000f</name>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <nova:name>tempest-TestStampPattern-server-867719560</nova:name>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:24:54</nova:creationTime>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:24:55 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:        <nova:user uuid="b3ecaaf4f0044a58b99879bf1c55b18e">tempest-TestStampPattern-2114823383-project-member</nova:user>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:        <nova:project uuid="8bda44a38b8b4f31a8b6e8f6f0548898">tempest-TestStampPattern-2114823383</nova:project>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <nova:root type="image" uuid="673eea99-788c-44cc-a8b0-716ae3b6bc5c"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:        <nova:port uuid="a908869b-6b7e-4872-bd60-5397511b102a">
Dec  2 06:24:55 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <entry name="serial">b5c1d606-86ed-453a-b2e0-ee74a8e24c46</entry>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <entry name="uuid">b5c1d606-86ed-453a-b2e0-ee74a8e24c46</entry>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/b5c1d606-86ed-453a-b2e0-ee74a8e24c46_disk">
Dec  2 06:24:55 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:24:55 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/b5c1d606-86ed-453a-b2e0-ee74a8e24c46_disk.config">
Dec  2 06:24:55 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:24:55 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:12:29:28"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <target dev="tapa908869b-6b"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/b5c1d606-86ed-453a-b2e0-ee74a8e24c46/console.log" append="off"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <input type="keyboard" bus="usb"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:24:55 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:24:55 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:24:55 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:24:55 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.822 254904 DEBUG nova.compute.manager [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Preparing to wait for external event network-vif-plugged-a908869b-6b7e-4872-bd60-5397511b102a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.822 254904 DEBUG oslo_concurrency.lockutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Acquiring lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.823 254904 DEBUG oslo_concurrency.lockutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.823 254904 DEBUG oslo_concurrency.lockutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.824 254904 DEBUG nova.virt.libvirt.vif [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:24:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-867719560',display_name='tempest-TestStampPattern-server-867719560',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-867719560',id=15,image_ref='673eea99-788c-44cc-a8b0-716ae3b6bc5c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK61TL5woGKETyNF39DpzDOQY/3175WIhzETkz8a48X3DCBshG53o26Fmw3NbtfN3xWBBqbdYLP1coPVuKFyoXP+AsOB02VoMViHoPOBDDnBHEpudY8V6Kx5OUC4H683Ng==',key_name='tempest-TestStampPattern-1085425870',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8bda44a38b8b4f31a8b6e8f6f0548898',ramdisk_id='',reservation_id='r-uc8s2t0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='7d55326f-52eb-4f7f-a9cd-05282ca6ca20',image_min_disk='1',image_min_ram='0',image_owner_id='8bda44a38b8b4f31a8b6e8f6f0548898',image_owner_project_name='tempest-TestStampPattern-2114823383',image_owner_user_name='tempest-TestStampPattern-2114823383-project-member',image_user_id='b3ecaaf4f0044a58b99879bf1c55b18e',network_allocated='True',owner_project_name='tempest-TestStampPattern-2114823383',owner_user_name='tempest-TestStampPattern-2114823383-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:24:50Z,user_data=None,user_id='b3ecaaf4f0044a58b99879bf1c55b18e',uuid=b5c1d606-86ed-453a-b2e0-ee74a8e2
4c46,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a908869b-6b7e-4872-bd60-5397511b102a", "address": "fa:16:3e:12:29:28", "network": {"id": "202d4c1b-b1c2-4564-b679-1d789b189a11", "bridge": "br-int", "label": "tempest-TestStampPattern-951670540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bda44a38b8b4f31a8b6e8f6f0548898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa908869b-6b", "ovs_interfaceid": "a908869b-6b7e-4872-bd60-5397511b102a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.825 254904 DEBUG nova.network.os_vif_util [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Converting VIF {"id": "a908869b-6b7e-4872-bd60-5397511b102a", "address": "fa:16:3e:12:29:28", "network": {"id": "202d4c1b-b1c2-4564-b679-1d789b189a11", "bridge": "br-int", "label": "tempest-TestStampPattern-951670540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bda44a38b8b4f31a8b6e8f6f0548898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa908869b-6b", "ovs_interfaceid": "a908869b-6b7e-4872-bd60-5397511b102a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.829 254904 DEBUG nova.network.os_vif_util [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:29:28,bridge_name='br-int',has_traffic_filtering=True,id=a908869b-6b7e-4872-bd60-5397511b102a,network=Network(202d4c1b-b1c2-4564-b679-1d789b189a11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa908869b-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.830 254904 DEBUG os_vif [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:29:28,bridge_name='br-int',has_traffic_filtering=True,id=a908869b-6b7e-4872-bd60-5397511b102a,network=Network(202d4c1b-b1c2-4564-b679-1d789b189a11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa908869b-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.830 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.831 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.832 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.836 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.837 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa908869b-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.837 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa908869b-6b, col_values=(('external_ids', {'iface-id': 'a908869b-6b7e-4872-bd60-5397511b102a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:12:29:28', 'vm-uuid': 'b5c1d606-86ed-453a-b2e0-ee74a8e24c46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.840 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:55 np0005542249 NetworkManager[48987]: <info>  [1764674695.8423] manager: (tapa908869b-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.843 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.881 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.883 254904 INFO os_vif [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:29:28,bridge_name='br-int',has_traffic_filtering=True,id=a908869b-6b7e-4872-bd60-5397511b102a,network=Network(202d4c1b-b1c2-4564-b679-1d789b189a11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa908869b-6b')#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.941 254904 DEBUG nova.virt.libvirt.driver [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.942 254904 DEBUG nova.virt.libvirt.driver [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.942 254904 DEBUG nova.virt.libvirt.driver [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] No VIF found with MAC fa:16:3e:12:29:28, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.943 254904 INFO nova.virt.libvirt.driver [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Using config drive#033[00m
Dec  2 06:24:55 np0005542249 nova_compute[254900]: 2025-12-02 11:24:55.969 254904 DEBUG nova.storage.rbd_utils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] rbd image b5c1d606-86ed-453a-b2e0-ee74a8e24c46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:24:56 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1350: 321 pgs: 321 active+clean; 2.5 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 513 KiB/s rd, 3.1 MiB/s wr, 210 op/s
Dec  2 06:24:56 np0005542249 nova_compute[254900]: 2025-12-02 11:24:56.329 254904 DEBUG nova.network.neutron [req-05f341e2-d4c7-479e-b8d8-19c57a03e69f req-02bc825a-183d-4d37-8ae5-669459b18dc2 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Updated VIF entry in instance network info cache for port a908869b-6b7e-4872-bd60-5397511b102a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:24:56 np0005542249 nova_compute[254900]: 2025-12-02 11:24:56.330 254904 DEBUG nova.network.neutron [req-05f341e2-d4c7-479e-b8d8-19c57a03e69f req-02bc825a-183d-4d37-8ae5-669459b18dc2 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Updating instance_info_cache with network_info: [{"id": "a908869b-6b7e-4872-bd60-5397511b102a", "address": "fa:16:3e:12:29:28", "network": {"id": "202d4c1b-b1c2-4564-b679-1d789b189a11", "bridge": "br-int", "label": "tempest-TestStampPattern-951670540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bda44a38b8b4f31a8b6e8f6f0548898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa908869b-6b", "ovs_interfaceid": "a908869b-6b7e-4872-bd60-5397511b102a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:24:56 np0005542249 nova_compute[254900]: 2025-12-02 11:24:56.357 254904 DEBUG oslo_concurrency.lockutils [req-05f341e2-d4c7-479e-b8d8-19c57a03e69f req-02bc825a-183d-4d37-8ae5-669459b18dc2 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-b5c1d606-86ed-453a-b2e0-ee74a8e24c46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:24:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:24:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:24:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:24:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:24:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:24:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:24:56 np0005542249 nova_compute[254900]: 2025-12-02 11:24:56.443 254904 INFO nova.virt.libvirt.driver [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Creating config drive at /var/lib/nova/instances/b5c1d606-86ed-453a-b2e0-ee74a8e24c46/disk.config#033[00m
Dec  2 06:24:56 np0005542249 nova_compute[254900]: 2025-12-02 11:24:56.449 254904 DEBUG oslo_concurrency.processutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b5c1d606-86ed-453a-b2e0-ee74a8e24c46/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxivofyxj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:24:56 np0005542249 nova_compute[254900]: 2025-12-02 11:24:56.599 254904 DEBUG oslo_concurrency.processutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b5c1d606-86ed-453a-b2e0-ee74a8e24c46/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxivofyxj" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:24:56 np0005542249 nova_compute[254900]: 2025-12-02 11:24:56.638 254904 DEBUG nova.storage.rbd_utils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] rbd image b5c1d606-86ed-453a-b2e0-ee74a8e24c46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:24:56 np0005542249 nova_compute[254900]: 2025-12-02 11:24:56.644 254904 DEBUG oslo_concurrency.processutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b5c1d606-86ed-453a-b2e0-ee74a8e24c46/disk.config b5c1d606-86ed-453a-b2e0-ee74a8e24c46_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:24:56 np0005542249 nova_compute[254900]: 2025-12-02 11:24:56.930 254904 DEBUG oslo_concurrency.processutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b5c1d606-86ed-453a-b2e0-ee74a8e24c46/disk.config b5c1d606-86ed-453a-b2e0-ee74a8e24c46_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:24:56 np0005542249 nova_compute[254900]: 2025-12-02 11:24:56.932 254904 INFO nova.virt.libvirt.driver [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Deleting local config drive /var/lib/nova/instances/b5c1d606-86ed-453a-b2e0-ee74a8e24c46/disk.config because it was imported into RBD.#033[00m
Dec  2 06:24:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:24:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Dec  2 06:24:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Dec  2 06:24:56 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Dec  2 06:24:57 np0005542249 NetworkManager[48987]: <info>  [1764674697.0221] manager: (tapa908869b-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/87)
Dec  2 06:24:57 np0005542249 kernel: tapa908869b-6b: entered promiscuous mode
Dec  2 06:24:57 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:57Z|00145|binding|INFO|Claiming lport a908869b-6b7e-4872-bd60-5397511b102a for this chassis.
Dec  2 06:24:57 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:57Z|00146|binding|INFO|a908869b-6b7e-4872-bd60-5397511b102a: Claiming fa:16:3e:12:29:28 10.100.0.6
Dec  2 06:24:57 np0005542249 nova_compute[254900]: 2025-12-02 11:24:57.027 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:57 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:57.042 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:29:28 10.100.0.6'], port_security=['fa:16:3e:12:29:28 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b5c1d606-86ed-453a-b2e0-ee74a8e24c46', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-202d4c1b-b1c2-4564-b679-1d789b189a11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bda44a38b8b4f31a8b6e8f6f0548898', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5dfe3f56-bbf4-4aa1-aeeb-3cc512936610', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64d73c60-6b3f-4965-8ef1-0cdac8312377, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=a908869b-6b7e-4872-bd60-5397511b102a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:24:57 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:57.044 163757 INFO neutron.agent.ovn.metadata.agent [-] Port a908869b-6b7e-4872-bd60-5397511b102a in datapath 202d4c1b-b1c2-4564-b679-1d789b189a11 bound to our chassis#033[00m
Dec  2 06:24:57 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:57.048 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 202d4c1b-b1c2-4564-b679-1d789b189a11#033[00m
Dec  2 06:24:57 np0005542249 systemd-udevd[281042]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:24:57 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:57.073 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[0b29fd8e-79e2-4ff5-b544-d2e950f03954]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:57 np0005542249 systemd-machined[216222]: New machine qemu-15-instance-0000000f.
Dec  2 06:24:57 np0005542249 NetworkManager[48987]: <info>  [1764674697.0818] device (tapa908869b-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:24:57 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:57Z|00147|binding|INFO|Setting lport a908869b-6b7e-4872-bd60-5397511b102a ovn-installed in OVS
Dec  2 06:24:57 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:57Z|00148|binding|INFO|Setting lport a908869b-6b7e-4872-bd60-5397511b102a up in Southbound
Dec  2 06:24:57 np0005542249 NetworkManager[48987]: <info>  [1764674697.0833] device (tapa908869b-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:24:57 np0005542249 nova_compute[254900]: 2025-12-02 11:24:57.084 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:57 np0005542249 systemd[1]: Started Virtual Machine qemu-15-instance-0000000f.
Dec  2 06:24:57 np0005542249 nova_compute[254900]: 2025-12-02 11:24:57.093 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:57 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:57.119 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[b4f95120-c810-4f64-9327-1913c5dfe968]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:57 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:57.123 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1a7bd8-61f7-4486-9588-e58f37190fe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:57 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:57.168 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[fb65ad0f-3a21-4b75-bcb3-b286db2c0090]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:57 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:57.196 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[952349ad-de05-476e-8f40-95322bf44287]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap202d4c1b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:0f:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 484032, 'reachable_time': 17542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281055, 'error': None, 'target': 'ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:57 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:57.218 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[648ac76f-c3ff-4b80-95a2-fabdd1d231f5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap202d4c1b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 484050, 'tstamp': 484050}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281057, 'error': None, 'target': 'ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap202d4c1b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 484054, 'tstamp': 484054}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281057, 'error': None, 'target': 'ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:57 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:57.220 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap202d4c1b-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:24:57 np0005542249 nova_compute[254900]: 2025-12-02 11:24:57.222 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:57 np0005542249 nova_compute[254900]: 2025-12-02 11:24:57.223 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:57 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:57.224 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap202d4c1b-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:24:57 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:57.224 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:24:57 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:57.224 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap202d4c1b-b0, col_values=(('external_ids', {'iface-id': '3e075ddd-b183-4a84-8a21-ad15a1122c7d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:24:57 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:57.224 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:24:57 np0005542249 nova_compute[254900]: 2025-12-02 11:24:57.443 254904 DEBUG nova.compute.manager [req-370ed8fb-c389-4002-a74f-31f186d2028b req-540c28e2-d9d6-4719-b5bb-06dddaef956f 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Received event network-vif-plugged-a908869b-6b7e-4872-bd60-5397511b102a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:24:57 np0005542249 nova_compute[254900]: 2025-12-02 11:24:57.444 254904 DEBUG oslo_concurrency.lockutils [req-370ed8fb-c389-4002-a74f-31f186d2028b req-540c28e2-d9d6-4719-b5bb-06dddaef956f 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:57 np0005542249 nova_compute[254900]: 2025-12-02 11:24:57.444 254904 DEBUG oslo_concurrency.lockutils [req-370ed8fb-c389-4002-a74f-31f186d2028b req-540c28e2-d9d6-4719-b5bb-06dddaef956f 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:57 np0005542249 nova_compute[254900]: 2025-12-02 11:24:57.444 254904 DEBUG oslo_concurrency.lockutils [req-370ed8fb-c389-4002-a74f-31f186d2028b req-540c28e2-d9d6-4719-b5bb-06dddaef956f 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:24:57 np0005542249 nova_compute[254900]: 2025-12-02 11:24:57.445 254904 DEBUG nova.compute.manager [req-370ed8fb-c389-4002-a74f-31f186d2028b req-540c28e2-d9d6-4719-b5bb-06dddaef956f 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Processing event network-vif-plugged-a908869b-6b7e-4872-bd60-5397511b102a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.052 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674698.0518587, b5c1d606-86ed-453a-b2e0-ee74a8e24c46 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.052 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] VM Started (Lifecycle Event)#033[00m
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.056 254904 DEBUG nova.compute.manager [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.063 254904 DEBUG nova.virt.libvirt.driver [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.067 254904 INFO nova.virt.libvirt.driver [-] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Instance spawned successfully.#033[00m
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.067 254904 INFO nova.compute.manager [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Took 7.10 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.067 254904 DEBUG nova.compute.manager [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.069 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:24:58 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1352: 321 pgs: 321 active+clean; 2.7 GiB data, 2.8 GiB used, 57 GiB / 60 GiB avail; 717 KiB/s rd, 31 MiB/s wr, 332 op/s
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.078 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.242 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.243 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674698.0555615, b5c1d606-86ed-453a-b2e0-ee74a8e24c46 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.243 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] VM Paused (Lifecycle Event)#033[00m
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.262 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.276 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674698.0623796, b5c1d606-86ed-453a-b2e0-ee74a8e24c46 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.277 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] VM Resumed (Lifecycle Event)#033[00m
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.279 254904 INFO nova.compute.manager [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Took 8.31 seconds to build instance.#033[00m
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.309 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.313 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.321 254904 DEBUG oslo_concurrency.lockutils [None req-2e5b8e4f-6a2d-451a-93a6-60032a57cec4 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.453s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.405 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.938 254904 DEBUG oslo_concurrency.lockutils [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Acquiring lock "c8ef6338-699a-4f16-8f0b-39f2bb67ee45" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.938 254904 DEBUG oslo_concurrency.lockutils [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Lock "c8ef6338-699a-4f16-8f0b-39f2bb67ee45" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.939 254904 DEBUG oslo_concurrency.lockutils [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Acquiring lock "c8ef6338-699a-4f16-8f0b-39f2bb67ee45-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.939 254904 DEBUG oslo_concurrency.lockutils [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Lock "c8ef6338-699a-4f16-8f0b-39f2bb67ee45-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.939 254904 DEBUG oslo_concurrency.lockutils [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Lock "c8ef6338-699a-4f16-8f0b-39f2bb67ee45-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.940 254904 INFO nova.compute.manager [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Terminating instance#033[00m
Dec  2 06:24:58 np0005542249 nova_compute[254900]: 2025-12-02 11:24:58.945 254904 DEBUG nova.compute.manager [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:24:59 np0005542249 kernel: tap677e9f6e-c8 (unregistering): left promiscuous mode
Dec  2 06:24:59 np0005542249 NetworkManager[48987]: <info>  [1764674699.1355] device (tap677e9f6e-c8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:24:59 np0005542249 nova_compute[254900]: 2025-12-02 11:24:59.145 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:59 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:59Z|00149|binding|INFO|Releasing lport 677e9f6e-c8b4-450a-a8fa-47c99433fedf from this chassis (sb_readonly=0)
Dec  2 06:24:59 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:59Z|00150|binding|INFO|Setting lport 677e9f6e-c8b4-450a-a8fa-47c99433fedf down in Southbound
Dec  2 06:24:59 np0005542249 ovn_controller[153849]: 2025-12-02T11:24:59Z|00151|binding|INFO|Removing iface tap677e9f6e-c8 ovn-installed in OVS
Dec  2 06:24:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:59.152 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:ea:49 10.100.0.4'], port_security=['fa:16:3e:f5:ea:49 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c8ef6338-699a-4f16-8f0b-39f2bb67ee45', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e8562a40-d165-4b23-84c8-7c8f664e882e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674916d2c2d94b239e86c66b1f0af922', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'faee16a0-fea4-495b-80d5-1feb61394d54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c8c90f5e-f1cc-4d3e-8b23-74c26cb1484e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=677e9f6e-c8b4-450a-a8fa-47c99433fedf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:24:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:59.153 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 677e9f6e-c8b4-450a-a8fa-47c99433fedf in datapath e8562a40-d165-4b23-84c8-7c8f664e882e unbound from our chassis#033[00m
Dec  2 06:24:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:59.155 163757 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e8562a40-d165-4b23-84c8-7c8f664e882e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 06:24:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:59.156 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[f2144ef3-ac24-4930-8f69-58835088ed19]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:24:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:24:59.158 163757 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e8562a40-d165-4b23-84c8-7c8f664e882e namespace which is not needed anymore#033[00m
Dec  2 06:24:59 np0005542249 nova_compute[254900]: 2025-12-02 11:24:59.172 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:59 np0005542249 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Dec  2 06:24:59 np0005542249 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Consumed 14.341s CPU time.
Dec  2 06:24:59 np0005542249 systemd-machined[216222]: Machine qemu-14-instance-0000000e terminated.
Dec  2 06:24:59 np0005542249 nova_compute[254900]: 2025-12-02 11:24:59.454 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:59 np0005542249 neutron-haproxy-ovnmeta-e8562a40-d165-4b23-84c8-7c8f664e882e[280442]: [NOTICE]   (280446) : haproxy version is 2.8.14-c23fe91
Dec  2 06:24:59 np0005542249 neutron-haproxy-ovnmeta-e8562a40-d165-4b23-84c8-7c8f664e882e[280442]: [NOTICE]   (280446) : path to executable is /usr/sbin/haproxy
Dec  2 06:24:59 np0005542249 neutron-haproxy-ovnmeta-e8562a40-d165-4b23-84c8-7c8f664e882e[280442]: [WARNING]  (280446) : Exiting Master process...
Dec  2 06:24:59 np0005542249 neutron-haproxy-ovnmeta-e8562a40-d165-4b23-84c8-7c8f664e882e[280442]: [WARNING]  (280446) : Exiting Master process...
Dec  2 06:24:59 np0005542249 neutron-haproxy-ovnmeta-e8562a40-d165-4b23-84c8-7c8f664e882e[280442]: [ALERT]    (280446) : Current worker (280448) exited with code 143 (Terminated)
Dec  2 06:24:59 np0005542249 neutron-haproxy-ovnmeta-e8562a40-d165-4b23-84c8-7c8f664e882e[280442]: [WARNING]  (280446) : All workers exited. Exiting... (0)
Dec  2 06:24:59 np0005542249 systemd[1]: libpod-707d4db4a000ebdb59448ebd131dc084e27367fc55debcab4a62690513370678.scope: Deactivated successfully.
Dec  2 06:24:59 np0005542249 nova_compute[254900]: 2025-12-02 11:24:59.471 254904 INFO nova.virt.libvirt.driver [-] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Instance destroyed successfully.#033[00m
Dec  2 06:24:59 np0005542249 nova_compute[254900]: 2025-12-02 11:24:59.472 254904 DEBUG nova.objects.instance [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Lazy-loading 'resources' on Instance uuid c8ef6338-699a-4f16-8f0b-39f2bb67ee45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:24:59 np0005542249 podman[281120]: 2025-12-02 11:24:59.474319156 +0000 UTC m=+0.177112437 container died 707d4db4a000ebdb59448ebd131dc084e27367fc55debcab4a62690513370678 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8562a40-d165-4b23-84c8-7c8f664e882e, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:24:59 np0005542249 nova_compute[254900]: 2025-12-02 11:24:59.487 254904 DEBUG nova.virt.libvirt.vif [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:24:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-281089066',display_name='tempest-TestVolumeBackupRestore-server-281089066',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-281089066',id=14,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPmzgD+QuHBI397ADK7R+dwgAgosB0Pxeuf5Epg8BSAVtmZeTahW9i6smNPnTbbnmlfUZAuk8gV1dS8px8NP9mt2RK8KUn93g8VdaLWmF6oVZZQJHwqVchLLycXOy7OqYQ==',key_name='tempest-TestVolumeBackupRestore-111150897',keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:24:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674916d2c2d94b239e86c66b1f0af922',ramdisk_id='',reservation_id='r-0gsqleit',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBackupRestore-1579005259',owner_user_name='tempest-TestVolumeBackupRestore-1579005259-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:24:38Z,user_data=None,user_id='1e7562e263fa4d47ae69c1891e4b61ff',uuid=c8ef6338-699a-4f16-8f0b-39f2bb67ee45,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "address": "fa:16:3e:f5:ea:49", "network": {"id": "e8562a40-d165-4b23-84c8-7c8f664e882e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1724334586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674916d2c2d94b239e86c66b1f0af922", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677e9f6e-c8", "ovs_interfaceid": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:24:59 np0005542249 nova_compute[254900]: 2025-12-02 11:24:59.488 254904 DEBUG nova.network.os_vif_util [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Converting VIF {"id": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "address": "fa:16:3e:f5:ea:49", "network": {"id": "e8562a40-d165-4b23-84c8-7c8f664e882e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1724334586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674916d2c2d94b239e86c66b1f0af922", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677e9f6e-c8", "ovs_interfaceid": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:24:59 np0005542249 nova_compute[254900]: 2025-12-02 11:24:59.488 254904 DEBUG nova.network.os_vif_util [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f5:ea:49,bridge_name='br-int',has_traffic_filtering=True,id=677e9f6e-c8b4-450a-a8fa-47c99433fedf,network=Network(e8562a40-d165-4b23-84c8-7c8f664e882e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap677e9f6e-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:24:59 np0005542249 nova_compute[254900]: 2025-12-02 11:24:59.489 254904 DEBUG os_vif [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f5:ea:49,bridge_name='br-int',has_traffic_filtering=True,id=677e9f6e-c8b4-450a-a8fa-47c99433fedf,network=Network(e8562a40-d165-4b23-84c8-7c8f664e882e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap677e9f6e-c8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:24:59 np0005542249 nova_compute[254900]: 2025-12-02 11:24:59.490 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:59 np0005542249 nova_compute[254900]: 2025-12-02 11:24:59.491 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap677e9f6e-c8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:24:59 np0005542249 nova_compute[254900]: 2025-12-02 11:24:59.493 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:59 np0005542249 nova_compute[254900]: 2025-12-02 11:24:59.494 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:24:59 np0005542249 nova_compute[254900]: 2025-12-02 11:24:59.496 254904 INFO os_vif [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f5:ea:49,bridge_name='br-int',has_traffic_filtering=True,id=677e9f6e-c8b4-450a-a8fa-47c99433fedf,network=Network(e8562a40-d165-4b23-84c8-7c8f664e882e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap677e9f6e-c8')#033[00m
Dec  2 06:24:59 np0005542249 nova_compute[254900]: 2025-12-02 11:24:59.528 254904 DEBUG nova.compute.manager [req-db9affcc-9014-4f90-95d9-f916cf12096b req-899552d9-bd1e-4913-9449-48cba1829a8d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Received event network-vif-plugged-a908869b-6b7e-4872-bd60-5397511b102a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:24:59 np0005542249 nova_compute[254900]: 2025-12-02 11:24:59.528 254904 DEBUG oslo_concurrency.lockutils [req-db9affcc-9014-4f90-95d9-f916cf12096b req-899552d9-bd1e-4913-9449-48cba1829a8d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:24:59 np0005542249 nova_compute[254900]: 2025-12-02 11:24:59.529 254904 DEBUG oslo_concurrency.lockutils [req-db9affcc-9014-4f90-95d9-f916cf12096b req-899552d9-bd1e-4913-9449-48cba1829a8d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:24:59 np0005542249 nova_compute[254900]: 2025-12-02 11:24:59.529 254904 DEBUG oslo_concurrency.lockutils [req-db9affcc-9014-4f90-95d9-f916cf12096b req-899552d9-bd1e-4913-9449-48cba1829a8d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:24:59 np0005542249 nova_compute[254900]: 2025-12-02 11:24:59.529 254904 DEBUG nova.compute.manager [req-db9affcc-9014-4f90-95d9-f916cf12096b req-899552d9-bd1e-4913-9449-48cba1829a8d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] No waiting events found dispatching network-vif-plugged-a908869b-6b7e-4872-bd60-5397511b102a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:24:59 np0005542249 nova_compute[254900]: 2025-12-02 11:24:59.530 254904 WARNING nova.compute.manager [req-db9affcc-9014-4f90-95d9-f916cf12096b req-899552d9-bd1e-4913-9449-48cba1829a8d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Received unexpected event network-vif-plugged-a908869b-6b7e-4872-bd60-5397511b102a for instance with vm_state active and task_state None.#033[00m
Dec  2 06:24:59 np0005542249 nova_compute[254900]: 2025-12-02 11:24:59.530 254904 DEBUG nova.compute.manager [req-db9affcc-9014-4f90-95d9-f916cf12096b req-899552d9-bd1e-4913-9449-48cba1829a8d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Received event network-changed-677e9f6e-c8b4-450a-a8fa-47c99433fedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:24:59 np0005542249 nova_compute[254900]: 2025-12-02 11:24:59.530 254904 DEBUG nova.compute.manager [req-db9affcc-9014-4f90-95d9-f916cf12096b req-899552d9-bd1e-4913-9449-48cba1829a8d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Refreshing instance network info cache due to event network-changed-677e9f6e-c8b4-450a-a8fa-47c99433fedf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:24:59 np0005542249 nova_compute[254900]: 2025-12-02 11:24:59.530 254904 DEBUG oslo_concurrency.lockutils [req-db9affcc-9014-4f90-95d9-f916cf12096b req-899552d9-bd1e-4913-9449-48cba1829a8d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-c8ef6338-699a-4f16-8f0b-39f2bb67ee45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:24:59 np0005542249 nova_compute[254900]: 2025-12-02 11:24:59.530 254904 DEBUG oslo_concurrency.lockutils [req-db9affcc-9014-4f90-95d9-f916cf12096b req-899552d9-bd1e-4913-9449-48cba1829a8d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-c8ef6338-699a-4f16-8f0b-39f2bb67ee45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:24:59 np0005542249 nova_compute[254900]: 2025-12-02 11:24:59.531 254904 DEBUG nova.network.neutron [req-db9affcc-9014-4f90-95d9-f916cf12096b req-899552d9-bd1e-4913-9449-48cba1829a8d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Refreshing network info cache for port 677e9f6e-c8b4-450a-a8fa-47c99433fedf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:24:59 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-707d4db4a000ebdb59448ebd131dc084e27367fc55debcab4a62690513370678-userdata-shm.mount: Deactivated successfully.
Dec  2 06:24:59 np0005542249 systemd[1]: var-lib-containers-storage-overlay-df3cbd203ff7be9459bd9ff09ceb108e87360f20e22cafdf099ad4ee0830e36c-merged.mount: Deactivated successfully.
Dec  2 06:24:59 np0005542249 podman[281120]: 2025-12-02 11:24:59.748914816 +0000 UTC m=+0.451708067 container cleanup 707d4db4a000ebdb59448ebd131dc084e27367fc55debcab4a62690513370678 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8562a40-d165-4b23-84c8-7c8f664e882e, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  2 06:24:59 np0005542249 systemd[1]: libpod-conmon-707d4db4a000ebdb59448ebd131dc084e27367fc55debcab4a62690513370678.scope: Deactivated successfully.
Dec  2 06:25:00 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1353: 321 pgs: 321 active+clean; 2.9 GiB data, 2.9 GiB used, 57 GiB / 60 GiB avail; 761 KiB/s rd, 49 MiB/s wr, 274 op/s
Dec  2 06:25:00 np0005542249 podman[281176]: 2025-12-02 11:25:00.369834845 +0000 UTC m=+0.590669792 container remove 707d4db4a000ebdb59448ebd131dc084e27367fc55debcab4a62690513370678 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8562a40-d165-4b23-84c8-7c8f664e882e, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:25:00 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:00.377 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[1989d03f-d31a-4bb4-8059-0e12a590df2e]: (4, ('Tue Dec  2 11:24:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e8562a40-d165-4b23-84c8-7c8f664e882e (707d4db4a000ebdb59448ebd131dc084e27367fc55debcab4a62690513370678)\n707d4db4a000ebdb59448ebd131dc084e27367fc55debcab4a62690513370678\nTue Dec  2 11:24:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e8562a40-d165-4b23-84c8-7c8f664e882e (707d4db4a000ebdb59448ebd131dc084e27367fc55debcab4a62690513370678)\n707d4db4a000ebdb59448ebd131dc084e27367fc55debcab4a62690513370678\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:25:00 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:00.380 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[2e16626b-b757-4bbe-975c-24c5901d747a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:25:00 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:00.381 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8562a40-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:25:00 np0005542249 kernel: tape8562a40-d0: left promiscuous mode
Dec  2 06:25:00 np0005542249 nova_compute[254900]: 2025-12-02 11:25:00.384 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:00 np0005542249 nova_compute[254900]: 2025-12-02 11:25:00.425 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:00 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:00.429 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[299d516c-868d-47de-aa04-6561ea14f6c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:25:00 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:00.448 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[80b1dc09-6620-47d6-98be-9111281c0035]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:25:00 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:00.450 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[deeed0d5-2c7b-442b-80d1-0d8b3a4cedc2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:25:00 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:00.477 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[2693224e-40c2-4b61-9d26-33a31c2916ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487079, 'reachable_time': 17340, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281188, 'error': None, 'target': 'ovnmeta-e8562a40-d165-4b23-84c8-7c8f664e882e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:25:00 np0005542249 systemd[1]: run-netns-ovnmeta\x2de8562a40\x2dd165\x2d4b23\x2d84c8\x2d7c8f664e882e.mount: Deactivated successfully.
Dec  2 06:25:00 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:00.484 164036 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e8562a40-d165-4b23-84c8-7c8f664e882e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 06:25:00 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:00.485 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[c329fe27-06fb-482f-be46-3530826581e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:25:00 np0005542249 nova_compute[254900]: 2025-12-02 11:25:00.854 254904 DEBUG nova.compute.manager [req-ea7a942d-da22-457e-beac-8e2c034ca19a req-ec4ff99b-93bf-4acf-996c-52c2feb5a29e 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Received event network-vif-unplugged-677e9f6e-c8b4-450a-a8fa-47c99433fedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:25:00 np0005542249 nova_compute[254900]: 2025-12-02 11:25:00.854 254904 DEBUG oslo_concurrency.lockutils [req-ea7a942d-da22-457e-beac-8e2c034ca19a req-ec4ff99b-93bf-4acf-996c-52c2feb5a29e 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "c8ef6338-699a-4f16-8f0b-39f2bb67ee45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:25:00 np0005542249 nova_compute[254900]: 2025-12-02 11:25:00.855 254904 DEBUG oslo_concurrency.lockutils [req-ea7a942d-da22-457e-beac-8e2c034ca19a req-ec4ff99b-93bf-4acf-996c-52c2feb5a29e 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "c8ef6338-699a-4f16-8f0b-39f2bb67ee45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:25:00 np0005542249 nova_compute[254900]: 2025-12-02 11:25:00.855 254904 DEBUG oslo_concurrency.lockutils [req-ea7a942d-da22-457e-beac-8e2c034ca19a req-ec4ff99b-93bf-4acf-996c-52c2feb5a29e 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "c8ef6338-699a-4f16-8f0b-39f2bb67ee45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:25:00 np0005542249 nova_compute[254900]: 2025-12-02 11:25:00.856 254904 DEBUG nova.compute.manager [req-ea7a942d-da22-457e-beac-8e2c034ca19a req-ec4ff99b-93bf-4acf-996c-52c2feb5a29e 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] No waiting events found dispatching network-vif-unplugged-677e9f6e-c8b4-450a-a8fa-47c99433fedf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:25:00 np0005542249 nova_compute[254900]: 2025-12-02 11:25:00.856 254904 DEBUG nova.compute.manager [req-ea7a942d-da22-457e-beac-8e2c034ca19a req-ec4ff99b-93bf-4acf-996c-52c2feb5a29e 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Received event network-vif-unplugged-677e9f6e-c8b4-450a-a8fa-47c99433fedf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 06:25:01 np0005542249 nova_compute[254900]: 2025-12-02 11:25:01.746 254904 DEBUG nova.network.neutron [req-db9affcc-9014-4f90-95d9-f916cf12096b req-899552d9-bd1e-4913-9449-48cba1829a8d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Updated VIF entry in instance network info cache for port 677e9f6e-c8b4-450a-a8fa-47c99433fedf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:25:01 np0005542249 nova_compute[254900]: 2025-12-02 11:25:01.747 254904 DEBUG nova.network.neutron [req-db9affcc-9014-4f90-95d9-f916cf12096b req-899552d9-bd1e-4913-9449-48cba1829a8d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Updating instance_info_cache with network_info: [{"id": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "address": "fa:16:3e:f5:ea:49", "network": {"id": "e8562a40-d165-4b23-84c8-7c8f664e882e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1724334586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674916d2c2d94b239e86c66b1f0af922", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677e9f6e-c8", "ovs_interfaceid": "677e9f6e-c8b4-450a-a8fa-47c99433fedf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:25:01 np0005542249 nova_compute[254900]: 2025-12-02 11:25:01.779 254904 DEBUG oslo_concurrency.lockutils [req-db9affcc-9014-4f90-95d9-f916cf12096b req-899552d9-bd1e-4913-9449-48cba1829a8d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-c8ef6338-699a-4f16-8f0b-39f2bb67ee45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:25:01 np0005542249 nova_compute[254900]: 2025-12-02 11:25:01.895 254904 INFO nova.virt.libvirt.driver [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Deleting instance files /var/lib/nova/instances/c8ef6338-699a-4f16-8f0b-39f2bb67ee45_del#033[00m
Dec  2 06:25:01 np0005542249 nova_compute[254900]: 2025-12-02 11:25:01.896 254904 INFO nova.virt.libvirt.driver [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Deletion of /var/lib/nova/instances/c8ef6338-699a-4f16-8f0b-39f2bb67ee45_del complete#033[00m
Dec  2 06:25:01 np0005542249 nova_compute[254900]: 2025-12-02 11:25:01.951 254904 INFO nova.compute.manager [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Took 3.01 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:25:01 np0005542249 nova_compute[254900]: 2025-12-02 11:25:01.953 254904 DEBUG oslo.service.loopingcall [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:25:01 np0005542249 nova_compute[254900]: 2025-12-02 11:25:01.954 254904 DEBUG nova.compute.manager [-] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:25:01 np0005542249 nova_compute[254900]: 2025-12-02 11:25:01.955 254904 DEBUG nova.network.neutron [-] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:25:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:25:02 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1354: 321 pgs: 321 active+clean; 3.0 GiB data, 3.1 GiB used, 57 GiB / 60 GiB avail; 2.2 MiB/s rd, 51 MiB/s wr, 281 op/s
Dec  2 06:25:02 np0005542249 nova_compute[254900]: 2025-12-02 11:25:02.072 254904 DEBUG nova.compute.manager [req-b216a697-f9ef-41c6-b785-bc159f0b4a73 req-452c0449-bd43-4740-a15f-23888335453d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Received event network-changed-a908869b-6b7e-4872-bd60-5397511b102a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:25:02 np0005542249 nova_compute[254900]: 2025-12-02 11:25:02.073 254904 DEBUG nova.compute.manager [req-b216a697-f9ef-41c6-b785-bc159f0b4a73 req-452c0449-bd43-4740-a15f-23888335453d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Refreshing instance network info cache due to event network-changed-a908869b-6b7e-4872-bd60-5397511b102a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:25:02 np0005542249 nova_compute[254900]: 2025-12-02 11:25:02.074 254904 DEBUG oslo_concurrency.lockutils [req-b216a697-f9ef-41c6-b785-bc159f0b4a73 req-452c0449-bd43-4740-a15f-23888335453d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-b5c1d606-86ed-453a-b2e0-ee74a8e24c46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:25:02 np0005542249 nova_compute[254900]: 2025-12-02 11:25:02.074 254904 DEBUG oslo_concurrency.lockutils [req-b216a697-f9ef-41c6-b785-bc159f0b4a73 req-452c0449-bd43-4740-a15f-23888335453d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-b5c1d606-86ed-453a-b2e0-ee74a8e24c46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:25:02 np0005542249 nova_compute[254900]: 2025-12-02 11:25:02.075 254904 DEBUG nova.network.neutron [req-b216a697-f9ef-41c6-b785-bc159f0b4a73 req-452c0449-bd43-4740-a15f-23888335453d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Refreshing network info cache for port a908869b-6b7e-4872-bd60-5397511b102a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:25:02 np0005542249 nova_compute[254900]: 2025-12-02 11:25:02.935 254904 DEBUG nova.compute.manager [req-27892f24-8af8-4e1b-803c-c3070c1ab5f4 req-116be729-78f0-4c3a-842e-6235f140614d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Received event network-vif-plugged-677e9f6e-c8b4-450a-a8fa-47c99433fedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:25:02 np0005542249 nova_compute[254900]: 2025-12-02 11:25:02.936 254904 DEBUG oslo_concurrency.lockutils [req-27892f24-8af8-4e1b-803c-c3070c1ab5f4 req-116be729-78f0-4c3a-842e-6235f140614d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "c8ef6338-699a-4f16-8f0b-39f2bb67ee45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:25:02 np0005542249 nova_compute[254900]: 2025-12-02 11:25:02.936 254904 DEBUG oslo_concurrency.lockutils [req-27892f24-8af8-4e1b-803c-c3070c1ab5f4 req-116be729-78f0-4c3a-842e-6235f140614d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "c8ef6338-699a-4f16-8f0b-39f2bb67ee45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:25:02 np0005542249 nova_compute[254900]: 2025-12-02 11:25:02.937 254904 DEBUG oslo_concurrency.lockutils [req-27892f24-8af8-4e1b-803c-c3070c1ab5f4 req-116be729-78f0-4c3a-842e-6235f140614d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "c8ef6338-699a-4f16-8f0b-39f2bb67ee45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:25:02 np0005542249 nova_compute[254900]: 2025-12-02 11:25:02.937 254904 DEBUG nova.compute.manager [req-27892f24-8af8-4e1b-803c-c3070c1ab5f4 req-116be729-78f0-4c3a-842e-6235f140614d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] No waiting events found dispatching network-vif-plugged-677e9f6e-c8b4-450a-a8fa-47c99433fedf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:25:02 np0005542249 nova_compute[254900]: 2025-12-02 11:25:02.938 254904 WARNING nova.compute.manager [req-27892f24-8af8-4e1b-803c-c3070c1ab5f4 req-116be729-78f0-4c3a-842e-6235f140614d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Received unexpected event network-vif-plugged-677e9f6e-c8b4-450a-a8fa-47c99433fedf for instance with vm_state active and task_state deleting.#033[00m
Dec  2 06:25:03 np0005542249 nova_compute[254900]: 2025-12-02 11:25:03.127 254904 DEBUG nova.network.neutron [-] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:25:03 np0005542249 nova_compute[254900]: 2025-12-02 11:25:03.144 254904 INFO nova.compute.manager [-] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Took 1.19 seconds to deallocate network for instance.#033[00m
Dec  2 06:25:03 np0005542249 nova_compute[254900]: 2025-12-02 11:25:03.331 254904 INFO nova.compute.manager [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Took 0.19 seconds to detach 1 volumes for instance.#033[00m
Dec  2 06:25:03 np0005542249 nova_compute[254900]: 2025-12-02 11:25:03.390 254904 DEBUG oslo_concurrency.lockutils [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:25:03 np0005542249 nova_compute[254900]: 2025-12-02 11:25:03.392 254904 DEBUG oslo_concurrency.lockutils [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:25:03 np0005542249 nova_compute[254900]: 2025-12-02 11:25:03.403 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:03 np0005542249 nova_compute[254900]: 2025-12-02 11:25:03.521 254904 DEBUG oslo_concurrency.processutils [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:25:04 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1355: 321 pgs: 321 active+clean; 3.1 GiB data, 3.2 GiB used, 57 GiB / 60 GiB avail; 2.2 MiB/s rd, 63 MiB/s wr, 322 op/s
Dec  2 06:25:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:25:04 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2661081874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:25:04 np0005542249 nova_compute[254900]: 2025-12-02 11:25:04.188 254904 DEBUG oslo_concurrency.processutils [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.666s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:25:04 np0005542249 nova_compute[254900]: 2025-12-02 11:25:04.198 254904 DEBUG nova.compute.provider_tree [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:25:04 np0005542249 nova_compute[254900]: 2025-12-02 11:25:04.218 254904 DEBUG nova.scheduler.client.report [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:25:04 np0005542249 nova_compute[254900]: 2025-12-02 11:25:04.252 254904 DEBUG oslo_concurrency.lockutils [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:25:04 np0005542249 nova_compute[254900]: 2025-12-02 11:25:04.299 254904 INFO nova.scheduler.client.report [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Deleted allocations for instance c8ef6338-699a-4f16-8f0b-39f2bb67ee45#033[00m
Dec  2 06:25:04 np0005542249 nova_compute[254900]: 2025-12-02 11:25:04.389 254904 DEBUG oslo_concurrency.lockutils [None req-2bee2abe-360d-4888-af7e-7100383d6518 1e7562e263fa4d47ae69c1891e4b61ff 674916d2c2d94b239e86c66b1f0af922 - - default default] Lock "c8ef6338-699a-4f16-8f0b-39f2bb67ee45" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.451s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:25:04 np0005542249 nova_compute[254900]: 2025-12-02 11:25:04.495 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:05 np0005542249 nova_compute[254900]: 2025-12-02 11:25:05.040 254904 DEBUG nova.compute.manager [req-de3553f7-ea40-4069-a2bb-02a4d524454b req-3a19034a-bd6b-4f8d-9b7a-ed7321cb4b7a 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Received event network-vif-deleted-677e9f6e-c8b4-450a-a8fa-47c99433fedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:25:05 np0005542249 nova_compute[254900]: 2025-12-02 11:25:05.359 254904 DEBUG nova.network.neutron [req-b216a697-f9ef-41c6-b785-bc159f0b4a73 req-452c0449-bd43-4740-a15f-23888335453d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Updated VIF entry in instance network info cache for port a908869b-6b7e-4872-bd60-5397511b102a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:25:05 np0005542249 nova_compute[254900]: 2025-12-02 11:25:05.360 254904 DEBUG nova.network.neutron [req-b216a697-f9ef-41c6-b785-bc159f0b4a73 req-452c0449-bd43-4740-a15f-23888335453d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Updating instance_info_cache with network_info: [{"id": "a908869b-6b7e-4872-bd60-5397511b102a", "address": "fa:16:3e:12:29:28", "network": {"id": "202d4c1b-b1c2-4564-b679-1d789b189a11", "bridge": "br-int", "label": "tempest-TestStampPattern-951670540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bda44a38b8b4f31a8b6e8f6f0548898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa908869b-6b", "ovs_interfaceid": "a908869b-6b7e-4872-bd60-5397511b102a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:25:05 np0005542249 nova_compute[254900]: 2025-12-02 11:25:05.385 254904 DEBUG oslo_concurrency.lockutils [req-b216a697-f9ef-41c6-b785-bc159f0b4a73 req-452c0449-bd43-4740-a15f-23888335453d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-b5c1d606-86ed-453a-b2e0-ee74a8e24c46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:25:06 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1356: 321 pgs: 321 active+clean; 3.2 GiB data, 3.3 GiB used, 57 GiB / 60 GiB avail; 2.5 MiB/s rd, 72 MiB/s wr, 295 op/s
Dec  2 06:25:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:25:07 np0005542249 podman[281216]: 2025-12-02 11:25:07.019936701 +0000 UTC m=+0.085055440 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 06:25:08 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1357: 321 pgs: 321 active+clean; 3.4 GiB data, 3.5 GiB used, 57 GiB / 60 GiB avail; 2.2 MiB/s rd, 76 MiB/s wr, 325 op/s
Dec  2 06:25:08 np0005542249 nova_compute[254900]: 2025-12-02 11:25:08.405 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Dec  2 06:25:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Dec  2 06:25:08 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Dec  2 06:25:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:25:09 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/468383377' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:25:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:25:09 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/468383377' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:25:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:25:09 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1219170815' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:25:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:25:09 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1219170815' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:25:09 np0005542249 nova_compute[254900]: 2025-12-02 11:25:09.497 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:09 np0005542249 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 06:25:10 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1359: 321 pgs: 2 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 308 active+clean; 3.4 GiB data, 3.5 GiB used, 56 GiB / 60 GiB avail; 2.3 MiB/s rd, 62 MiB/s wr, 289 op/s
Dec  2 06:25:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Dec  2 06:25:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Dec  2 06:25:10 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Dec  2 06:25:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:25:12 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1361: 321 pgs: 2 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 308 active+clean; 3.1 GiB data, 3.2 GiB used, 57 GiB / 60 GiB avail; 805 KiB/s rd, 50 MiB/s wr, 238 op/s
Dec  2 06:25:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:25:12 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3870231505' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:25:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:25:12 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3870231505' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:25:12 np0005542249 ovn_controller[153849]: 2025-12-02T11:25:12Z|00152|binding|INFO|Releasing lport 3e075ddd-b183-4a84-8a21-ad15a1122c7d from this chassis (sb_readonly=0)
Dec  2 06:25:12 np0005542249 nova_compute[254900]: 2025-12-02 11:25:12.394 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:13 np0005542249 nova_compute[254900]: 2025-12-02 11:25:13.408 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:13 np0005542249 ovn_controller[153849]: 2025-12-02T11:25:13Z|00024|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.9 does not match offer 10.100.0.6
Dec  2 06:25:13 np0005542249 ovn_controller[153849]: 2025-12-02T11:25:13Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:12:29:28 10.100.0.6
Dec  2 06:25:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Dec  2 06:25:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Dec  2 06:25:13 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Dec  2 06:25:14 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1363: 321 pgs: 2 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 308 active+clean; 2.7 GiB data, 2.9 GiB used, 57 GiB / 60 GiB avail; 469 KiB/s rd, 17 MiB/s wr, 270 op/s
Dec  2 06:25:14 np0005542249 nova_compute[254900]: 2025-12-02 11:25:14.469 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764674699.4662042, c8ef6338-699a-4f16-8f0b-39f2bb67ee45 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:25:14 np0005542249 nova_compute[254900]: 2025-12-02 11:25:14.469 254904 INFO nova.compute.manager [-] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] VM Stopped (Lifecycle Event)#033[00m
Dec  2 06:25:14 np0005542249 nova_compute[254900]: 2025-12-02 11:25:14.493 254904 DEBUG nova.compute.manager [None req-634b589f-1e30-4147-885e-ce8d461410be - - - - - -] [instance: c8ef6338-699a-4f16-8f0b-39f2bb67ee45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:25:14 np0005542249 nova_compute[254900]: 2025-12-02 11:25:14.501 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Dec  2 06:25:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Dec  2 06:25:14 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Dec  2 06:25:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:25:15 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1481124618' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:25:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:25:15 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1481124618' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:25:16 np0005542249 podman[281235]: 2025-12-02 11:25:16.010610121 +0000 UTC m=+0.093102486 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Dec  2 06:25:16 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1365: 321 pgs: 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 318 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.6 MiB/s wr, 249 op/s
Dec  2 06:25:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Dec  2 06:25:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Dec  2 06:25:16 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Dec  2 06:25:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:25:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Dec  2 06:25:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Dec  2 06:25:16 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Dec  2 06:25:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:25:17 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/178202713' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:25:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:25:17 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/178202713' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:25:17 np0005542249 ovn_controller[153849]: 2025-12-02T11:25:17Z|00026|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.9 does not match offer 10.100.0.6
Dec  2 06:25:17 np0005542249 ovn_controller[153849]: 2025-12-02T11:25:17Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:12:29:28 10.100.0.6
Dec  2 06:25:18 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1368: 321 pgs: 4 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 310 active+clean; 1.9 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.4 MiB/s rd, 248 KiB/s wr, 253 op/s
Dec  2 06:25:18 np0005542249 nova_compute[254900]: 2025-12-02 11:25:18.409 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:18 np0005542249 ovn_controller[153849]: 2025-12-02T11:25:18Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:12:29:28 10.100.0.6
Dec  2 06:25:18 np0005542249 ovn_controller[153849]: 2025-12-02T11:25:18Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:12:29:28 10.100.0.6
Dec  2 06:25:19 np0005542249 ovn_controller[153849]: 2025-12-02T11:25:19Z|00153|binding|INFO|Releasing lport 3e075ddd-b183-4a84-8a21-ad15a1122c7d from this chassis (sb_readonly=0)
Dec  2 06:25:19 np0005542249 nova_compute[254900]: 2025-12-02 11:25:19.503 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:19 np0005542249 nova_compute[254900]: 2025-12-02 11:25:19.588 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:19.841 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:25:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:19.842 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:25:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:19.842 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:25:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Dec  2 06:25:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Dec  2 06:25:19 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Dec  2 06:25:19 np0005542249 podman[281261]: 2025-12-02 11:25:19.999133048 +0000 UTC m=+0.071651287 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Dec  2 06:25:20 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1370: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 1.6 GiB data, 2.2 GiB used, 58 GiB / 60 GiB avail; 934 KiB/s rd, 31 KiB/s wr, 185 op/s
Dec  2 06:25:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:25:20 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/560020077' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:25:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:25:20 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/560020077' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:25:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:25:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Dec  2 06:25:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Dec  2 06:25:21 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Dec  2 06:25:22 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1372: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 958 MiB data, 1.5 GiB used, 58 GiB / 60 GiB avail; 208 KiB/s rd, 113 KiB/s wr, 182 op/s
Dec  2 06:25:23 np0005542249 nova_compute[254900]: 2025-12-02 11:25:23.453 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:24 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1373: 321 pgs: 321 active+clean; 602 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 214 KiB/s rd, 85 KiB/s wr, 156 op/s
Dec  2 06:25:24 np0005542249 ovn_controller[153849]: 2025-12-02T11:25:24Z|00154|binding|INFO|Releasing lport 3e075ddd-b183-4a84-8a21-ad15a1122c7d from this chassis (sb_readonly=0)
Dec  2 06:25:24 np0005542249 nova_compute[254900]: 2025-12-02 11:25:24.541 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:24 np0005542249 nova_compute[254900]: 2025-12-02 11:25:24.579 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:26 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1374: 321 pgs: 321 active+clean; 266 MiB data, 504 MiB used, 59 GiB / 60 GiB avail; 91 KiB/s rd, 73 KiB/s wr, 111 op/s
Dec  2 06:25:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:25:26
Dec  2 06:25:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:25:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:25:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'volumes', '.mgr', 'vms', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta']
Dec  2 06:25:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:25:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:25:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:25:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:25:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:25:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:25:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:25:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:25:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:25:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:25:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:25:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:25:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:25:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:25:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:25:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:25:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:25:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:25:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Dec  2 06:25:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Dec  2 06:25:26 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Dec  2 06:25:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:25:27 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:25:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:25:27 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:25:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:25:27 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:25:27 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev fb0e1a42-d960-414a-bf1b-ba4a5dead9a6 does not exist
Dec  2 06:25:27 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 797d74e7-670f-4bd0-9f91-231265440467 does not exist
Dec  2 06:25:27 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 543982d4-c950-4bf1-82bd-62c51d8f688d does not exist
Dec  2 06:25:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:25:27 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:25:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:25:27 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:25:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:25:27 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:25:27 np0005542249 podman[281553]: 2025-12-02 11:25:27.918634636 +0000 UTC m=+0.049135038 container create cae5ef9c5966193ecdada8e3efa594b44234a6ec71c69c42d64f98addee9ce4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_varahamihira, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:25:27 np0005542249 systemd[1]: Started libpod-conmon-cae5ef9c5966193ecdada8e3efa594b44234a6ec71c69c42d64f98addee9ce4c.scope.
Dec  2 06:25:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Dec  2 06:25:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Dec  2 06:25:27 np0005542249 podman[281553]: 2025-12-02 11:25:27.895488912 +0000 UTC m=+0.025989324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:25:27 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Dec  2 06:25:28 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:25:28 np0005542249 podman[281553]: 2025-12-02 11:25:28.030336125 +0000 UTC m=+0.160836537 container init cae5ef9c5966193ecdada8e3efa594b44234a6ec71c69c42d64f98addee9ce4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  2 06:25:28 np0005542249 podman[281553]: 2025-12-02 11:25:28.040618553 +0000 UTC m=+0.171118955 container start cae5ef9c5966193ecdada8e3efa594b44234a6ec71c69c42d64f98addee9ce4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_varahamihira, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:25:28 np0005542249 podman[281553]: 2025-12-02 11:25:28.047026036 +0000 UTC m=+0.177526438 container attach cae5ef9c5966193ecdada8e3efa594b44234a6ec71c69c42d64f98addee9ce4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_varahamihira, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  2 06:25:28 np0005542249 magical_varahamihira[281570]: 167 167
Dec  2 06:25:28 np0005542249 systemd[1]: libpod-cae5ef9c5966193ecdada8e3efa594b44234a6ec71c69c42d64f98addee9ce4c.scope: Deactivated successfully.
Dec  2 06:25:28 np0005542249 podman[281553]: 2025-12-02 11:25:28.052590117 +0000 UTC m=+0.183090529 container died cae5ef9c5966193ecdada8e3efa594b44234a6ec71c69c42d64f98addee9ce4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  2 06:25:28 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1377: 321 pgs: 321 active+clean; 269 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 370 KiB/s wr, 62 op/s
Dec  2 06:25:28 np0005542249 systemd[1]: var-lib-containers-storage-overlay-9241a639b8e77b1edbbe6c6bcc68dbc1e7b90c81a8b7a7be3b9d80eaaf01c77a-merged.mount: Deactivated successfully.
Dec  2 06:25:28 np0005542249 podman[281553]: 2025-12-02 11:25:28.102309609 +0000 UTC m=+0.232809992 container remove cae5ef9c5966193ecdada8e3efa594b44234a6ec71c69c42d64f98addee9ce4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:25:28 np0005542249 systemd[1]: libpod-conmon-cae5ef9c5966193ecdada8e3efa594b44234a6ec71c69c42d64f98addee9ce4c.scope: Deactivated successfully.
Dec  2 06:25:28 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:25:28 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:25:28 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:25:28 np0005542249 podman[281594]: 2025-12-02 11:25:28.277740701 +0000 UTC m=+0.038480282 container create c01fda40efdf2fc65396f0db1d4118ce8a34e6c5f1b8864361baa921c9cf205d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:25:28 np0005542249 systemd[1]: Started libpod-conmon-c01fda40efdf2fc65396f0db1d4118ce8a34e6c5f1b8864361baa921c9cf205d.scope.
Dec  2 06:25:28 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:25:28 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d24936888ee8288e5e8c9c62908de24eb79ed35fa4d0f5113b1ddb27a756fe8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:25:28 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d24936888ee8288e5e8c9c62908de24eb79ed35fa4d0f5113b1ddb27a756fe8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:25:28 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d24936888ee8288e5e8c9c62908de24eb79ed35fa4d0f5113b1ddb27a756fe8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:25:28 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d24936888ee8288e5e8c9c62908de24eb79ed35fa4d0f5113b1ddb27a756fe8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:25:28 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d24936888ee8288e5e8c9c62908de24eb79ed35fa4d0f5113b1ddb27a756fe8d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:25:28 np0005542249 podman[281594]: 2025-12-02 11:25:28.260758542 +0000 UTC m=+0.021498153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:25:28 np0005542249 podman[281594]: 2025-12-02 11:25:28.357293721 +0000 UTC m=+0.118033322 container init c01fda40efdf2fc65396f0db1d4118ce8a34e6c5f1b8864361baa921c9cf205d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  2 06:25:28 np0005542249 podman[281594]: 2025-12-02 11:25:28.366300053 +0000 UTC m=+0.127039644 container start c01fda40efdf2fc65396f0db1d4118ce8a34e6c5f1b8864361baa921c9cf205d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  2 06:25:28 np0005542249 podman[281594]: 2025-12-02 11:25:28.36952982 +0000 UTC m=+0.130269411 container attach c01fda40efdf2fc65396f0db1d4118ce8a34e6c5f1b8864361baa921c9cf205d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lewin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:25:28 np0005542249 nova_compute[254900]: 2025-12-02 11:25:28.456 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:29 np0005542249 sleepy_lewin[281611]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:25:29 np0005542249 sleepy_lewin[281611]: --> relative data size: 1.0
Dec  2 06:25:29 np0005542249 sleepy_lewin[281611]: --> All data devices are unavailable
Dec  2 06:25:29 np0005542249 systemd[1]: libpod-c01fda40efdf2fc65396f0db1d4118ce8a34e6c5f1b8864361baa921c9cf205d.scope: Deactivated successfully.
Dec  2 06:25:29 np0005542249 systemd[1]: libpod-c01fda40efdf2fc65396f0db1d4118ce8a34e6c5f1b8864361baa921c9cf205d.scope: Consumed 1.104s CPU time.
Dec  2 06:25:29 np0005542249 podman[281594]: 2025-12-02 11:25:29.531636022 +0000 UTC m=+1.292375603 container died c01fda40efdf2fc65396f0db1d4118ce8a34e6c5f1b8864361baa921c9cf205d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Dec  2 06:25:29 np0005542249 nova_compute[254900]: 2025-12-02 11:25:29.544 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:29 np0005542249 systemd[1]: var-lib-containers-storage-overlay-d24936888ee8288e5e8c9c62908de24eb79ed35fa4d0f5113b1ddb27a756fe8d-merged.mount: Deactivated successfully.
Dec  2 06:25:29 np0005542249 podman[281594]: 2025-12-02 11:25:29.593501174 +0000 UTC m=+1.354240765 container remove c01fda40efdf2fc65396f0db1d4118ce8a34e6c5f1b8864361baa921c9cf205d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lewin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  2 06:25:29 np0005542249 systemd[1]: libpod-conmon-c01fda40efdf2fc65396f0db1d4118ce8a34e6c5f1b8864361baa921c9cf205d.scope: Deactivated successfully.
Dec  2 06:25:30 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1378: 321 pgs: 321 active+clean; 269 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 290 KiB/s wr, 53 op/s
Dec  2 06:25:30 np0005542249 podman[281796]: 2025-12-02 11:25:30.365395822 +0000 UTC m=+0.046463516 container create e13b3f189f6b7df5857731479672d18461738929b8ca5bdd9d32be1d1546be08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_brown, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:25:30 np0005542249 systemd[1]: Started libpod-conmon-e13b3f189f6b7df5857731479672d18461738929b8ca5bdd9d32be1d1546be08.scope.
Dec  2 06:25:30 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:25:30 np0005542249 podman[281796]: 2025-12-02 11:25:30.349640986 +0000 UTC m=+0.030708700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:25:30 np0005542249 podman[281796]: 2025-12-02 11:25:30.446424162 +0000 UTC m=+0.127491906 container init e13b3f189f6b7df5857731479672d18461738929b8ca5bdd9d32be1d1546be08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:25:30 np0005542249 podman[281796]: 2025-12-02 11:25:30.453817962 +0000 UTC m=+0.134885656 container start e13b3f189f6b7df5857731479672d18461738929b8ca5bdd9d32be1d1546be08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  2 06:25:30 np0005542249 podman[281796]: 2025-12-02 11:25:30.457171622 +0000 UTC m=+0.138239326 container attach e13b3f189f6b7df5857731479672d18461738929b8ca5bdd9d32be1d1546be08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  2 06:25:30 np0005542249 intelligent_brown[281812]: 167 167
Dec  2 06:25:30 np0005542249 systemd[1]: libpod-e13b3f189f6b7df5857731479672d18461738929b8ca5bdd9d32be1d1546be08.scope: Deactivated successfully.
Dec  2 06:25:30 np0005542249 podman[281796]: 2025-12-02 11:25:30.459900396 +0000 UTC m=+0.140968100 container died e13b3f189f6b7df5857731479672d18461738929b8ca5bdd9d32be1d1546be08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  2 06:25:30 np0005542249 systemd[1]: var-lib-containers-storage-overlay-04595d2fe9302c53e575d93d4794973d43d8ed477a2aaef8e794539e4b68d38e-merged.mount: Deactivated successfully.
Dec  2 06:25:30 np0005542249 podman[281796]: 2025-12-02 11:25:30.535927051 +0000 UTC m=+0.216994745 container remove e13b3f189f6b7df5857731479672d18461738929b8ca5bdd9d32be1d1546be08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 06:25:30 np0005542249 systemd[1]: libpod-conmon-e13b3f189f6b7df5857731479672d18461738929b8ca5bdd9d32be1d1546be08.scope: Deactivated successfully.
Dec  2 06:25:30 np0005542249 podman[281838]: 2025-12-02 11:25:30.767165839 +0000 UTC m=+0.067563477 container create 4bb40207921c11e908913ba8cced439c73227b7e5dfbe1e4d1f4e2c36ef2581c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  2 06:25:30 np0005542249 systemd[1]: Started libpod-conmon-4bb40207921c11e908913ba8cced439c73227b7e5dfbe1e4d1f4e2c36ef2581c.scope.
Dec  2 06:25:30 np0005542249 podman[281838]: 2025-12-02 11:25:30.741254978 +0000 UTC m=+0.041652616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:25:30 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:25:30 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69579e3ab711714636da0249933f53c1928fdc961149fbf30e22de710567f9f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:25:30 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69579e3ab711714636da0249933f53c1928fdc961149fbf30e22de710567f9f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:25:30 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69579e3ab711714636da0249933f53c1928fdc961149fbf30e22de710567f9f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:25:30 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69579e3ab711714636da0249933f53c1928fdc961149fbf30e22de710567f9f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:25:30 np0005542249 podman[281838]: 2025-12-02 11:25:30.873379809 +0000 UTC m=+0.173777397 container init 4bb40207921c11e908913ba8cced439c73227b7e5dfbe1e4d1f4e2c36ef2581c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  2 06:25:30 np0005542249 podman[281838]: 2025-12-02 11:25:30.880313796 +0000 UTC m=+0.180711394 container start 4bb40207921c11e908913ba8cced439c73227b7e5dfbe1e4d1f4e2c36ef2581c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  2 06:25:30 np0005542249 podman[281838]: 2025-12-02 11:25:30.883368789 +0000 UTC m=+0.183766387 container attach 4bb40207921c11e908913ba8cced439c73227b7e5dfbe1e4d1f4e2c36ef2581c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  2 06:25:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Dec  2 06:25:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Dec  2 06:25:31 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Dec  2 06:25:31 np0005542249 quirky_ride[281855]: {
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:    "0": [
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:        {
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "devices": [
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "/dev/loop3"
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            ],
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "lv_name": "ceph_lv0",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "lv_size": "21470642176",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "name": "ceph_lv0",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "tags": {
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.cluster_name": "ceph",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.crush_device_class": "",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.encrypted": "0",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.osd_id": "0",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.type": "block",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.vdo": "0"
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            },
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "type": "block",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "vg_name": "ceph_vg0"
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:        }
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:    ],
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:    "1": [
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:        {
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "devices": [
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "/dev/loop4"
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            ],
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "lv_name": "ceph_lv1",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "lv_size": "21470642176",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "name": "ceph_lv1",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "tags": {
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.cluster_name": "ceph",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.crush_device_class": "",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.encrypted": "0",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.osd_id": "1",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.type": "block",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.vdo": "0"
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            },
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "type": "block",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "vg_name": "ceph_vg1"
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:        }
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:    ],
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:    "2": [
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:        {
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "devices": [
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "/dev/loop5"
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            ],
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "lv_name": "ceph_lv2",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "lv_size": "21470642176",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "name": "ceph_lv2",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "tags": {
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.cluster_name": "ceph",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.crush_device_class": "",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.encrypted": "0",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.osd_id": "2",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.type": "block",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:                "ceph.vdo": "0"
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            },
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "type": "block",
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:            "vg_name": "ceph_vg2"
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:        }
Dec  2 06:25:31 np0005542249 quirky_ride[281855]:    ]
Dec  2 06:25:31 np0005542249 quirky_ride[281855]: }
Dec  2 06:25:31 np0005542249 systemd[1]: libpod-4bb40207921c11e908913ba8cced439c73227b7e5dfbe1e4d1f4e2c36ef2581c.scope: Deactivated successfully.
Dec  2 06:25:31 np0005542249 podman[281864]: 2025-12-02 11:25:31.728322271 +0000 UTC m=+0.023826385 container died 4bb40207921c11e908913ba8cced439c73227b7e5dfbe1e4d1f4e2c36ef2581c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:25:31 np0005542249 systemd[1]: var-lib-containers-storage-overlay-69579e3ab711714636da0249933f53c1928fdc961149fbf30e22de710567f9f2-merged.mount: Deactivated successfully.
Dec  2 06:25:31 np0005542249 podman[281864]: 2025-12-02 11:25:31.775630459 +0000 UTC m=+0.071134553 container remove 4bb40207921c11e908913ba8cced439c73227b7e5dfbe1e4d1f4e2c36ef2581c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:25:31 np0005542249 systemd[1]: libpod-conmon-4bb40207921c11e908913ba8cced439c73227b7e5dfbe1e4d1f4e2c36ef2581c.scope: Deactivated successfully.
Dec  2 06:25:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:25:32 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1380: 321 pgs: 321 active+clean; 269 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 411 KiB/s rd, 390 KiB/s wr, 61 op/s
Dec  2 06:25:32 np0005542249 podman[282022]: 2025-12-02 11:25:32.381721547 +0000 UTC m=+0.041640507 container create 61e6714922e145fb40100efe47d88ab5535502cdbfb0e638b6e39f1c3dd08777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:25:32 np0005542249 systemd[1]: Started libpod-conmon-61e6714922e145fb40100efe47d88ab5535502cdbfb0e638b6e39f1c3dd08777.scope.
Dec  2 06:25:32 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:25:32 np0005542249 podman[282022]: 2025-12-02 11:25:32.362582869 +0000 UTC m=+0.022501889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:25:32 np0005542249 podman[282022]: 2025-12-02 11:25:32.458531452 +0000 UTC m=+0.118450432 container init 61e6714922e145fb40100efe47d88ab5535502cdbfb0e638b6e39f1c3dd08777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_nightingale, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Dec  2 06:25:32 np0005542249 podman[282022]: 2025-12-02 11:25:32.465326716 +0000 UTC m=+0.125245676 container start 61e6714922e145fb40100efe47d88ab5535502cdbfb0e638b6e39f1c3dd08777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_nightingale, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:25:32 np0005542249 podman[282022]: 2025-12-02 11:25:32.468388269 +0000 UTC m=+0.128307219 container attach 61e6714922e145fb40100efe47d88ab5535502cdbfb0e638b6e39f1c3dd08777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_nightingale, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:25:32 np0005542249 infallible_nightingale[282038]: 167 167
Dec  2 06:25:32 np0005542249 systemd[1]: libpod-61e6714922e145fb40100efe47d88ab5535502cdbfb0e638b6e39f1c3dd08777.scope: Deactivated successfully.
Dec  2 06:25:32 np0005542249 podman[282022]: 2025-12-02 11:25:32.470581417 +0000 UTC m=+0.130500387 container died 61e6714922e145fb40100efe47d88ab5535502cdbfb0e638b6e39f1c3dd08777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_nightingale, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  2 06:25:32 np0005542249 systemd[1]: var-lib-containers-storage-overlay-9e0c86ffccf7f00c2f1318d007bdb04f52f5806c6db201c20d4560bde3f733cf-merged.mount: Deactivated successfully.
Dec  2 06:25:32 np0005542249 podman[282022]: 2025-12-02 11:25:32.505914133 +0000 UTC m=+0.165833103 container remove 61e6714922e145fb40100efe47d88ab5535502cdbfb0e638b6e39f1c3dd08777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_nightingale, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:25:32 np0005542249 systemd[1]: libpod-conmon-61e6714922e145fb40100efe47d88ab5535502cdbfb0e638b6e39f1c3dd08777.scope: Deactivated successfully.
Dec  2 06:25:32 np0005542249 podman[282062]: 2025-12-02 11:25:32.674864027 +0000 UTC m=+0.022265762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:25:33 np0005542249 podman[282062]: 2025-12-02 11:25:33.373940377 +0000 UTC m=+0.721342082 container create edb386f73f452f387a5ef4b37992bd3362c73c687afba7c0d563ca2d85a1ebb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_yonath, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:25:33 np0005542249 systemd[1]: Started libpod-conmon-edb386f73f452f387a5ef4b37992bd3362c73c687afba7c0d563ca2d85a1ebb4.scope.
Dec  2 06:25:33 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:25:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62b6fb66b864d3fc753331992beda13f2d0a5715b046781e7324e28941141487/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:25:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62b6fb66b864d3fc753331992beda13f2d0a5715b046781e7324e28941141487/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:25:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62b6fb66b864d3fc753331992beda13f2d0a5715b046781e7324e28941141487/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:25:33 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62b6fb66b864d3fc753331992beda13f2d0a5715b046781e7324e28941141487/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:25:33 np0005542249 nova_compute[254900]: 2025-12-02 11:25:33.458 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:33 np0005542249 podman[282062]: 2025-12-02 11:25:33.467062134 +0000 UTC m=+0.814463849 container init edb386f73f452f387a5ef4b37992bd3362c73c687afba7c0d563ca2d85a1ebb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec  2 06:25:33 np0005542249 podman[282062]: 2025-12-02 11:25:33.478913373 +0000 UTC m=+0.826315088 container start edb386f73f452f387a5ef4b37992bd3362c73c687afba7c0d563ca2d85a1ebb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  2 06:25:33 np0005542249 podman[282062]: 2025-12-02 11:25:33.482616004 +0000 UTC m=+0.830017719 container attach edb386f73f452f387a5ef4b37992bd3362c73c687afba7c0d563ca2d85a1ebb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_yonath, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  2 06:25:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:25:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/900417691' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:25:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:25:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/900417691' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:25:34 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1381: 321 pgs: 321 active+clean; 269 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 350 KiB/s rd, 330 KiB/s wr, 58 op/s
Dec  2 06:25:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Dec  2 06:25:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Dec  2 06:25:34 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Dec  2 06:25:34 np0005542249 condescending_yonath[282079]: {
Dec  2 06:25:34 np0005542249 condescending_yonath[282079]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:25:34 np0005542249 condescending_yonath[282079]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:25:34 np0005542249 condescending_yonath[282079]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:25:34 np0005542249 condescending_yonath[282079]:        "osd_id": 0,
Dec  2 06:25:34 np0005542249 condescending_yonath[282079]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:25:34 np0005542249 condescending_yonath[282079]:        "type": "bluestore"
Dec  2 06:25:34 np0005542249 condescending_yonath[282079]:    },
Dec  2 06:25:34 np0005542249 condescending_yonath[282079]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:25:34 np0005542249 condescending_yonath[282079]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:25:34 np0005542249 condescending_yonath[282079]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:25:34 np0005542249 condescending_yonath[282079]:        "osd_id": 2,
Dec  2 06:25:34 np0005542249 condescending_yonath[282079]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:25:34 np0005542249 condescending_yonath[282079]:        "type": "bluestore"
Dec  2 06:25:34 np0005542249 condescending_yonath[282079]:    },
Dec  2 06:25:34 np0005542249 condescending_yonath[282079]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:25:34 np0005542249 condescending_yonath[282079]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:25:34 np0005542249 condescending_yonath[282079]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:25:34 np0005542249 condescending_yonath[282079]:        "osd_id": 1,
Dec  2 06:25:34 np0005542249 condescending_yonath[282079]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:25:34 np0005542249 condescending_yonath[282079]:        "type": "bluestore"
Dec  2 06:25:34 np0005542249 condescending_yonath[282079]:    }
Dec  2 06:25:34 np0005542249 condescending_yonath[282079]: }
Dec  2 06:25:34 np0005542249 nova_compute[254900]: 2025-12-02 11:25:34.547 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:34 np0005542249 systemd[1]: libpod-edb386f73f452f387a5ef4b37992bd3362c73c687afba7c0d563ca2d85a1ebb4.scope: Deactivated successfully.
Dec  2 06:25:34 np0005542249 systemd[1]: libpod-edb386f73f452f387a5ef4b37992bd3362c73c687afba7c0d563ca2d85a1ebb4.scope: Consumed 1.079s CPU time.
Dec  2 06:25:34 np0005542249 podman[282112]: 2025-12-02 11:25:34.616414521 +0000 UTC m=+0.032763037 container died edb386f73f452f387a5ef4b37992bd3362c73c687afba7c0d563ca2d85a1ebb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:25:34 np0005542249 systemd[1]: var-lib-containers-storage-overlay-62b6fb66b864d3fc753331992beda13f2d0a5715b046781e7324e28941141487-merged.mount: Deactivated successfully.
Dec  2 06:25:34 np0005542249 podman[282112]: 2025-12-02 11:25:34.679390363 +0000 UTC m=+0.095738849 container remove edb386f73f452f387a5ef4b37992bd3362c73c687afba7c0d563ca2d85a1ebb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_yonath, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  2 06:25:34 np0005542249 systemd[1]: libpod-conmon-edb386f73f452f387a5ef4b37992bd3362c73c687afba7c0d563ca2d85a1ebb4.scope: Deactivated successfully.
Dec  2 06:25:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:25:34 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:25:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:25:34 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:25:34 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 7d253885-f6d2-4708-90dd-9a794674f32c does not exist
Dec  2 06:25:34 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 77f7f742-87dc-498c-a09e-dbcd3ce8adbe does not exist
Dec  2 06:25:34 np0005542249 nova_compute[254900]: 2025-12-02 11:25:34.771 254904 DEBUG oslo_concurrency.lockutils [None req-09bb2198-5bf5-4625-be59-13fcd38dbffa b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Acquiring lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:25:34 np0005542249 nova_compute[254900]: 2025-12-02 11:25:34.771 254904 DEBUG oslo_concurrency.lockutils [None req-09bb2198-5bf5-4625-be59-13fcd38dbffa b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:25:34 np0005542249 nova_compute[254900]: 2025-12-02 11:25:34.793 254904 DEBUG nova.objects.instance [None req-09bb2198-5bf5-4625-be59-13fcd38dbffa b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lazy-loading 'flavor' on Instance uuid b5c1d606-86ed-453a-b2e0-ee74a8e24c46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:25:34 np0005542249 nova_compute[254900]: 2025-12-02 11:25:34.837 254904 DEBUG oslo_concurrency.lockutils [None req-09bb2198-5bf5-4625-be59-13fcd38dbffa b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:25:35 np0005542249 nova_compute[254900]: 2025-12-02 11:25:35.053 254904 DEBUG oslo_concurrency.lockutils [None req-09bb2198-5bf5-4625-be59-13fcd38dbffa b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Acquiring lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:25:35 np0005542249 nova_compute[254900]: 2025-12-02 11:25:35.053 254904 DEBUG oslo_concurrency.lockutils [None req-09bb2198-5bf5-4625-be59-13fcd38dbffa b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:25:35 np0005542249 nova_compute[254900]: 2025-12-02 11:25:35.053 254904 INFO nova.compute.manager [None req-09bb2198-5bf5-4625-be59-13fcd38dbffa b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Attaching volume cc8d0e6c-9ac5-46db-9b9e-cecfd1375b6e to /dev/vdb#033[00m
Dec  2 06:25:35 np0005542249 nova_compute[254900]: 2025-12-02 11:25:35.214 254904 DEBUG os_brick.utils [None req-09bb2198-5bf5-4625-be59-13fcd38dbffa b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:25:35 np0005542249 nova_compute[254900]: 2025-12-02 11:25:35.216 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:25:35 np0005542249 nova_compute[254900]: 2025-12-02 11:25:35.230 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:25:35 np0005542249 nova_compute[254900]: 2025-12-02 11:25:35.231 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[e728f896-cc38-4595-9600-eaa6cd9763c4]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:25:35 np0005542249 nova_compute[254900]: 2025-12-02 11:25:35.232 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:25:35 np0005542249 nova_compute[254900]: 2025-12-02 11:25:35.243 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:25:35 np0005542249 nova_compute[254900]: 2025-12-02 11:25:35.244 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[066fd64d-171d-4015-a905-6e1d6169fdc1]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:25:35 np0005542249 nova_compute[254900]: 2025-12-02 11:25:35.245 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:25:35 np0005542249 nova_compute[254900]: 2025-12-02 11:25:35.256 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:25:35 np0005542249 nova_compute[254900]: 2025-12-02 11:25:35.256 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[1e7dc12b-1843-4bca-8e5d-b75ac65dd07c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:25:35 np0005542249 nova_compute[254900]: 2025-12-02 11:25:35.257 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[177f8b8b-c3da-411e-b20a-91dd2ed1e79d]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:25:35 np0005542249 nova_compute[254900]: 2025-12-02 11:25:35.258 254904 DEBUG oslo_concurrency.processutils [None req-09bb2198-5bf5-4625-be59-13fcd38dbffa b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:25:35 np0005542249 nova_compute[254900]: 2025-12-02 11:25:35.284 254904 DEBUG oslo_concurrency.processutils [None req-09bb2198-5bf5-4625-be59-13fcd38dbffa b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:25:35 np0005542249 nova_compute[254900]: 2025-12-02 11:25:35.287 254904 DEBUG os_brick.initiator.connectors.lightos [None req-09bb2198-5bf5-4625-be59-13fcd38dbffa b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:25:35 np0005542249 nova_compute[254900]: 2025-12-02 11:25:35.287 254904 DEBUG os_brick.initiator.connectors.lightos [None req-09bb2198-5bf5-4625-be59-13fcd38dbffa b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:25:35 np0005542249 nova_compute[254900]: 2025-12-02 11:25:35.288 254904 DEBUG os_brick.initiator.connectors.lightos [None req-09bb2198-5bf5-4625-be59-13fcd38dbffa b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:25:35 np0005542249 nova_compute[254900]: 2025-12-02 11:25:35.288 254904 DEBUG os_brick.utils [None req-09bb2198-5bf5-4625-be59-13fcd38dbffa b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] <== get_connector_properties: return (72ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:25:35 np0005542249 nova_compute[254900]: 2025-12-02 11:25:35.288 254904 DEBUG nova.virt.block_device [None req-09bb2198-5bf5-4625-be59-13fcd38dbffa b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Updating existing volume attachment record: b0002e21-b523-44dd-9c28-f65e382ed05f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:25:35 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:25:35 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:25:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:25:35 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3639693991' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:25:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:25:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:25:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:25:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:25:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009013346926483037 of space, bias 1.0, pg target 0.2704004077944911 quantized to 32 (current 32)
Dec  2 06:25:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:25:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003800180699355343 of space, bias 1.0, pg target 0.11400542098066029 quantized to 32 (current 32)
Dec  2 06:25:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:25:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  2 06:25:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:25:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014244318551800482 of space, bias 1.0, pg target 0.42732955655401444 quantized to 32 (current 32)
Dec  2 06:25:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:25:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:25:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:25:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:25:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:25:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:25:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:25:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:25:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:25:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:25:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:25:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:25:36 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1383: 321 pgs: 321 active+clean; 269 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 348 KiB/s rd, 18 KiB/s wr, 96 op/s
Dec  2 06:25:36 np0005542249 nova_compute[254900]: 2025-12-02 11:25:36.094 254904 DEBUG nova.objects.instance [None req-09bb2198-5bf5-4625-be59-13fcd38dbffa b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lazy-loading 'flavor' on Instance uuid b5c1d606-86ed-453a-b2e0-ee74a8e24c46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:25:36 np0005542249 nova_compute[254900]: 2025-12-02 11:25:36.117 254904 DEBUG nova.virt.libvirt.driver [None req-09bb2198-5bf5-4625-be59-13fcd38dbffa b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Attempting to attach volume cc8d0e6c-9ac5-46db-9b9e-cecfd1375b6e with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Dec  2 06:25:36 np0005542249 nova_compute[254900]: 2025-12-02 11:25:36.121 254904 DEBUG nova.virt.libvirt.guest [None req-09bb2198-5bf5-4625-be59-13fcd38dbffa b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] attach device xml: <disk type="network" device="disk">
Dec  2 06:25:36 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:25:36 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-cc8d0e6c-9ac5-46db-9b9e-cecfd1375b6e">
Dec  2 06:25:36 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:25:36 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:25:36 np0005542249 nova_compute[254900]:  <auth username="openstack">
Dec  2 06:25:36 np0005542249 nova_compute[254900]:    <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:25:36 np0005542249 nova_compute[254900]:  </auth>
Dec  2 06:25:36 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:25:36 np0005542249 nova_compute[254900]:  <serial>cc8d0e6c-9ac5-46db-9b9e-cecfd1375b6e</serial>
Dec  2 06:25:36 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:25:36 np0005542249 nova_compute[254900]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  2 06:25:36 np0005542249 nova_compute[254900]: 2025-12-02 11:25:36.259 254904 DEBUG nova.virt.libvirt.driver [None req-09bb2198-5bf5-4625-be59-13fcd38dbffa b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:25:36 np0005542249 nova_compute[254900]: 2025-12-02 11:25:36.260 254904 DEBUG nova.virt.libvirt.driver [None req-09bb2198-5bf5-4625-be59-13fcd38dbffa b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:25:36 np0005542249 nova_compute[254900]: 2025-12-02 11:25:36.261 254904 DEBUG nova.virt.libvirt.driver [None req-09bb2198-5bf5-4625-be59-13fcd38dbffa b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:25:36 np0005542249 nova_compute[254900]: 2025-12-02 11:25:36.261 254904 DEBUG nova.virt.libvirt.driver [None req-09bb2198-5bf5-4625-be59-13fcd38dbffa b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] No VIF found with MAC fa:16:3e:12:29:28, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:25:36 np0005542249 nova_compute[254900]: 2025-12-02 11:25:36.563 254904 DEBUG oslo_concurrency.lockutils [None req-09bb2198-5bf5-4625-be59-13fcd38dbffa b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:25:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:25:38 np0005542249 podman[282203]: 2025-12-02 11:25:38.045314154 +0000 UTC m=+0.103605900 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  2 06:25:38 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1384: 321 pgs: 321 active+clean; 269 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 401 KiB/s rd, 14 KiB/s wr, 163 op/s
Dec  2 06:25:38 np0005542249 nova_compute[254900]: 2025-12-02 11:25:38.394 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:25:38 np0005542249 nova_compute[254900]: 2025-12-02 11:25:38.461 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Dec  2 06:25:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Dec  2 06:25:38 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Dec  2 06:25:39 np0005542249 nova_compute[254900]: 2025-12-02 11:25:39.194 254904 DEBUG oslo_concurrency.lockutils [None req-7b6a9ef7-1844-4ec8-9d20-f40ea742598d b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Acquiring lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:25:39 np0005542249 nova_compute[254900]: 2025-12-02 11:25:39.194 254904 DEBUG oslo_concurrency.lockutils [None req-7b6a9ef7-1844-4ec8-9d20-f40ea742598d b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:25:39 np0005542249 nova_compute[254900]: 2025-12-02 11:25:39.211 254904 INFO nova.compute.manager [None req-7b6a9ef7-1844-4ec8-9d20-f40ea742598d b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Detaching volume cc8d0e6c-9ac5-46db-9b9e-cecfd1375b6e#033[00m
Dec  2 06:25:39 np0005542249 nova_compute[254900]: 2025-12-02 11:25:39.360 254904 INFO nova.virt.block_device [None req-7b6a9ef7-1844-4ec8-9d20-f40ea742598d b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Attempting to driver detach volume cc8d0e6c-9ac5-46db-9b9e-cecfd1375b6e from mountpoint /dev/vdb#033[00m
Dec  2 06:25:39 np0005542249 nova_compute[254900]: 2025-12-02 11:25:39.371 254904 DEBUG nova.virt.libvirt.driver [None req-7b6a9ef7-1844-4ec8-9d20-f40ea742598d b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Attempting to detach device vdb from instance b5c1d606-86ed-453a-b2e0-ee74a8e24c46 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  2 06:25:39 np0005542249 nova_compute[254900]: 2025-12-02 11:25:39.371 254904 DEBUG nova.virt.libvirt.guest [None req-7b6a9ef7-1844-4ec8-9d20-f40ea742598d b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] detach device xml: <disk type="network" device="disk">
Dec  2 06:25:39 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:25:39 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-cc8d0e6c-9ac5-46db-9b9e-cecfd1375b6e">
Dec  2 06:25:39 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:25:39 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:25:39 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:25:39 np0005542249 nova_compute[254900]:  <serial>cc8d0e6c-9ac5-46db-9b9e-cecfd1375b6e</serial>
Dec  2 06:25:39 np0005542249 nova_compute[254900]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  2 06:25:39 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:25:39 np0005542249 nova_compute[254900]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  2 06:25:39 np0005542249 nova_compute[254900]: 2025-12-02 11:25:39.383 254904 INFO nova.virt.libvirt.driver [None req-7b6a9ef7-1844-4ec8-9d20-f40ea742598d b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Successfully detached device vdb from instance b5c1d606-86ed-453a-b2e0-ee74a8e24c46 from the persistent domain config.#033[00m
Dec  2 06:25:39 np0005542249 nova_compute[254900]: 2025-12-02 11:25:39.384 254904 DEBUG nova.virt.libvirt.driver [None req-7b6a9ef7-1844-4ec8-9d20-f40ea742598d b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance b5c1d606-86ed-453a-b2e0-ee74a8e24c46 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec  2 06:25:39 np0005542249 nova_compute[254900]: 2025-12-02 11:25:39.384 254904 DEBUG nova.virt.libvirt.guest [None req-7b6a9ef7-1844-4ec8-9d20-f40ea742598d b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] detach device xml: <disk type="network" device="disk">
Dec  2 06:25:39 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:25:39 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-cc8d0e6c-9ac5-46db-9b9e-cecfd1375b6e">
Dec  2 06:25:39 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:25:39 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:25:39 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:25:39 np0005542249 nova_compute[254900]:  <serial>cc8d0e6c-9ac5-46db-9b9e-cecfd1375b6e</serial>
Dec  2 06:25:39 np0005542249 nova_compute[254900]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  2 06:25:39 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:25:39 np0005542249 nova_compute[254900]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  2 06:25:39 np0005542249 nova_compute[254900]: 2025-12-02 11:25:39.512 254904 DEBUG nova.virt.libvirt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Received event <DeviceRemovedEvent: 1764674739.5121193, b5c1d606-86ed-453a-b2e0-ee74a8e24c46 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec  2 06:25:39 np0005542249 nova_compute[254900]: 2025-12-02 11:25:39.514 254904 DEBUG nova.virt.libvirt.driver [None req-7b6a9ef7-1844-4ec8-9d20-f40ea742598d b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance b5c1d606-86ed-453a-b2e0-ee74a8e24c46 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec  2 06:25:39 np0005542249 nova_compute[254900]: 2025-12-02 11:25:39.516 254904 INFO nova.virt.libvirt.driver [None req-7b6a9ef7-1844-4ec8-9d20-f40ea742598d b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Successfully detached device vdb from instance b5c1d606-86ed-453a-b2e0-ee74a8e24c46 from the live domain config.#033[00m
Dec  2 06:25:39 np0005542249 nova_compute[254900]: 2025-12-02 11:25:39.549 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:39 np0005542249 nova_compute[254900]: 2025-12-02 11:25:39.701 254904 DEBUG nova.objects.instance [None req-7b6a9ef7-1844-4ec8-9d20-f40ea742598d b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lazy-loading 'flavor' on Instance uuid b5c1d606-86ed-453a-b2e0-ee74a8e24c46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:25:39 np0005542249 nova_compute[254900]: 2025-12-02 11:25:39.735 254904 DEBUG oslo_concurrency.lockutils [None req-7b6a9ef7-1844-4ec8-9d20-f40ea742598d b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:25:40 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1386: 321 pgs: 321 active+clean; 269 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 382 KiB/s rd, 12 KiB/s wr, 163 op/s
Dec  2 06:25:40 np0005542249 nova_compute[254900]: 2025-12-02 11:25:40.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:25:40 np0005542249 nova_compute[254900]: 2025-12-02 11:25:40.382 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 06:25:40 np0005542249 nova_compute[254900]: 2025-12-02 11:25:40.382 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 06:25:40 np0005542249 nova_compute[254900]: 2025-12-02 11:25:40.605 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "refresh_cache-7d55326f-52eb-4f7f-a9cd-05282ca6ca20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:25:40 np0005542249 nova_compute[254900]: 2025-12-02 11:25:40.605 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquired lock "refresh_cache-7d55326f-52eb-4f7f-a9cd-05282ca6ca20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:25:40 np0005542249 nova_compute[254900]: 2025-12-02 11:25:40.606 254904 DEBUG nova.network.neutron [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 06:25:40 np0005542249 nova_compute[254900]: 2025-12-02 11:25:40.606 254904 DEBUG nova.objects.instance [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7d55326f-52eb-4f7f-a9cd-05282ca6ca20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.035 254904 DEBUG nova.compute.manager [req-70584a53-5b8d-428c-9b89-1ab2a1e089f3 req-73743f05-9a0e-46ae-b7f3-7c94b9251de4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Received event network-changed-a908869b-6b7e-4872-bd60-5397511b102a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.035 254904 DEBUG nova.compute.manager [req-70584a53-5b8d-428c-9b89-1ab2a1e089f3 req-73743f05-9a0e-46ae-b7f3-7c94b9251de4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Refreshing instance network info cache due to event network-changed-a908869b-6b7e-4872-bd60-5397511b102a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.068 254904 DEBUG oslo_concurrency.lockutils [req-70584a53-5b8d-428c-9b89-1ab2a1e089f3 req-73743f05-9a0e-46ae-b7f3-7c94b9251de4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-b5c1d606-86ed-453a-b2e0-ee74a8e24c46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.068 254904 DEBUG oslo_concurrency.lockutils [req-70584a53-5b8d-428c-9b89-1ab2a1e089f3 req-73743f05-9a0e-46ae-b7f3-7c94b9251de4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-b5c1d606-86ed-453a-b2e0-ee74a8e24c46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.068 254904 DEBUG nova.network.neutron [req-70584a53-5b8d-428c-9b89-1ab2a1e089f3 req-73743f05-9a0e-46ae-b7f3-7c94b9251de4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Refreshing network info cache for port a908869b-6b7e-4872-bd60-5397511b102a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.132 254904 DEBUG oslo_concurrency.lockutils [None req-f09d7d04-78fe-49d4-92af-bcf6e6a88e3f b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Acquiring lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.133 254904 DEBUG oslo_concurrency.lockutils [None req-f09d7d04-78fe-49d4-92af-bcf6e6a88e3f b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.133 254904 DEBUG oslo_concurrency.lockutils [None req-f09d7d04-78fe-49d4-92af-bcf6e6a88e3f b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Acquiring lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.133 254904 DEBUG oslo_concurrency.lockutils [None req-f09d7d04-78fe-49d4-92af-bcf6e6a88e3f b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.134 254904 DEBUG oslo_concurrency.lockutils [None req-f09d7d04-78fe-49d4-92af-bcf6e6a88e3f b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.135 254904 INFO nova.compute.manager [None req-f09d7d04-78fe-49d4-92af-bcf6e6a88e3f b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Terminating instance#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.136 254904 DEBUG nova.compute.manager [None req-f09d7d04-78fe-49d4-92af-bcf6e6a88e3f b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:25:41 np0005542249 kernel: tapa908869b-6b (unregistering): left promiscuous mode
Dec  2 06:25:41 np0005542249 NetworkManager[48987]: <info>  [1764674741.1825] device (tapa908869b-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.198 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:41 np0005542249 ovn_controller[153849]: 2025-12-02T11:25:41Z|00155|binding|INFO|Releasing lport a908869b-6b7e-4872-bd60-5397511b102a from this chassis (sb_readonly=0)
Dec  2 06:25:41 np0005542249 ovn_controller[153849]: 2025-12-02T11:25:41Z|00156|binding|INFO|Setting lport a908869b-6b7e-4872-bd60-5397511b102a down in Southbound
Dec  2 06:25:41 np0005542249 ovn_controller[153849]: 2025-12-02T11:25:41Z|00157|binding|INFO|Removing iface tapa908869b-6b ovn-installed in OVS
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.200 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:41.211 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:29:28 10.100.0.6'], port_security=['fa:16:3e:12:29:28 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b5c1d606-86ed-453a-b2e0-ee74a8e24c46', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-202d4c1b-b1c2-4564-b679-1d789b189a11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bda44a38b8b4f31a8b6e8f6f0548898', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5dfe3f56-bbf4-4aa1-aeeb-3cc512936610', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64d73c60-6b3f-4965-8ef1-0cdac8312377, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=a908869b-6b7e-4872-bd60-5397511b102a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:25:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:41.213 163757 INFO neutron.agent.ovn.metadata.agent [-] Port a908869b-6b7e-4872-bd60-5397511b102a in datapath 202d4c1b-b1c2-4564-b679-1d789b189a11 unbound from our chassis#033[00m
Dec  2 06:25:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:41.215 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 202d4c1b-b1c2-4564-b679-1d789b189a11#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.215 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:25:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4173520323' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:25:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:41.236 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[77a298a9-f38e-4cfc-ae42-fc0ae0fd63ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:25:41 np0005542249 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Dec  2 06:25:41 np0005542249 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Consumed 17.614s CPU time.
Dec  2 06:25:41 np0005542249 systemd-machined[216222]: Machine qemu-15-instance-0000000f terminated.
Dec  2 06:25:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:41.267 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[d9693d32-b6bb-4d68-a661-1da6bc7c329c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:25:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:41.271 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[f202c464-1975-490c-aaa1-903127743f70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:25:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:41.305 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[e2d36c56-ee68-4534-b494-1563ec2f8467]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:25:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:41.323 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[4ea62c84-cff8-4a41-a3c1-c4a5a066c241]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap202d4c1b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:0f:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 484032, 'reachable_time': 17542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282236, 'error': None, 'target': 'ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:25:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:41.353 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[b34ffda1-94bf-44ae-a895-7354cbd5495a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap202d4c1b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 484050, 'tstamp': 484050}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282237, 'error': None, 'target': 'ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap202d4c1b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 484054, 'tstamp': 484054}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282237, 'error': None, 'target': 'ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:25:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:41.355 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap202d4c1b-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.356 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.366 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:41.367 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap202d4c1b-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:25:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:41.368 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:25:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:41.368 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap202d4c1b-b0, col_values=(('external_ids', {'iface-id': '3e075ddd-b183-4a84-8a21-ad15a1122c7d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:25:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:41.368 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.374 254904 INFO nova.virt.libvirt.driver [-] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Instance destroyed successfully.#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.375 254904 DEBUG nova.objects.instance [None req-f09d7d04-78fe-49d4-92af-bcf6e6a88e3f b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lazy-loading 'resources' on Instance uuid b5c1d606-86ed-453a-b2e0-ee74a8e24c46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.412 254904 DEBUG nova.virt.libvirt.vif [None req-f09d7d04-78fe-49d4-92af-bcf6e6a88e3f b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:24:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-867719560',display_name='tempest-TestStampPattern-server-867719560',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-867719560',id=15,image_ref='673eea99-788c-44cc-a8b0-716ae3b6bc5c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK61TL5woGKETyNF39DpzDOQY/3175WIhzETkz8a48X3DCBshG53o26Fmw3NbtfN3xWBBqbdYLP1coPVuKFyoXP+AsOB02VoMViHoPOBDDnBHEpudY8V6Kx5OUC4H683Ng==',key_name='tempest-TestStampPattern-1085425870',keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:24:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8bda44a38b8b4f31a8b6e8f6f0548898',ramdisk_id='',reservation_id='r-uc8s2t0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='7d55326f-52eb-4f7f-a9cd-05282ca6ca20',image_min_disk='1',image_min_ram='0',image_owner_id='8bda44a38b8b4f31a8b6e8f6f0548898',image_owner_project_name='tempest-TestStampPattern-2114823383',image_owner_user_name='tempest-TestStampPattern-2114823383-project-member',image_user_id='b3ecaaf4f0044a58b99879bf1c55b18e',owner_project_name='tempest-TestStampPattern-2114823383',owner_user_name='tempest-TestStampPattern-2114823383-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:24:58Z,user_data=None,user_id='b3ecaaf4f0044a58b99879bf1c55b18e',uuid=b5c1d606-86ed-453a-b2e0-ee74a8e24c46,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state
='active') vif={"id": "a908869b-6b7e-4872-bd60-5397511b102a", "address": "fa:16:3e:12:29:28", "network": {"id": "202d4c1b-b1c2-4564-b679-1d789b189a11", "bridge": "br-int", "label": "tempest-TestStampPattern-951670540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bda44a38b8b4f31a8b6e8f6f0548898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa908869b-6b", "ovs_interfaceid": "a908869b-6b7e-4872-bd60-5397511b102a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.413 254904 DEBUG nova.network.os_vif_util [None req-f09d7d04-78fe-49d4-92af-bcf6e6a88e3f b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Converting VIF {"id": "a908869b-6b7e-4872-bd60-5397511b102a", "address": "fa:16:3e:12:29:28", "network": {"id": "202d4c1b-b1c2-4564-b679-1d789b189a11", "bridge": "br-int", "label": "tempest-TestStampPattern-951670540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bda44a38b8b4f31a8b6e8f6f0548898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa908869b-6b", "ovs_interfaceid": "a908869b-6b7e-4872-bd60-5397511b102a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.414 254904 DEBUG nova.network.os_vif_util [None req-f09d7d04-78fe-49d4-92af-bcf6e6a88e3f b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:12:29:28,bridge_name='br-int',has_traffic_filtering=True,id=a908869b-6b7e-4872-bd60-5397511b102a,network=Network(202d4c1b-b1c2-4564-b679-1d789b189a11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa908869b-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.414 254904 DEBUG os_vif [None req-f09d7d04-78fe-49d4-92af-bcf6e6a88e3f b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:29:28,bridge_name='br-int',has_traffic_filtering=True,id=a908869b-6b7e-4872-bd60-5397511b102a,network=Network(202d4c1b-b1c2-4564-b679-1d789b189a11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa908869b-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.416 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.416 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa908869b-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.418 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.419 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.421 254904 INFO os_vif [None req-f09d7d04-78fe-49d4-92af-bcf6e6a88e3f b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:29:28,bridge_name='br-int',has_traffic_filtering=True,id=a908869b-6b7e-4872-bd60-5397511b102a,network=Network(202d4c1b-b1c2-4564-b679-1d789b189a11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa908869b-6b')#033[00m
Dec  2 06:25:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Dec  2 06:25:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Dec  2 06:25:41 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.750 254904 INFO nova.virt.libvirt.driver [None req-f09d7d04-78fe-49d4-92af-bcf6e6a88e3f b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Deleting instance files /var/lib/nova/instances/b5c1d606-86ed-453a-b2e0-ee74a8e24c46_del#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.751 254904 INFO nova.virt.libvirt.driver [None req-f09d7d04-78fe-49d4-92af-bcf6e6a88e3f b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Deletion of /var/lib/nova/instances/b5c1d606-86ed-453a-b2e0-ee74a8e24c46_del complete#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.792 254904 INFO nova.compute.manager [None req-f09d7d04-78fe-49d4-92af-bcf6e6a88e3f b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.792 254904 DEBUG oslo.service.loopingcall [None req-f09d7d04-78fe-49d4-92af-bcf6e6a88e3f b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.793 254904 DEBUG nova.compute.manager [-] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:25:41 np0005542249 nova_compute[254900]: 2025-12-02 11:25:41.793 254904 DEBUG nova.network.neutron [-] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:25:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:25:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Dec  2 06:25:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Dec  2 06:25:41 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Dec  2 06:25:42 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1389: 321 pgs: 321 active+clean; 266 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 556 KiB/s rd, 11 KiB/s wr, 216 op/s
Dec  2 06:25:42 np0005542249 nova_compute[254900]: 2025-12-02 11:25:42.573 254904 DEBUG nova.network.neutron [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Updating instance_info_cache with network_info: [{"id": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "address": "fa:16:3e:02:ef:76", "network": {"id": "202d4c1b-b1c2-4564-b679-1d789b189a11", "bridge": "br-int", "label": "tempest-TestStampPattern-951670540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bda44a38b8b4f31a8b6e8f6f0548898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6b169f0-73", "ovs_interfaceid": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:25:42 np0005542249 nova_compute[254900]: 2025-12-02 11:25:42.588 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Releasing lock "refresh_cache-7d55326f-52eb-4f7f-a9cd-05282ca6ca20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:25:42 np0005542249 nova_compute[254900]: 2025-12-02 11:25:42.588 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 06:25:42 np0005542249 nova_compute[254900]: 2025-12-02 11:25:42.589 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:25:42 np0005542249 nova_compute[254900]: 2025-12-02 11:25:42.590 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:25:42 np0005542249 nova_compute[254900]: 2025-12-02 11:25:42.591 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 06:25:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:25:42 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3004168982' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:25:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:25:42 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3004168982' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.018 254904 DEBUG nova.network.neutron [-] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.047 254904 INFO nova.compute.manager [-] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Took 1.25 seconds to deallocate network for instance.#033[00m
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.105 254904 DEBUG oslo_concurrency.lockutils [None req-f09d7d04-78fe-49d4-92af-bcf6e6a88e3f b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.106 254904 DEBUG oslo_concurrency.lockutils [None req-f09d7d04-78fe-49d4-92af-bcf6e6a88e3f b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.145 254904 DEBUG nova.compute.manager [req-0c8e4fee-451d-40d3-a044-941c3b4b54a7 req-bb82f355-1624-4062-8ed8-77937ce52aed 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Received event network-vif-unplugged-a908869b-6b7e-4872-bd60-5397511b102a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.145 254904 DEBUG oslo_concurrency.lockutils [req-0c8e4fee-451d-40d3-a044-941c3b4b54a7 req-bb82f355-1624-4062-8ed8-77937ce52aed 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.145 254904 DEBUG oslo_concurrency.lockutils [req-0c8e4fee-451d-40d3-a044-941c3b4b54a7 req-bb82f355-1624-4062-8ed8-77937ce52aed 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.145 254904 DEBUG oslo_concurrency.lockutils [req-0c8e4fee-451d-40d3-a044-941c3b4b54a7 req-bb82f355-1624-4062-8ed8-77937ce52aed 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.146 254904 DEBUG nova.compute.manager [req-0c8e4fee-451d-40d3-a044-941c3b4b54a7 req-bb82f355-1624-4062-8ed8-77937ce52aed 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] No waiting events found dispatching network-vif-unplugged-a908869b-6b7e-4872-bd60-5397511b102a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.146 254904 WARNING nova.compute.manager [req-0c8e4fee-451d-40d3-a044-941c3b4b54a7 req-bb82f355-1624-4062-8ed8-77937ce52aed 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Received unexpected event network-vif-unplugged-a908869b-6b7e-4872-bd60-5397511b102a for instance with vm_state deleted and task_state None.#033[00m
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.146 254904 DEBUG nova.compute.manager [req-0c8e4fee-451d-40d3-a044-941c3b4b54a7 req-bb82f355-1624-4062-8ed8-77937ce52aed 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Received event network-vif-plugged-a908869b-6b7e-4872-bd60-5397511b102a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.146 254904 DEBUG oslo_concurrency.lockutils [req-0c8e4fee-451d-40d3-a044-941c3b4b54a7 req-bb82f355-1624-4062-8ed8-77937ce52aed 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.146 254904 DEBUG oslo_concurrency.lockutils [req-0c8e4fee-451d-40d3-a044-941c3b4b54a7 req-bb82f355-1624-4062-8ed8-77937ce52aed 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.146 254904 DEBUG oslo_concurrency.lockutils [req-0c8e4fee-451d-40d3-a044-941c3b4b54a7 req-bb82f355-1624-4062-8ed8-77937ce52aed 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.146 254904 DEBUG nova.compute.manager [req-0c8e4fee-451d-40d3-a044-941c3b4b54a7 req-bb82f355-1624-4062-8ed8-77937ce52aed 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] No waiting events found dispatching network-vif-plugged-a908869b-6b7e-4872-bd60-5397511b102a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.147 254904 WARNING nova.compute.manager [req-0c8e4fee-451d-40d3-a044-941c3b4b54a7 req-bb82f355-1624-4062-8ed8-77937ce52aed 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Received unexpected event network-vif-plugged-a908869b-6b7e-4872-bd60-5397511b102a for instance with vm_state deleted and task_state None.#033[00m
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.147 254904 DEBUG nova.compute.manager [req-0c8e4fee-451d-40d3-a044-941c3b4b54a7 req-bb82f355-1624-4062-8ed8-77937ce52aed 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Received event network-vif-deleted-a908869b-6b7e-4872-bd60-5397511b102a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.185 254904 DEBUG oslo_concurrency.processutils [None req-f09d7d04-78fe-49d4-92af-bcf6e6a88e3f b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.250 254904 DEBUG nova.network.neutron [req-70584a53-5b8d-428c-9b89-1ab2a1e089f3 req-73743f05-9a0e-46ae-b7f3-7c94b9251de4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Updated VIF entry in instance network info cache for port a908869b-6b7e-4872-bd60-5397511b102a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.251 254904 DEBUG nova.network.neutron [req-70584a53-5b8d-428c-9b89-1ab2a1e089f3 req-73743f05-9a0e-46ae-b7f3-7c94b9251de4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Updating instance_info_cache with network_info: [{"id": "a908869b-6b7e-4872-bd60-5397511b102a", "address": "fa:16:3e:12:29:28", "network": {"id": "202d4c1b-b1c2-4564-b679-1d789b189a11", "bridge": "br-int", "label": "tempest-TestStampPattern-951670540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bda44a38b8b4f31a8b6e8f6f0548898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa908869b-6b", "ovs_interfaceid": "a908869b-6b7e-4872-bd60-5397511b102a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.276 254904 DEBUG oslo_concurrency.lockutils [req-70584a53-5b8d-428c-9b89-1ab2a1e089f3 req-73743f05-9a0e-46ae-b7f3-7c94b9251de4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-b5c1d606-86ed-453a-b2e0-ee74a8e24c46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.464 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:25:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2325924617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.710 254904 DEBUG oslo_concurrency.processutils [None req-f09d7d04-78fe-49d4-92af-bcf6e6a88e3f b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.719 254904 DEBUG nova.compute.provider_tree [None req-f09d7d04-78fe-49d4-92af-bcf6e6a88e3f b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.744 254904 DEBUG nova.scheduler.client.report [None req-f09d7d04-78fe-49d4-92af-bcf6e6a88e3f b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.783 254904 DEBUG oslo_concurrency.lockutils [None req-f09d7d04-78fe-49d4-92af-bcf6e6a88e3f b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.823 254904 INFO nova.scheduler.client.report [None req-f09d7d04-78fe-49d4-92af-bcf6e6a88e3f b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Deleted allocations for instance b5c1d606-86ed-453a-b2e0-ee74a8e24c46#033[00m
Dec  2 06:25:43 np0005542249 nova_compute[254900]: 2025-12-02 11:25:43.909 254904 DEBUG oslo_concurrency.lockutils [None req-f09d7d04-78fe-49d4-92af-bcf6e6a88e3f b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "b5c1d606-86ed-453a-b2e0-ee74a8e24c46" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:25:44 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1390: 321 pgs: 321 active+clean; 256 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 903 KiB/s rd, 351 KiB/s wr, 196 op/s
Dec  2 06:25:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Dec  2 06:25:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Dec  2 06:25:44 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Dec  2 06:25:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:25:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4220663081' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:25:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:25:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4220663081' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:25:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:25:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3315023563' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:25:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:25:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3315023563' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:25:45 np0005542249 nova_compute[254900]: 2025-12-02 11:25:45.587 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:25:45 np0005542249 nova_compute[254900]: 2025-12-02 11:25:45.588 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:25:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:25:46 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2487530199' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:25:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:25:46 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2487530199' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:25:46 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1392: 321 pgs: 321 active+clean; 250 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 600 KiB/s rd, 357 KiB/s wr, 249 op/s
Dec  2 06:25:46 np0005542249 nova_compute[254900]: 2025-12-02 11:25:46.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:25:46 np0005542249 nova_compute[254900]: 2025-12-02 11:25:46.419 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Dec  2 06:25:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Dec  2 06:25:46 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Dec  2 06:25:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:25:47 np0005542249 podman[282291]: 2025-12-02 11:25:47.021701716 +0000 UTC m=+0.102039779 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller)
Dec  2 06:25:47 np0005542249 nova_compute[254900]: 2025-12-02 11:25:47.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:25:47 np0005542249 nova_compute[254900]: 2025-12-02 11:25:47.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:25:47 np0005542249 nova_compute[254900]: 2025-12-02 11:25:47.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:25:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:25:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1632555058' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:25:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:25:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1632555058' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:25:47 np0005542249 nova_compute[254900]: 2025-12-02 11:25:47.407 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:25:47 np0005542249 nova_compute[254900]: 2025-12-02 11:25:47.407 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:25:47 np0005542249 nova_compute[254900]: 2025-12-02 11:25:47.407 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:25:47 np0005542249 nova_compute[254900]: 2025-12-02 11:25:47.407 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:25:47 np0005542249 nova_compute[254900]: 2025-12-02 11:25:47.408 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:25:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Dec  2 06:25:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Dec  2 06:25:47 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Dec  2 06:25:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:25:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/7116201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:25:47 np0005542249 nova_compute[254900]: 2025-12-02 11:25:47.875 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:25:47 np0005542249 nova_compute[254900]: 2025-12-02 11:25:47.941 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:25:47 np0005542249 nova_compute[254900]: 2025-12-02 11:25:47.941 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:25:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:25:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1697039254' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:25:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:25:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1697039254' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:25:48 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1395: 321 pgs: 14 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 303 active+clean; 230 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 620 KiB/s rd, 361 KiB/s wr, 347 op/s
Dec  2 06:25:48 np0005542249 nova_compute[254900]: 2025-12-02 11:25:48.108 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:25:48 np0005542249 nova_compute[254900]: 2025-12-02 11:25:48.109 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4265MB free_disk=59.94264221191406GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:25:48 np0005542249 nova_compute[254900]: 2025-12-02 11:25:48.109 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:25:48 np0005542249 nova_compute[254900]: 2025-12-02 11:25:48.110 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:25:48 np0005542249 nova_compute[254900]: 2025-12-02 11:25:48.170 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Instance 7d55326f-52eb-4f7f-a9cd-05282ca6ca20 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 06:25:48 np0005542249 nova_compute[254900]: 2025-12-02 11:25:48.171 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:25:48 np0005542249 nova_compute[254900]: 2025-12-02 11:25:48.171 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:25:48 np0005542249 nova_compute[254900]: 2025-12-02 11:25:48.204 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:25:48 np0005542249 nova_compute[254900]: 2025-12-02 11:25:48.466 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:25:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2080232507' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:25:48 np0005542249 nova_compute[254900]: 2025-12-02 11:25:48.642 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:25:48 np0005542249 nova_compute[254900]: 2025-12-02 11:25:48.648 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:25:48 np0005542249 nova_compute[254900]: 2025-12-02 11:25:48.663 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:25:48 np0005542249 nova_compute[254900]: 2025-12-02 11:25:48.687 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 06:25:48 np0005542249 nova_compute[254900]: 2025-12-02 11:25:48.687 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:25:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:25:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2647143497' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:25:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:25:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2647143497' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:25:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:25:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/683608547' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:25:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:25:49 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/683608547' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:25:49 np0005542249 nova_compute[254900]: 2025-12-02 11:25:49.519 254904 DEBUG nova.compute.manager [req-86bb0164-a544-47ff-9a2d-eeaebd612e5c req-89dd485f-0915-4aed-b116-950351b9e138 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Received event network-changed-e6b169f0-73a9-4a53-93e6-a0290b2f4f4a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:25:49 np0005542249 nova_compute[254900]: 2025-12-02 11:25:49.520 254904 DEBUG nova.compute.manager [req-86bb0164-a544-47ff-9a2d-eeaebd612e5c req-89dd485f-0915-4aed-b116-950351b9e138 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Refreshing instance network info cache due to event network-changed-e6b169f0-73a9-4a53-93e6-a0290b2f4f4a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:25:49 np0005542249 nova_compute[254900]: 2025-12-02 11:25:49.520 254904 DEBUG oslo_concurrency.lockutils [req-86bb0164-a544-47ff-9a2d-eeaebd612e5c req-89dd485f-0915-4aed-b116-950351b9e138 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-7d55326f-52eb-4f7f-a9cd-05282ca6ca20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:25:49 np0005542249 nova_compute[254900]: 2025-12-02 11:25:49.521 254904 DEBUG oslo_concurrency.lockutils [req-86bb0164-a544-47ff-9a2d-eeaebd612e5c req-89dd485f-0915-4aed-b116-950351b9e138 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-7d55326f-52eb-4f7f-a9cd-05282ca6ca20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:25:49 np0005542249 nova_compute[254900]: 2025-12-02 11:25:49.521 254904 DEBUG nova.network.neutron [req-86bb0164-a544-47ff-9a2d-eeaebd612e5c req-89dd485f-0915-4aed-b116-950351b9e138 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Refreshing network info cache for port e6b169f0-73a9-4a53-93e6-a0290b2f4f4a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:25:49 np0005542249 nova_compute[254900]: 2025-12-02 11:25:49.570 254904 DEBUG oslo_concurrency.lockutils [None req-16f4912a-5059-4842-adba-eed3aea836e3 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Acquiring lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:25:49 np0005542249 nova_compute[254900]: 2025-12-02 11:25:49.571 254904 DEBUG oslo_concurrency.lockutils [None req-16f4912a-5059-4842-adba-eed3aea836e3 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:25:49 np0005542249 nova_compute[254900]: 2025-12-02 11:25:49.572 254904 DEBUG oslo_concurrency.lockutils [None req-16f4912a-5059-4842-adba-eed3aea836e3 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Acquiring lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:25:49 np0005542249 nova_compute[254900]: 2025-12-02 11:25:49.572 254904 DEBUG oslo_concurrency.lockutils [None req-16f4912a-5059-4842-adba-eed3aea836e3 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:25:49 np0005542249 nova_compute[254900]: 2025-12-02 11:25:49.573 254904 DEBUG oslo_concurrency.lockutils [None req-16f4912a-5059-4842-adba-eed3aea836e3 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:25:49 np0005542249 nova_compute[254900]: 2025-12-02 11:25:49.575 254904 INFO nova.compute.manager [None req-16f4912a-5059-4842-adba-eed3aea836e3 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Terminating instance#033[00m
Dec  2 06:25:49 np0005542249 nova_compute[254900]: 2025-12-02 11:25:49.576 254904 DEBUG nova.compute.manager [None req-16f4912a-5059-4842-adba-eed3aea836e3 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:25:49 np0005542249 kernel: tape6b169f0-73 (unregistering): left promiscuous mode
Dec  2 06:25:49 np0005542249 NetworkManager[48987]: <info>  [1764674749.6346] device (tape6b169f0-73): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:25:49 np0005542249 ovn_controller[153849]: 2025-12-02T11:25:49Z|00158|binding|INFO|Releasing lport e6b169f0-73a9-4a53-93e6-a0290b2f4f4a from this chassis (sb_readonly=0)
Dec  2 06:25:49 np0005542249 ovn_controller[153849]: 2025-12-02T11:25:49Z|00159|binding|INFO|Setting lport e6b169f0-73a9-4a53-93e6-a0290b2f4f4a down in Southbound
Dec  2 06:25:49 np0005542249 ovn_controller[153849]: 2025-12-02T11:25:49Z|00160|binding|INFO|Removing iface tape6b169f0-73 ovn-installed in OVS
Dec  2 06:25:49 np0005542249 nova_compute[254900]: 2025-12-02 11:25:49.697 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:49 np0005542249 nova_compute[254900]: 2025-12-02 11:25:49.699 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:49 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:49.704 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:ef:76 10.100.0.9'], port_security=['fa:16:3e:02:ef:76 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7d55326f-52eb-4f7f-a9cd-05282ca6ca20', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-202d4c1b-b1c2-4564-b679-1d789b189a11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bda44a38b8b4f31a8b6e8f6f0548898', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5dfe3f56-bbf4-4aa1-aeeb-3cc512936610', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64d73c60-6b3f-4965-8ef1-0cdac8312377, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=e6b169f0-73a9-4a53-93e6-a0290b2f4f4a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:25:49 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:49.706 163757 INFO neutron.agent.ovn.metadata.agent [-] Port e6b169f0-73a9-4a53-93e6-a0290b2f4f4a in datapath 202d4c1b-b1c2-4564-b679-1d789b189a11 unbound from our chassis#033[00m
Dec  2 06:25:49 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:49.708 163757 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 202d4c1b-b1c2-4564-b679-1d789b189a11, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 06:25:49 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:49.710 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[c2eb048f-3a9d-4627-acb4-723174ef1dd4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:25:49 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:49.711 163757 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11 namespace which is not needed anymore#033[00m
Dec  2 06:25:49 np0005542249 nova_compute[254900]: 2025-12-02 11:25:49.730 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:49 np0005542249 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Dec  2 06:25:49 np0005542249 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 18.745s CPU time.
Dec  2 06:25:49 np0005542249 systemd-machined[216222]: Machine qemu-13-instance-0000000d terminated.
Dec  2 06:25:49 np0005542249 nova_compute[254900]: 2025-12-02 11:25:49.816 254904 INFO nova.virt.libvirt.driver [-] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Instance destroyed successfully.#033[00m
Dec  2 06:25:49 np0005542249 nova_compute[254900]: 2025-12-02 11:25:49.817 254904 DEBUG nova.objects.instance [None req-16f4912a-5059-4842-adba-eed3aea836e3 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lazy-loading 'resources' on Instance uuid 7d55326f-52eb-4f7f-a9cd-05282ca6ca20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:25:49 np0005542249 nova_compute[254900]: 2025-12-02 11:25:49.844 254904 DEBUG nova.virt.libvirt.vif [None req-16f4912a-5059-4842-adba-eed3aea836e3 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:23:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-1044177341',display_name='tempest-TestStampPattern-server-1044177341',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1044177341',id=13,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK61TL5woGKETyNF39DpzDOQY/3175WIhzETkz8a48X3DCBshG53o26Fmw3NbtfN3xWBBqbdYLP1coPVuKFyoXP+AsOB02VoMViHoPOBDDnBHEpudY8V6Kx5OUC4H683Ng==',key_name='tempest-TestStampPattern-1085425870',keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:24:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8bda44a38b8b4f31a8b6e8f6f0548898',ramdisk_id='',reservation_id='r-9lo6jxsl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestStampPattern-2114823383',owner_user_name='tempest-TestStampPattern-2114823383-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:24:45Z,user_data=None,user_id='b3ecaaf4f0044a58b99879bf1c55b18e',uuid=7d55326f-52eb-4f7f-a9cd-05282ca6ca20,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "address": "fa:16:3e:02:ef:76", "network": {"id": "202d4c1b-b1c2-4564-b679-1d789b189a11", "bridge": "br-int", "label": "tempest-TestStampPattern-951670540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bda44a38b8b4f31a8b6e8f6f0548898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6b169f0-73", "ovs_interfaceid": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:25:49 np0005542249 nova_compute[254900]: 2025-12-02 11:25:49.844 254904 DEBUG nova.network.os_vif_util [None req-16f4912a-5059-4842-adba-eed3aea836e3 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Converting VIF {"id": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "address": "fa:16:3e:02:ef:76", "network": {"id": "202d4c1b-b1c2-4564-b679-1d789b189a11", "bridge": "br-int", "label": "tempest-TestStampPattern-951670540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bda44a38b8b4f31a8b6e8f6f0548898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6b169f0-73", "ovs_interfaceid": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:25:49 np0005542249 nova_compute[254900]: 2025-12-02 11:25:49.845 254904 DEBUG nova.network.os_vif_util [None req-16f4912a-5059-4842-adba-eed3aea836e3 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:02:ef:76,bridge_name='br-int',has_traffic_filtering=True,id=e6b169f0-73a9-4a53-93e6-a0290b2f4f4a,network=Network(202d4c1b-b1c2-4564-b679-1d789b189a11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6b169f0-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:25:49 np0005542249 nova_compute[254900]: 2025-12-02 11:25:49.846 254904 DEBUG os_vif [None req-16f4912a-5059-4842-adba-eed3aea836e3 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:ef:76,bridge_name='br-int',has_traffic_filtering=True,id=e6b169f0-73a9-4a53-93e6-a0290b2f4f4a,network=Network(202d4c1b-b1c2-4564-b679-1d789b189a11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6b169f0-73') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:25:49 np0005542249 nova_compute[254900]: 2025-12-02 11:25:49.847 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:49 np0005542249 nova_compute[254900]: 2025-12-02 11:25:49.848 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape6b169f0-73, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:25:49 np0005542249 nova_compute[254900]: 2025-12-02 11:25:49.849 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:49 np0005542249 nova_compute[254900]: 2025-12-02 11:25:49.852 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:25:49 np0005542249 nova_compute[254900]: 2025-12-02 11:25:49.855 254904 INFO os_vif [None req-16f4912a-5059-4842-adba-eed3aea836e3 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:ef:76,bridge_name='br-int',has_traffic_filtering=True,id=e6b169f0-73a9-4a53-93e6-a0290b2f4f4a,network=Network(202d4c1b-b1c2-4564-b679-1d789b189a11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6b169f0-73')#033[00m
Dec  2 06:25:49 np0005542249 neutron-haproxy-ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11[279081]: [NOTICE]   (279085) : haproxy version is 2.8.14-c23fe91
Dec  2 06:25:49 np0005542249 neutron-haproxy-ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11[279081]: [NOTICE]   (279085) : path to executable is /usr/sbin/haproxy
Dec  2 06:25:49 np0005542249 neutron-haproxy-ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11[279081]: [WARNING]  (279085) : Exiting Master process...
Dec  2 06:25:49 np0005542249 neutron-haproxy-ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11[279081]: [WARNING]  (279085) : Exiting Master process...
Dec  2 06:25:49 np0005542249 neutron-haproxy-ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11[279081]: [ALERT]    (279085) : Current worker (279087) exited with code 143 (Terminated)
Dec  2 06:25:49 np0005542249 neutron-haproxy-ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11[279081]: [WARNING]  (279085) : All workers exited. Exiting... (0)
Dec  2 06:25:49 np0005542249 systemd[1]: libpod-edd9211b774a10b8a03207a8a2ae6c558d7143a33ba55e8b3ea7273d7fba6a4a.scope: Deactivated successfully.
Dec  2 06:25:49 np0005542249 podman[282393]: 2025-12-02 11:25:49.919580671 +0000 UTC m=+0.072832309 container died edd9211b774a10b8a03207a8a2ae6c558d7143a33ba55e8b3ea7273d7fba6a4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  2 06:25:49 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-edd9211b774a10b8a03207a8a2ae6c558d7143a33ba55e8b3ea7273d7fba6a4a-userdata-shm.mount: Deactivated successfully.
Dec  2 06:25:49 np0005542249 systemd[1]: var-lib-containers-storage-overlay-b7456695519acebaf7b25892e7aab2f1867a04bec0b6409570bebc013aee17da-merged.mount: Deactivated successfully.
Dec  2 06:25:49 np0005542249 podman[282393]: 2025-12-02 11:25:49.960324672 +0000 UTC m=+0.113576300 container cleanup edd9211b774a10b8a03207a8a2ae6c558d7143a33ba55e8b3ea7273d7fba6a4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  2 06:25:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:25:49 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3987007652' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:25:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:25:49 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3987007652' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:25:49 np0005542249 systemd[1]: libpod-conmon-edd9211b774a10b8a03207a8a2ae6c558d7143a33ba55e8b3ea7273d7fba6a4a.scope: Deactivated successfully.
Dec  2 06:25:50 np0005542249 podman[282441]: 2025-12-02 11:25:50.053221812 +0000 UTC m=+0.059821398 container remove edd9211b774a10b8a03207a8a2ae6c558d7143a33ba55e8b3ea7273d7fba6a4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  2 06:25:50 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:50.058 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[d068ecff-895b-4e44-83ae-590e97e283da]: (4, ('Tue Dec  2 11:25:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11 (edd9211b774a10b8a03207a8a2ae6c558d7143a33ba55e8b3ea7273d7fba6a4a)\nedd9211b774a10b8a03207a8a2ae6c558d7143a33ba55e8b3ea7273d7fba6a4a\nTue Dec  2 11:25:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11 (edd9211b774a10b8a03207a8a2ae6c558d7143a33ba55e8b3ea7273d7fba6a4a)\nedd9211b774a10b8a03207a8a2ae6c558d7143a33ba55e8b3ea7273d7fba6a4a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:25:50 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:50.060 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[9fb5c0f2-fa1a-4797-bc93-574381395f26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:25:50 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:50.061 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap202d4c1b-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:25:50 np0005542249 nova_compute[254900]: 2025-12-02 11:25:50.064 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:50 np0005542249 kernel: tap202d4c1b-b0: left promiscuous mode
Dec  2 06:25:50 np0005542249 nova_compute[254900]: 2025-12-02 11:25:50.086 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:50 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:50.088 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[be775e9d-f5ae-4062-8268-ee3f96cb30e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:25:50 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1396: 321 pgs: 14 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 303 active+clean; 208 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 18 KiB/s wr, 408 op/s
Dec  2 06:25:50 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:50.101 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[0e7b97a9-d913-4d21-bdc0-c60ef70bbb04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:25:50 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:50.102 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[ebe4787f-d2b0-482a-8537-95c574524768]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:25:50 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:50.122 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[2498c1ea-5054-4e7c-bc2a-ee55be055a2b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 484020, 'reachable_time': 42601, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282460, 'error': None, 'target': 'ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:25:50 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:50.125 164036 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-202d4c1b-b1c2-4564-b679-1d789b189a11 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 06:25:50 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:50.125 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[90bbe36c-5590-4c52-b6e2-29b9ebbc41b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:25:50 np0005542249 systemd[1]: run-netns-ovnmeta\x2d202d4c1b\x2db1c2\x2d4564\x2db679\x2d1d789b189a11.mount: Deactivated successfully.
Dec  2 06:25:50 np0005542249 podman[282456]: 2025-12-02 11:25:50.168200778 +0000 UTC m=+0.058765609 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  2 06:25:50 np0005542249 nova_compute[254900]: 2025-12-02 11:25:50.272 254904 INFO nova.virt.libvirt.driver [None req-16f4912a-5059-4842-adba-eed3aea836e3 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Deleting instance files /var/lib/nova/instances/7d55326f-52eb-4f7f-a9cd-05282ca6ca20_del#033[00m
Dec  2 06:25:50 np0005542249 nova_compute[254900]: 2025-12-02 11:25:50.273 254904 INFO nova.virt.libvirt.driver [None req-16f4912a-5059-4842-adba-eed3aea836e3 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Deletion of /var/lib/nova/instances/7d55326f-52eb-4f7f-a9cd-05282ca6ca20_del complete#033[00m
Dec  2 06:25:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:25:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2958651077' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:25:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:25:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2958651077' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:25:50 np0005542249 nova_compute[254900]: 2025-12-02 11:25:50.335 254904 INFO nova.compute.manager [None req-16f4912a-5059-4842-adba-eed3aea836e3 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:25:50 np0005542249 nova_compute[254900]: 2025-12-02 11:25:50.336 254904 DEBUG oslo.service.loopingcall [None req-16f4912a-5059-4842-adba-eed3aea836e3 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:25:50 np0005542249 nova_compute[254900]: 2025-12-02 11:25:50.336 254904 DEBUG nova.compute.manager [-] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:25:50 np0005542249 nova_compute[254900]: 2025-12-02 11:25:50.336 254904 DEBUG nova.network.neutron [-] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:25:50 np0005542249 nova_compute[254900]: 2025-12-02 11:25:50.638 254904 DEBUG nova.network.neutron [req-86bb0164-a544-47ff-9a2d-eeaebd612e5c req-89dd485f-0915-4aed-b116-950351b9e138 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Updated VIF entry in instance network info cache for port e6b169f0-73a9-4a53-93e6-a0290b2f4f4a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:25:50 np0005542249 nova_compute[254900]: 2025-12-02 11:25:50.638 254904 DEBUG nova.network.neutron [req-86bb0164-a544-47ff-9a2d-eeaebd612e5c req-89dd485f-0915-4aed-b116-950351b9e138 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Updating instance_info_cache with network_info: [{"id": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "address": "fa:16:3e:02:ef:76", "network": {"id": "202d4c1b-b1c2-4564-b679-1d789b189a11", "bridge": "br-int", "label": "tempest-TestStampPattern-951670540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bda44a38b8b4f31a8b6e8f6f0548898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6b169f0-73", "ovs_interfaceid": "e6b169f0-73a9-4a53-93e6-a0290b2f4f4a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:25:50 np0005542249 nova_compute[254900]: 2025-12-02 11:25:50.672 254904 DEBUG oslo_concurrency.lockutils [req-86bb0164-a544-47ff-9a2d-eeaebd612e5c req-89dd485f-0915-4aed-b116-950351b9e138 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-7d55326f-52eb-4f7f-a9cd-05282ca6ca20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:25:50 np0005542249 nova_compute[254900]: 2025-12-02 11:25:50.845 254904 DEBUG nova.network.neutron [-] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:25:50 np0005542249 nova_compute[254900]: 2025-12-02 11:25:50.865 254904 INFO nova.compute.manager [-] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Took 0.53 seconds to deallocate network for instance.#033[00m
Dec  2 06:25:50 np0005542249 nova_compute[254900]: 2025-12-02 11:25:50.909 254904 DEBUG oslo_concurrency.lockutils [None req-16f4912a-5059-4842-adba-eed3aea836e3 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:25:50 np0005542249 nova_compute[254900]: 2025-12-02 11:25:50.909 254904 DEBUG oslo_concurrency.lockutils [None req-16f4912a-5059-4842-adba-eed3aea836e3 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:25:50 np0005542249 nova_compute[254900]: 2025-12-02 11:25:50.960 254904 DEBUG oslo_concurrency.processutils [None req-16f4912a-5059-4842-adba-eed3aea836e3 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:25:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:25:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1650106976' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:25:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:25:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1650106976' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:25:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:25:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1307627081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:25:51 np0005542249 nova_compute[254900]: 2025-12-02 11:25:51.413 254904 DEBUG oslo_concurrency.processutils [None req-16f4912a-5059-4842-adba-eed3aea836e3 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:25:51 np0005542249 nova_compute[254900]: 2025-12-02 11:25:51.421 254904 DEBUG nova.compute.provider_tree [None req-16f4912a-5059-4842-adba-eed3aea836e3 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:25:51 np0005542249 nova_compute[254900]: 2025-12-02 11:25:51.439 254904 DEBUG nova.scheduler.client.report [None req-16f4912a-5059-4842-adba-eed3aea836e3 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:25:51 np0005542249 nova_compute[254900]: 2025-12-02 11:25:51.461 254904 DEBUG oslo_concurrency.lockutils [None req-16f4912a-5059-4842-adba-eed3aea836e3 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:25:51 np0005542249 nova_compute[254900]: 2025-12-02 11:25:51.502 254904 INFO nova.scheduler.client.report [None req-16f4912a-5059-4842-adba-eed3aea836e3 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Deleted allocations for instance 7d55326f-52eb-4f7f-a9cd-05282ca6ca20#033[00m
Dec  2 06:25:51 np0005542249 nova_compute[254900]: 2025-12-02 11:25:51.570 254904 DEBUG oslo_concurrency.lockutils [None req-16f4912a-5059-4842-adba-eed3aea836e3 b3ecaaf4f0044a58b99879bf1c55b18e 8bda44a38b8b4f31a8b6e8f6f0548898 - - default default] Lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.999s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:25:51 np0005542249 nova_compute[254900]: 2025-12-02 11:25:51.596 254904 DEBUG nova.compute.manager [req-c5e66f46-d26a-4aac-9bbf-dfb4f9c0cd89 req-9d0c8918-3189-4379-a7b7-069339a2bced 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Received event network-vif-unplugged-e6b169f0-73a9-4a53-93e6-a0290b2f4f4a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:25:51 np0005542249 nova_compute[254900]: 2025-12-02 11:25:51.597 254904 DEBUG oslo_concurrency.lockutils [req-c5e66f46-d26a-4aac-9bbf-dfb4f9c0cd89 req-9d0c8918-3189-4379-a7b7-069339a2bced 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:25:51 np0005542249 nova_compute[254900]: 2025-12-02 11:25:51.597 254904 DEBUG oslo_concurrency.lockutils [req-c5e66f46-d26a-4aac-9bbf-dfb4f9c0cd89 req-9d0c8918-3189-4379-a7b7-069339a2bced 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:25:51 np0005542249 nova_compute[254900]: 2025-12-02 11:25:51.597 254904 DEBUG oslo_concurrency.lockutils [req-c5e66f46-d26a-4aac-9bbf-dfb4f9c0cd89 req-9d0c8918-3189-4379-a7b7-069339a2bced 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:25:51 np0005542249 nova_compute[254900]: 2025-12-02 11:25:51.597 254904 DEBUG nova.compute.manager [req-c5e66f46-d26a-4aac-9bbf-dfb4f9c0cd89 req-9d0c8918-3189-4379-a7b7-069339a2bced 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] No waiting events found dispatching network-vif-unplugged-e6b169f0-73a9-4a53-93e6-a0290b2f4f4a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:25:51 np0005542249 nova_compute[254900]: 2025-12-02 11:25:51.598 254904 WARNING nova.compute.manager [req-c5e66f46-d26a-4aac-9bbf-dfb4f9c0cd89 req-9d0c8918-3189-4379-a7b7-069339a2bced 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Received unexpected event network-vif-unplugged-e6b169f0-73a9-4a53-93e6-a0290b2f4f4a for instance with vm_state deleted and task_state None.#033[00m
Dec  2 06:25:51 np0005542249 nova_compute[254900]: 2025-12-02 11:25:51.598 254904 DEBUG nova.compute.manager [req-c5e66f46-d26a-4aac-9bbf-dfb4f9c0cd89 req-9d0c8918-3189-4379-a7b7-069339a2bced 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Received event network-vif-plugged-e6b169f0-73a9-4a53-93e6-a0290b2f4f4a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:25:51 np0005542249 nova_compute[254900]: 2025-12-02 11:25:51.598 254904 DEBUG oslo_concurrency.lockutils [req-c5e66f46-d26a-4aac-9bbf-dfb4f9c0cd89 req-9d0c8918-3189-4379-a7b7-069339a2bced 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:25:51 np0005542249 nova_compute[254900]: 2025-12-02 11:25:51.599 254904 DEBUG oslo_concurrency.lockutils [req-c5e66f46-d26a-4aac-9bbf-dfb4f9c0cd89 req-9d0c8918-3189-4379-a7b7-069339a2bced 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:25:51 np0005542249 nova_compute[254900]: 2025-12-02 11:25:51.599 254904 DEBUG oslo_concurrency.lockutils [req-c5e66f46-d26a-4aac-9bbf-dfb4f9c0cd89 req-9d0c8918-3189-4379-a7b7-069339a2bced 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "7d55326f-52eb-4f7f-a9cd-05282ca6ca20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:25:51 np0005542249 nova_compute[254900]: 2025-12-02 11:25:51.599 254904 DEBUG nova.compute.manager [req-c5e66f46-d26a-4aac-9bbf-dfb4f9c0cd89 req-9d0c8918-3189-4379-a7b7-069339a2bced 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] No waiting events found dispatching network-vif-plugged-e6b169f0-73a9-4a53-93e6-a0290b2f4f4a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:25:51 np0005542249 nova_compute[254900]: 2025-12-02 11:25:51.599 254904 WARNING nova.compute.manager [req-c5e66f46-d26a-4aac-9bbf-dfb4f9c0cd89 req-9d0c8918-3189-4379-a7b7-069339a2bced 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Received unexpected event network-vif-plugged-e6b169f0-73a9-4a53-93e6-a0290b2f4f4a for instance with vm_state deleted and task_state None.#033[00m
Dec  2 06:25:51 np0005542249 nova_compute[254900]: 2025-12-02 11:25:51.600 254904 DEBUG nova.compute.manager [req-c5e66f46-d26a-4aac-9bbf-dfb4f9c0cd89 req-9d0c8918-3189-4379-a7b7-069339a2bced 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Received event network-vif-deleted-e6b169f0-73a9-4a53-93e6-a0290b2f4f4a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:25:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Dec  2 06:25:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Dec  2 06:25:51 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Dec  2 06:25:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:25:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Dec  2 06:25:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Dec  2 06:25:51 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Dec  2 06:25:52 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1399: 321 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 309 active+clean; 144 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 299 KiB/s rd, 15 KiB/s wr, 405 op/s
Dec  2 06:25:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Dec  2 06:25:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Dec  2 06:25:53 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Dec  2 06:25:53 np0005542249 nova_compute[254900]: 2025-12-02 11:25:53.513 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:25:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/42177423' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:25:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:25:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/42177423' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:25:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:25:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1978556739' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:25:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:25:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1978556739' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:25:54 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1401: 321 pgs: 321 active+clean; 103 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 240 KiB/s rd, 9.7 KiB/s wr, 321 op/s
Dec  2 06:25:54 np0005542249 nova_compute[254900]: 2025-12-02 11:25:54.863 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e317 do_prune osdmap full prune enabled
Dec  2 06:25:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e318 e318: 3 total, 3 up, 3 in
Dec  2 06:25:55 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e318: 3 total, 3 up, 3 in
Dec  2 06:25:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:55.978 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:23:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:25:d0:a1:75:ed'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:25:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:25:55.980 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 06:25:56 np0005542249 nova_compute[254900]: 2025-12-02 11:25:56.022 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:56 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1403: 321 pgs: 321 active+clean; 88 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 247 KiB/s rd, 13 KiB/s wr, 345 op/s
Dec  2 06:25:56 np0005542249 nova_compute[254900]: 2025-12-02 11:25:56.101 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:56 np0005542249 nova_compute[254900]: 2025-12-02 11:25:56.267 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:25:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:25:56 np0005542249 nova_compute[254900]: 2025-12-02 11:25:56.372 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764674741.3716412, b5c1d606-86ed-453a-b2e0-ee74a8e24c46 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:25:56 np0005542249 nova_compute[254900]: 2025-12-02 11:25:56.373 254904 INFO nova.compute.manager [-] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] VM Stopped (Lifecycle Event)#033[00m
Dec  2 06:25:56 np0005542249 nova_compute[254900]: 2025-12-02 11:25:56.393 254904 DEBUG nova.compute.manager [None req-bac89ea1-e924-4ea1-af0f-1afa17df2e38 - - - - - -] [instance: b5c1d606-86ed-453a-b2e0-ee74a8e24c46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:25:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:25:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:25:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:25:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:25:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:25:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e318 do_prune osdmap full prune enabled
Dec  2 06:25:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e319 e319: 3 total, 3 up, 3 in
Dec  2 06:25:56 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e319: 3 total, 3 up, 3 in
Dec  2 06:25:58 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1405: 321 pgs: 321 active+clean; 88 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 189 KiB/s rd, 9.8 KiB/s wr, 257 op/s
Dec  2 06:25:58 np0005542249 nova_compute[254900]: 2025-12-02 11:25:58.516 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:25:59 np0005542249 nova_compute[254900]: 2025-12-02 11:25:59.898 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:00 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1406: 321 pgs: 321 active+clean; 88 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 6.3 KiB/s wr, 151 op/s
Dec  2 06:26:00 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:00.983 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:26:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e319 do_prune osdmap full prune enabled
Dec  2 06:26:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e320 e320: 3 total, 3 up, 3 in
Dec  2 06:26:01 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e320: 3 total, 3 up, 3 in
Dec  2 06:26:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:26:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e320 do_prune osdmap full prune enabled
Dec  2 06:26:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e321 e321: 3 total, 3 up, 3 in
Dec  2 06:26:02 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e321: 3 total, 3 up, 3 in
Dec  2 06:26:02 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1409: 321 pgs: 321 active+clean; 88 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 8.7 KiB/s wr, 188 op/s
Dec  2 06:26:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:26:02 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2500679274' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:26:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:26:02 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2500679274' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:26:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:26:02 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2545446362' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:26:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:26:02 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2545446362' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:26:03 np0005542249 nova_compute[254900]: 2025-12-02 11:26:03.518 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:26:03 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1106466738' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:26:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:26:03 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1106466738' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:26:04 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1410: 321 pgs: 321 active+clean; 88 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 7.7 KiB/s wr, 149 op/s
Dec  2 06:26:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e321 do_prune osdmap full prune enabled
Dec  2 06:26:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e322 e322: 3 total, 3 up, 3 in
Dec  2 06:26:04 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e322: 3 total, 3 up, 3 in
Dec  2 06:26:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:26:04 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2724603014' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:26:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:26:04 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2724603014' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:26:04 np0005542249 nova_compute[254900]: 2025-12-02 11:26:04.813 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764674749.8121817, 7d55326f-52eb-4f7f-a9cd-05282ca6ca20 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:26:04 np0005542249 nova_compute[254900]: 2025-12-02 11:26:04.813 254904 INFO nova.compute.manager [-] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] VM Stopped (Lifecycle Event)#033[00m
Dec  2 06:26:04 np0005542249 nova_compute[254900]: 2025-12-02 11:26:04.833 254904 DEBUG nova.compute.manager [None req-18e4cbb1-6b16-4daa-a78f-3d11b85605da - - - - - -] [instance: 7d55326f-52eb-4f7f-a9cd-05282ca6ca20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:26:04 np0005542249 nova_compute[254900]: 2025-12-02 11:26:04.900 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:26:05 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2567437590' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:26:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:26:05 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2567437590' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:26:06 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1412: 321 pgs: 321 active+clean; 88 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 6.8 KiB/s wr, 162 op/s
Dec  2 06:26:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:26:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4136695995' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:26:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:26:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4136695995' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:26:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:26:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4280519085' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:26:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:26:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4280519085' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:26:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:26:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e322 do_prune osdmap full prune enabled
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e323 e323: 3 total, 3 up, 3 in
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e323: 3 total, 3 up, 3 in
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:26:07.016464) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674767016521, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2674, "num_deletes": 532, "total_data_size": 3401234, "memory_usage": 3470736, "flush_reason": "Manual Compaction"}
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674767040269, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3342026, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26352, "largest_seqno": 29025, "table_properties": {"data_size": 3330236, "index_size": 7196, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3525, "raw_key_size": 29554, "raw_average_key_size": 21, "raw_value_size": 3304245, "raw_average_value_size": 2387, "num_data_blocks": 311, "num_entries": 1384, "num_filter_entries": 1384, "num_deletions": 532, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764674601, "oldest_key_time": 1764674601, "file_creation_time": 1764674767, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 23847 microseconds, and 13497 cpu microseconds.
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:26:07.040321) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3342026 bytes OK
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:26:07.040343) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:26:07.042401) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:26:07.042437) EVENT_LOG_v1 {"time_micros": 1764674767042413, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:26:07.042461) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3388620, prev total WAL file size 3388620, number of live WAL files 2.
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:26:07.043649) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3263KB)], [59(10MB)]
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674767043735, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 14104458, "oldest_snapshot_seqno": -1}
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5737 keys, 9120270 bytes, temperature: kUnknown
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674767110412, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9120270, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9078382, "index_size": 26418, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14405, "raw_key_size": 143275, "raw_average_key_size": 24, "raw_value_size": 8971614, "raw_average_value_size": 1563, "num_data_blocks": 1071, "num_entries": 5737, "num_filter_entries": 5737, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672515, "oldest_key_time": 0, "file_creation_time": 1764674767, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:26:07.110855) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9120270 bytes
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:26:07.112383) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 211.1 rd, 136.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.3 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(6.9) write-amplify(2.7) OK, records in: 6793, records dropped: 1056 output_compression: NoCompression
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:26:07.112422) EVENT_LOG_v1 {"time_micros": 1764674767112404, "job": 32, "event": "compaction_finished", "compaction_time_micros": 66819, "compaction_time_cpu_micros": 42049, "output_level": 6, "num_output_files": 1, "total_output_size": 9120270, "num_input_records": 6793, "num_output_records": 5737, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674767113705, "job": 32, "event": "table_file_deletion", "file_number": 61}
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674767117163, "job": 32, "event": "table_file_deletion", "file_number": 59}
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:26:07.043476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:26:07.117250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:26:07.117258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:26:07.117263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:26:07.117266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:26:07.117270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2014055590' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:26:07 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2014055590' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:26:08 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1414: 321 pgs: 321 active+clean; 88 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 135 KiB/s rd, 6.4 KiB/s wr, 185 op/s
Dec  2 06:26:08 np0005542249 nova_compute[254900]: 2025-12-02 11:26:08.520 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:09 np0005542249 podman[282505]: 2025-12-02 11:26:09.036914769 +0000 UTC m=+0.105578883 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2)
Dec  2 06:26:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:26:09 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1736596170' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:26:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:26:09 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1736596170' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:26:09 np0005542249 nova_compute[254900]: 2025-12-02 11:26:09.903 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e323 do_prune osdmap full prune enabled
Dec  2 06:26:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e324 e324: 3 total, 3 up, 3 in
Dec  2 06:26:10 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1416: 321 pgs: 321 active+clean; 88 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 170 KiB/s rd, 5.0 KiB/s wr, 229 op/s
Dec  2 06:26:10 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e324: 3 total, 3 up, 3 in
Dec  2 06:26:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:26:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e324 do_prune osdmap full prune enabled
Dec  2 06:26:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e325 e325: 3 total, 3 up, 3 in
Dec  2 06:26:12 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e325: 3 total, 3 up, 3 in
Dec  2 06:26:12 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1418: 321 pgs: 321 active+clean; 88 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 171 KiB/s rd, 5.3 KiB/s wr, 229 op/s
Dec  2 06:26:13 np0005542249 nova_compute[254900]: 2025-12-02 11:26:13.523 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:14 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1419: 321 pgs: 321 active+clean; 88 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 5.2 KiB/s wr, 151 op/s
Dec  2 06:26:14 np0005542249 nova_compute[254900]: 2025-12-02 11:26:14.906 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e325 do_prune osdmap full prune enabled
Dec  2 06:26:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e326 e326: 3 total, 3 up, 3 in
Dec  2 06:26:15 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e326: 3 total, 3 up, 3 in
Dec  2 06:26:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:26:15 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1668204227' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:26:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:26:15 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1668204227' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:26:16 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1421: 321 pgs: 321 active+clean; 88 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 4.0 KiB/s wr, 68 op/s
Dec  2 06:26:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:26:18 np0005542249 podman[282525]: 2025-12-02 11:26:18.046711508 +0000 UTC m=+0.119033528 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  2 06:26:18 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1422: 321 pgs: 321 active+clean; 88 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 6.4 KiB/s wr, 128 op/s
Dec  2 06:26:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e326 do_prune osdmap full prune enabled
Dec  2 06:26:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e327 e327: 3 total, 3 up, 3 in
Dec  2 06:26:18 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e327: 3 total, 3 up, 3 in
Dec  2 06:26:18 np0005542249 nova_compute[254900]: 2025-12-02 11:26:18.526 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:19.841 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:26:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:19.842 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:26:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:19.842 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:26:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:26:19 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4243056229' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:26:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:26:19 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4243056229' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:26:19 np0005542249 nova_compute[254900]: 2025-12-02 11:26:19.908 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:26:20 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2649857683' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:26:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:26:20 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2649857683' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:26:20 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1424: 321 pgs: 321 active+clean; 88 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 5.2 KiB/s wr, 95 op/s
Dec  2 06:26:21 np0005542249 podman[282552]: 2025-12-02 11:26:21.027492602 +0000 UTC m=+0.097085674 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:26:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:26:21 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3815174040' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:26:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:26:21 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3815174040' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:26:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e327 do_prune osdmap full prune enabled
Dec  2 06:26:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e328 e328: 3 total, 3 up, 3 in
Dec  2 06:26:21 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e328: 3 total, 3 up, 3 in
Dec  2 06:26:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:26:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e328 do_prune osdmap full prune enabled
Dec  2 06:26:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e329 e329: 3 total, 3 up, 3 in
Dec  2 06:26:22 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e329: 3 total, 3 up, 3 in
Dec  2 06:26:22 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1427: 321 pgs: 321 active+clean; 88 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 147 KiB/s rd, 7.8 KiB/s wr, 194 op/s
Dec  2 06:26:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:26:22 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3787357537' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:26:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:26:22 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3787357537' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:26:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e329 do_prune osdmap full prune enabled
Dec  2 06:26:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e330 e330: 3 total, 3 up, 3 in
Dec  2 06:26:23 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e330: 3 total, 3 up, 3 in
Dec  2 06:26:23 np0005542249 nova_compute[254900]: 2025-12-02 11:26:23.530 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:24 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1429: 321 pgs: 321 active+clean; 88 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 168 KiB/s rd, 10 KiB/s wr, 224 op/s
Dec  2 06:26:24 np0005542249 nova_compute[254900]: 2025-12-02 11:26:24.910 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:26 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1430: 321 pgs: 321 active+clean; 88 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 188 KiB/s rd, 54 KiB/s wr, 250 op/s
Dec  2 06:26:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:26:26
Dec  2 06:26:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:26:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:26:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['images', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', '.mgr', 'vms', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta']
Dec  2 06:26:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:26:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:26:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:26:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:26:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:26:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:26:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:26:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:26:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:26:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:26:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:26:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:26:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:26:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:26:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:26:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:26:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:26:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:26:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e330 do_prune osdmap full prune enabled
Dec  2 06:26:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e331 e331: 3 total, 3 up, 3 in
Dec  2 06:26:27 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e331: 3 total, 3 up, 3 in
Dec  2 06:26:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:26:27 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1455604260' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:26:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:26:27 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1455604260' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:26:28 np0005542249 nova_compute[254900]: 2025-12-02 11:26:28.052 254904 DEBUG oslo_concurrency.lockutils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:26:28 np0005542249 nova_compute[254900]: 2025-12-02 11:26:28.052 254904 DEBUG oslo_concurrency.lockutils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:26:28 np0005542249 nova_compute[254900]: 2025-12-02 11:26:28.066 254904 DEBUG nova.compute.manager [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 06:26:28 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1432: 321 pgs: 321 active+clean; 88 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 153 KiB/s rd, 54 KiB/s wr, 207 op/s
Dec  2 06:26:28 np0005542249 nova_compute[254900]: 2025-12-02 11:26:28.144 254904 DEBUG oslo_concurrency.lockutils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:26:28 np0005542249 nova_compute[254900]: 2025-12-02 11:26:28.145 254904 DEBUG oslo_concurrency.lockutils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:26:28 np0005542249 nova_compute[254900]: 2025-12-02 11:26:28.154 254904 DEBUG nova.virt.hardware [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 06:26:28 np0005542249 nova_compute[254900]: 2025-12-02 11:26:28.154 254904 INFO nova.compute.claims [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 06:26:28 np0005542249 nova_compute[254900]: 2025-12-02 11:26:28.244 254904 DEBUG oslo_concurrency.processutils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:26:28 np0005542249 nova_compute[254900]: 2025-12-02 11:26:28.532 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:28 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:26:28 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1830430623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:26:28 np0005542249 nova_compute[254900]: 2025-12-02 11:26:28.685 254904 DEBUG oslo_concurrency.processutils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:26:28 np0005542249 nova_compute[254900]: 2025-12-02 11:26:28.695 254904 DEBUG nova.compute.provider_tree [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:26:28 np0005542249 nova_compute[254900]: 2025-12-02 11:26:28.716 254904 DEBUG nova.scheduler.client.report [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:26:28 np0005542249 nova_compute[254900]: 2025-12-02 11:26:28.752 254904 DEBUG oslo_concurrency.lockutils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:26:28 np0005542249 nova_compute[254900]: 2025-12-02 11:26:28.753 254904 DEBUG nova.compute.manager [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 06:26:28 np0005542249 nova_compute[254900]: 2025-12-02 11:26:28.818 254904 DEBUG nova.compute.manager [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 06:26:28 np0005542249 nova_compute[254900]: 2025-12-02 11:26:28.819 254904 DEBUG nova.network.neutron [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 06:26:28 np0005542249 nova_compute[254900]: 2025-12-02 11:26:28.836 254904 INFO nova.virt.libvirt.driver [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 06:26:28 np0005542249 nova_compute[254900]: 2025-12-02 11:26:28.856 254904 DEBUG nova.compute.manager [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 06:26:28 np0005542249 nova_compute[254900]: 2025-12-02 11:26:28.904 254904 INFO nova.virt.block_device [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Booting with volume 1cb311ee-9fb8-463b-8644-0867085ecaa3 at /dev/vda#033[00m
Dec  2 06:26:29 np0005542249 nova_compute[254900]: 2025-12-02 11:26:29.128 254904 DEBUG os_brick.utils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:26:29 np0005542249 nova_compute[254900]: 2025-12-02 11:26:29.131 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:26:29 np0005542249 nova_compute[254900]: 2025-12-02 11:26:29.149 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:26:29 np0005542249 nova_compute[254900]: 2025-12-02 11:26:29.149 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[78b2444f-933d-4e30-a73a-6fbf101673f2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:29 np0005542249 nova_compute[254900]: 2025-12-02 11:26:29.151 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:26:29 np0005542249 nova_compute[254900]: 2025-12-02 11:26:29.164 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:26:29 np0005542249 nova_compute[254900]: 2025-12-02 11:26:29.165 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[32e5055a-9a62-4fae-aa83-e9acb9a2017c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:29 np0005542249 nova_compute[254900]: 2025-12-02 11:26:29.166 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:26:29 np0005542249 nova_compute[254900]: 2025-12-02 11:26:29.180 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:26:29 np0005542249 nova_compute[254900]: 2025-12-02 11:26:29.181 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[654f4292-4091-4168-929a-e3497e6bfef0]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:29 np0005542249 nova_compute[254900]: 2025-12-02 11:26:29.182 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[565ec1c0-67f0-42f6-b53f-246be1aa4e93]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:29 np0005542249 nova_compute[254900]: 2025-12-02 11:26:29.183 254904 DEBUG oslo_concurrency.processutils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:26:29 np0005542249 nova_compute[254900]: 2025-12-02 11:26:29.215 254904 DEBUG oslo_concurrency.processutils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:26:29 np0005542249 nova_compute[254900]: 2025-12-02 11:26:29.220 254904 DEBUG os_brick.initiator.connectors.lightos [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:26:29 np0005542249 nova_compute[254900]: 2025-12-02 11:26:29.221 254904 DEBUG os_brick.initiator.connectors.lightos [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:26:29 np0005542249 nova_compute[254900]: 2025-12-02 11:26:29.221 254904 DEBUG os_brick.initiator.connectors.lightos [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:26:29 np0005542249 nova_compute[254900]: 2025-12-02 11:26:29.222 254904 DEBUG os_brick.utils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] <== get_connector_properties: return (92ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:26:29 np0005542249 nova_compute[254900]: 2025-12-02 11:26:29.223 254904 DEBUG nova.virt.block_device [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Updating existing volume attachment record: dbab1cb5-f8cd-48b5-aaae-8ddf335acd39 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:26:29 np0005542249 nova_compute[254900]: 2025-12-02 11:26:29.375 254904 DEBUG nova.policy [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6ccb73a613554d938221b4bf46d7ae83', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '625a6939c31646a4a83ea851774cf28c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 06:26:29 np0005542249 nova_compute[254900]: 2025-12-02 11:26:29.913 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:26:29 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4231087376' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:26:30 np0005542249 nova_compute[254900]: 2025-12-02 11:26:30.021 254904 DEBUG nova.network.neutron [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Successfully created port: d3b3d8c3-34ba-4f44-a4ac-615ed48abb50 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 06:26:30 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1433: 321 pgs: 321 active+clean; 88 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 163 KiB/s rd, 42 KiB/s wr, 214 op/s
Dec  2 06:26:30 np0005542249 nova_compute[254900]: 2025-12-02 11:26:30.309 254904 DEBUG nova.compute.manager [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 06:26:30 np0005542249 nova_compute[254900]: 2025-12-02 11:26:30.311 254904 DEBUG nova.virt.libvirt.driver [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 06:26:30 np0005542249 nova_compute[254900]: 2025-12-02 11:26:30.312 254904 INFO nova.virt.libvirt.driver [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Creating image(s)#033[00m
Dec  2 06:26:30 np0005542249 nova_compute[254900]: 2025-12-02 11:26:30.314 254904 DEBUG nova.virt.libvirt.driver [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  2 06:26:30 np0005542249 nova_compute[254900]: 2025-12-02 11:26:30.314 254904 DEBUG nova.virt.libvirt.driver [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Ensure instance console log exists: /var/lib/nova/instances/9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 06:26:30 np0005542249 nova_compute[254900]: 2025-12-02 11:26:30.315 254904 DEBUG oslo_concurrency.lockutils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:26:30 np0005542249 nova_compute[254900]: 2025-12-02 11:26:30.315 254904 DEBUG oslo_concurrency.lockutils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:26:30 np0005542249 nova_compute[254900]: 2025-12-02 11:26:30.315 254904 DEBUG oslo_concurrency.lockutils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:26:30 np0005542249 nova_compute[254900]: 2025-12-02 11:26:30.732 254904 DEBUG nova.network.neutron [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Successfully updated port: d3b3d8c3-34ba-4f44-a4ac-615ed48abb50 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 06:26:30 np0005542249 nova_compute[254900]: 2025-12-02 11:26:30.757 254904 DEBUG oslo_concurrency.lockutils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "refresh_cache-9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:26:30 np0005542249 nova_compute[254900]: 2025-12-02 11:26:30.757 254904 DEBUG oslo_concurrency.lockutils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquired lock "refresh_cache-9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:26:30 np0005542249 nova_compute[254900]: 2025-12-02 11:26:30.758 254904 DEBUG nova.network.neutron [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 06:26:30 np0005542249 nova_compute[254900]: 2025-12-02 11:26:30.866 254904 DEBUG nova.compute.manager [req-08a052db-04a5-4a0c-9f74-cf4dc31a75fb req-df3e1622-218d-4cb8-9ec3-585a7e7481ed 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Received event network-changed-d3b3d8c3-34ba-4f44-a4ac-615ed48abb50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:26:30 np0005542249 nova_compute[254900]: 2025-12-02 11:26:30.867 254904 DEBUG nova.compute.manager [req-08a052db-04a5-4a0c-9f74-cf4dc31a75fb req-df3e1622-218d-4cb8-9ec3-585a7e7481ed 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Refreshing instance network info cache due to event network-changed-d3b3d8c3-34ba-4f44-a4ac-615ed48abb50. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:26:30 np0005542249 nova_compute[254900]: 2025-12-02 11:26:30.867 254904 DEBUG oslo_concurrency.lockutils [req-08a052db-04a5-4a0c-9f74-cf4dc31a75fb req-df3e1622-218d-4cb8-9ec3-585a7e7481ed 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:26:30 np0005542249 nova_compute[254900]: 2025-12-02 11:26:30.940 254904 DEBUG nova.network.neutron [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 06:26:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e331 do_prune osdmap full prune enabled
Dec  2 06:26:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e332 e332: 3 total, 3 up, 3 in
Dec  2 06:26:31 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e332: 3 total, 3 up, 3 in
Dec  2 06:26:31 np0005542249 nova_compute[254900]: 2025-12-02 11:26:31.784 254904 DEBUG nova.network.neutron [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Updating instance_info_cache with network_info: [{"id": "d3b3d8c3-34ba-4f44-a4ac-615ed48abb50", "address": "fa:16:3e:1a:a1:94", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b3d8c3-34", "ovs_interfaceid": "d3b3d8c3-34ba-4f44-a4ac-615ed48abb50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:26:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:26:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3337233864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:26:31 np0005542249 nova_compute[254900]: 2025-12-02 11:26:31.813 254904 DEBUG oslo_concurrency.lockutils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Releasing lock "refresh_cache-9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:26:31 np0005542249 nova_compute[254900]: 2025-12-02 11:26:31.813 254904 DEBUG nova.compute.manager [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Instance network_info: |[{"id": "d3b3d8c3-34ba-4f44-a4ac-615ed48abb50", "address": "fa:16:3e:1a:a1:94", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b3d8c3-34", "ovs_interfaceid": "d3b3d8c3-34ba-4f44-a4ac-615ed48abb50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 06:26:31 np0005542249 nova_compute[254900]: 2025-12-02 11:26:31.814 254904 DEBUG oslo_concurrency.lockutils [req-08a052db-04a5-4a0c-9f74-cf4dc31a75fb req-df3e1622-218d-4cb8-9ec3-585a7e7481ed 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:26:31 np0005542249 nova_compute[254900]: 2025-12-02 11:26:31.814 254904 DEBUG nova.network.neutron [req-08a052db-04a5-4a0c-9f74-cf4dc31a75fb req-df3e1622-218d-4cb8-9ec3-585a7e7481ed 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Refreshing network info cache for port d3b3d8c3-34ba-4f44-a4ac-615ed48abb50 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:26:31 np0005542249 nova_compute[254900]: 2025-12-02 11:26:31.818 254904 DEBUG nova.virt.libvirt.driver [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Start _get_guest_xml network_info=[{"id": "d3b3d8c3-34ba-4f44-a4ac-615ed48abb50", "address": "fa:16:3e:1a:a1:94", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b3d8c3-34", "ovs_interfaceid": "d3b3d8c3-34ba-4f44-a4ac-615ed48abb50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1cb311ee-9fb8-463b-8644-0867085ecaa3', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1cb311ee-9fb8-463b-8644-0867085ecaa3', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d', 'attached_at': '', 'detached_at': '', 'volume_id': '1cb311ee-9fb8-463b-8644-0867085ecaa3', 'serial': '1cb311ee-9fb8-463b-8644-0867085ecaa3'}, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'attachment_id': 'dbab1cb5-f8cd-48b5-aaae-8ddf335acd39', 'delete_on_termination': False, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:26:31 np0005542249 nova_compute[254900]: 2025-12-02 11:26:31.823 254904 WARNING nova.virt.libvirt.driver [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:26:31 np0005542249 nova_compute[254900]: 2025-12-02 11:26:31.828 254904 DEBUG nova.virt.libvirt.host [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:26:31 np0005542249 nova_compute[254900]: 2025-12-02 11:26:31.829 254904 DEBUG nova.virt.libvirt.host [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:26:31 np0005542249 nova_compute[254900]: 2025-12-02 11:26:31.833 254904 DEBUG nova.virt.libvirt.host [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:26:31 np0005542249 nova_compute[254900]: 2025-12-02 11:26:31.834 254904 DEBUG nova.virt.libvirt.host [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:26:31 np0005542249 nova_compute[254900]: 2025-12-02 11:26:31.834 254904 DEBUG nova.virt.libvirt.driver [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:26:31 np0005542249 nova_compute[254900]: 2025-12-02 11:26:31.834 254904 DEBUG nova.virt.hardware [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:26:31 np0005542249 nova_compute[254900]: 2025-12-02 11:26:31.835 254904 DEBUG nova.virt.hardware [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:26:31 np0005542249 nova_compute[254900]: 2025-12-02 11:26:31.836 254904 DEBUG nova.virt.hardware [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:26:31 np0005542249 nova_compute[254900]: 2025-12-02 11:26:31.836 254904 DEBUG nova.virt.hardware [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:26:31 np0005542249 nova_compute[254900]: 2025-12-02 11:26:31.836 254904 DEBUG nova.virt.hardware [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:26:31 np0005542249 nova_compute[254900]: 2025-12-02 11:26:31.837 254904 DEBUG nova.virt.hardware [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:26:31 np0005542249 nova_compute[254900]: 2025-12-02 11:26:31.837 254904 DEBUG nova.virt.hardware [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:26:31 np0005542249 nova_compute[254900]: 2025-12-02 11:26:31.837 254904 DEBUG nova.virt.hardware [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:26:31 np0005542249 nova_compute[254900]: 2025-12-02 11:26:31.838 254904 DEBUG nova.virt.hardware [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:26:31 np0005542249 nova_compute[254900]: 2025-12-02 11:26:31.838 254904 DEBUG nova.virt.hardware [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:26:31 np0005542249 nova_compute[254900]: 2025-12-02 11:26:31.838 254904 DEBUG nova.virt.hardware [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:26:31 np0005542249 nova_compute[254900]: 2025-12-02 11:26:31.868 254904 DEBUG nova.storage.rbd_utils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] rbd image 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:26:31 np0005542249 nova_compute[254900]: 2025-12-02 11:26:31.874 254904 DEBUG oslo_concurrency.processutils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:26:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:26:32 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1435: 321 pgs: 321 active+clean; 88 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 39 KiB/s wr, 154 op/s
Dec  2 06:26:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e332 do_prune osdmap full prune enabled
Dec  2 06:26:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e333 e333: 3 total, 3 up, 3 in
Dec  2 06:26:32 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e333: 3 total, 3 up, 3 in
Dec  2 06:26:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:26:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2937211845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.427 254904 DEBUG oslo_concurrency.processutils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.587 254904 DEBUG os_brick.encryptors [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Using volume encryption metadata '{'encryption_key_id': '09fb68b6-ab86-4b17-a751-d7c65efd94be', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1cb311ee-9fb8-463b-8644-0867085ecaa3', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1cb311ee-9fb8-463b-8644-0867085ecaa3', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d', 'attached_at': '', 'detached_at': '', 'volume_id': '1cb311ee-9fb8-463b-8644-0867085ecaa3', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.589 254904 DEBUG barbicanclient.client [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.603 254904 DEBUG barbicanclient.v1.secrets [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/09fb68b6-ab86-4b17-a751-d7c65efd94be get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.604 254904 INFO barbicanclient.base [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Calculated Secrets uuid ref: secrets/09fb68b6-ab86-4b17-a751-d7c65efd94be#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.628 254904 DEBUG barbicanclient.client [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.629 254904 INFO barbicanclient.base [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Calculated Secrets uuid ref: secrets/09fb68b6-ab86-4b17-a751-d7c65efd94be#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.652 254904 DEBUG barbicanclient.client [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.653 254904 INFO barbicanclient.base [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Calculated Secrets uuid ref: secrets/09fb68b6-ab86-4b17-a751-d7c65efd94be#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.674 254904 DEBUG barbicanclient.client [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.675 254904 INFO barbicanclient.base [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Calculated Secrets uuid ref: secrets/09fb68b6-ab86-4b17-a751-d7c65efd94be#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.704 254904 DEBUG barbicanclient.client [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.705 254904 INFO barbicanclient.base [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Calculated Secrets uuid ref: secrets/09fb68b6-ab86-4b17-a751-d7c65efd94be#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.736 254904 DEBUG barbicanclient.client [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.737 254904 INFO barbicanclient.base [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Calculated Secrets uuid ref: secrets/09fb68b6-ab86-4b17-a751-d7c65efd94be#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.769 254904 DEBUG barbicanclient.client [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.770 254904 INFO barbicanclient.base [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Calculated Secrets uuid ref: secrets/09fb68b6-ab86-4b17-a751-d7c65efd94be#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.809 254904 DEBUG barbicanclient.client [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.809 254904 INFO barbicanclient.base [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Calculated Secrets uuid ref: secrets/09fb68b6-ab86-4b17-a751-d7c65efd94be#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.837 254904 DEBUG barbicanclient.client [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.838 254904 INFO barbicanclient.base [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Calculated Secrets uuid ref: secrets/09fb68b6-ab86-4b17-a751-d7c65efd94be#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.865 254904 DEBUG barbicanclient.client [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.865 254904 INFO barbicanclient.base [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Calculated Secrets uuid ref: secrets/09fb68b6-ab86-4b17-a751-d7c65efd94be#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.888 254904 DEBUG barbicanclient.client [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.889 254904 INFO barbicanclient.base [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Calculated Secrets uuid ref: secrets/09fb68b6-ab86-4b17-a751-d7c65efd94be#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.920 254904 DEBUG barbicanclient.client [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.921 254904 INFO barbicanclient.base [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Calculated Secrets uuid ref: secrets/09fb68b6-ab86-4b17-a751-d7c65efd94be#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.971 254904 DEBUG barbicanclient.client [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:26:32 np0005542249 nova_compute[254900]: 2025-12-02 11:26:32.972 254904 INFO barbicanclient.base [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Calculated Secrets uuid ref: secrets/09fb68b6-ab86-4b17-a751-d7c65efd94be#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.013 254904 DEBUG barbicanclient.client [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.014 254904 INFO barbicanclient.base [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Calculated Secrets uuid ref: secrets/09fb68b6-ab86-4b17-a751-d7c65efd94be#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.048 254904 DEBUG barbicanclient.client [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.049 254904 INFO barbicanclient.base [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Calculated Secrets uuid ref: secrets/09fb68b6-ab86-4b17-a751-d7c65efd94be#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.073 254904 DEBUG barbicanclient.client [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.074 254904 DEBUG nova.virt.libvirt.host [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Secret XML: <secret ephemeral="no" private="no">
Dec  2 06:26:33 np0005542249 nova_compute[254900]:  <usage type="volume">
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <volume>1cb311ee-9fb8-463b-8644-0867085ecaa3</volume>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:  </usage>
Dec  2 06:26:33 np0005542249 nova_compute[254900]: </secret>
Dec  2 06:26:33 np0005542249 nova_compute[254900]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.106 254904 DEBUG nova.virt.libvirt.vif [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-769951804',display_name='tempest-TestVolumeBootPattern-server-769951804',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-769951804',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='625a6939c31646a4a83ea851774cf28c',ramdisk_id='',reservation_id='r-6ff3sxet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1396850361',owner_user_name='tempest-TestVolumeBootPattern-1396850361-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:26:28Z,user_data=None,user_id='
6ccb73a613554d938221b4bf46d7ae83',uuid=9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3b3d8c3-34ba-4f44-a4ac-615ed48abb50", "address": "fa:16:3e:1a:a1:94", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b3d8c3-34", "ovs_interfaceid": "d3b3d8c3-34ba-4f44-a4ac-615ed48abb50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.106 254904 DEBUG nova.network.os_vif_util [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converting VIF {"id": "d3b3d8c3-34ba-4f44-a4ac-615ed48abb50", "address": "fa:16:3e:1a:a1:94", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b3d8c3-34", "ovs_interfaceid": "d3b3d8c3-34ba-4f44-a4ac-615ed48abb50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.108 254904 DEBUG nova.network.os_vif_util [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:a1:94,bridge_name='br-int',has_traffic_filtering=True,id=d3b3d8c3-34ba-4f44-a4ac-615ed48abb50,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b3d8c3-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.109 254904 DEBUG nova.objects.instance [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lazy-loading 'pci_devices' on Instance uuid 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.128 254904 DEBUG nova.virt.libvirt.driver [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:26:33 np0005542249 nova_compute[254900]:  <uuid>9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d</uuid>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:  <name>instance-00000010</name>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <nova:name>tempest-TestVolumeBootPattern-server-769951804</nova:name>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:26:31</nova:creationTime>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:26:33 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:        <nova:user uuid="6ccb73a613554d938221b4bf46d7ae83">tempest-TestVolumeBootPattern-1396850361-project-member</nova:user>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:        <nova:project uuid="625a6939c31646a4a83ea851774cf28c">tempest-TestVolumeBootPattern-1396850361</nova:project>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:        <nova:port uuid="d3b3d8c3-34ba-4f44-a4ac-615ed48abb50">
Dec  2 06:26:33 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <entry name="serial">9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d</entry>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <entry name="uuid">9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d</entry>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d_disk.config">
Dec  2 06:26:33 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:26:33 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="volumes/volume-1cb311ee-9fb8-463b-8644-0867085ecaa3">
Dec  2 06:26:33 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:26:33 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <serial>1cb311ee-9fb8-463b-8644-0867085ecaa3</serial>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <encryption format="luks">
Dec  2 06:26:33 np0005542249 nova_compute[254900]:        <secret type="passphrase" uuid="a2c1af53-a589-4121-906b-035e7edb67d0"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      </encryption>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:1a:a1:94"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <target dev="tapd3b3d8c3-34"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d/console.log" append="off"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:26:33 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:26:33 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:26:33 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:26:33 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.130 254904 DEBUG nova.compute.manager [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Preparing to wait for external event network-vif-plugged-d3b3d8c3-34ba-4f44-a4ac-615ed48abb50 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.130 254904 DEBUG oslo_concurrency.lockutils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.131 254904 DEBUG oslo_concurrency.lockutils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.131 254904 DEBUG oslo_concurrency.lockutils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.132 254904 DEBUG nova.virt.libvirt.vif [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-769951804',display_name='tempest-TestVolumeBootPattern-server-769951804',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-769951804',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='625a6939c31646a4a83ea851774cf28c',ramdisk_id='',reservation_id='r-6ff3sxet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1396850361',owner_user_name='tempest-TestVolumeBootPattern-1396850361-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:26:28Z,user_data=None
,user_id='6ccb73a613554d938221b4bf46d7ae83',uuid=9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3b3d8c3-34ba-4f44-a4ac-615ed48abb50", "address": "fa:16:3e:1a:a1:94", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b3d8c3-34", "ovs_interfaceid": "d3b3d8c3-34ba-4f44-a4ac-615ed48abb50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.132 254904 DEBUG nova.network.os_vif_util [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converting VIF {"id": "d3b3d8c3-34ba-4f44-a4ac-615ed48abb50", "address": "fa:16:3e:1a:a1:94", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b3d8c3-34", "ovs_interfaceid": "d3b3d8c3-34ba-4f44-a4ac-615ed48abb50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.133 254904 DEBUG nova.network.os_vif_util [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:a1:94,bridge_name='br-int',has_traffic_filtering=True,id=d3b3d8c3-34ba-4f44-a4ac-615ed48abb50,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b3d8c3-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.134 254904 DEBUG os_vif [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:a1:94,bridge_name='br-int',has_traffic_filtering=True,id=d3b3d8c3-34ba-4f44-a4ac-615ed48abb50,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b3d8c3-34') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.135 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.135 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.136 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.139 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.139 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3b3d8c3-34, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.140 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd3b3d8c3-34, col_values=(('external_ids', {'iface-id': 'd3b3d8c3-34ba-4f44-a4ac-615ed48abb50', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:a1:94', 'vm-uuid': '9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.142 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:33 np0005542249 NetworkManager[48987]: <info>  [1764674793.1437] manager: (tapd3b3d8c3-34): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.145 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.149 254904 DEBUG nova.network.neutron [req-08a052db-04a5-4a0c-9f74-cf4dc31a75fb req-df3e1622-218d-4cb8-9ec3-585a7e7481ed 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Updated VIF entry in instance network info cache for port d3b3d8c3-34ba-4f44-a4ac-615ed48abb50. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.150 254904 DEBUG nova.network.neutron [req-08a052db-04a5-4a0c-9f74-cf4dc31a75fb req-df3e1622-218d-4cb8-9ec3-585a7e7481ed 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Updating instance_info_cache with network_info: [{"id": "d3b3d8c3-34ba-4f44-a4ac-615ed48abb50", "address": "fa:16:3e:1a:a1:94", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b3d8c3-34", "ovs_interfaceid": "d3b3d8c3-34ba-4f44-a4ac-615ed48abb50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.152 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.153 254904 INFO os_vif [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:a1:94,bridge_name='br-int',has_traffic_filtering=True,id=d3b3d8c3-34ba-4f44-a4ac-615ed48abb50,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b3d8c3-34')#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.165 254904 DEBUG oslo_concurrency.lockutils [req-08a052db-04a5-4a0c-9f74-cf4dc31a75fb req-df3e1622-218d-4cb8-9ec3-585a7e7481ed 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.216 254904 DEBUG nova.virt.libvirt.driver [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.216 254904 DEBUG nova.virt.libvirt.driver [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.216 254904 DEBUG nova.virt.libvirt.driver [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] No VIF found with MAC fa:16:3e:1a:a1:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.217 254904 INFO nova.virt.libvirt.driver [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Using config drive#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.235 254904 DEBUG nova.storage.rbd_utils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] rbd image 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:26:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e333 do_prune osdmap full prune enabled
Dec  2 06:26:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e334 e334: 3 total, 3 up, 3 in
Dec  2 06:26:33 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e334: 3 total, 3 up, 3 in
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.535 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.716 254904 INFO nova.virt.libvirt.driver [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Creating config drive at /var/lib/nova/instances/9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d/disk.config#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.721 254904 DEBUG oslo_concurrency.processutils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3ad8ksyy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.868 254904 DEBUG oslo_concurrency.processutils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3ad8ksyy" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.887 254904 DEBUG nova.storage.rbd_utils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] rbd image 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:26:33 np0005542249 nova_compute[254900]: 2025-12-02 11:26:33.890 254904 DEBUG oslo_concurrency.processutils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d/disk.config 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:26:34 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1438: 321 pgs: 321 active+clean; 88 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 5.7 KiB/s wr, 124 op/s
Dec  2 06:26:34 np0005542249 nova_compute[254900]: 2025-12-02 11:26:34.485 254904 DEBUG oslo_concurrency.processutils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d/disk.config 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:26:34 np0005542249 nova_compute[254900]: 2025-12-02 11:26:34.486 254904 INFO nova.virt.libvirt.driver [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Deleting local config drive /var/lib/nova/instances/9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d/disk.config because it was imported into RBD.#033[00m
Dec  2 06:26:34 np0005542249 kernel: tapd3b3d8c3-34: entered promiscuous mode
Dec  2 06:26:34 np0005542249 NetworkManager[48987]: <info>  [1764674794.5514] manager: (tapd3b3d8c3-34): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Dec  2 06:26:34 np0005542249 ovn_controller[153849]: 2025-12-02T11:26:34Z|00161|binding|INFO|Claiming lport d3b3d8c3-34ba-4f44-a4ac-615ed48abb50 for this chassis.
Dec  2 06:26:34 np0005542249 ovn_controller[153849]: 2025-12-02T11:26:34Z|00162|binding|INFO|d3b3d8c3-34ba-4f44-a4ac-615ed48abb50: Claiming fa:16:3e:1a:a1:94 10.100.0.7
Dec  2 06:26:34 np0005542249 nova_compute[254900]: 2025-12-02 11:26:34.551 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:34 np0005542249 nova_compute[254900]: 2025-12-02 11:26:34.556 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.569 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:a1:94 10.100.0.7'], port_security=['fa:16:3e:1a:a1:94 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '625a6939c31646a4a83ea851774cf28c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7d24f0af-82a6-4c34-a172-39a54c318d33', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cc823ef0-3e69-4062-a488-a82f483f10bb, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=d3b3d8c3-34ba-4f44-a4ac-615ed48abb50) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.570 163757 INFO neutron.agent.ovn.metadata.agent [-] Port d3b3d8c3-34ba-4f44-a4ac-615ed48abb50 in datapath acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 bound to our chassis#033[00m
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.571 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754#033[00m
Dec  2 06:26:34 np0005542249 systemd-udevd[282718]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:26:34 np0005542249 systemd-machined[216222]: New machine qemu-16-instance-00000010.
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.592 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[b859c823-3e5b-4e64-8db8-055680c13859]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.594 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapacfaa8ac-01 in ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.597 262398 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapacfaa8ac-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.597 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[3ea1c781-bb94-41c2-9703-4758e0193e5e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.598 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[47023684-72d1-4db5-aebc-8180c7fb7bfe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:34 np0005542249 NetworkManager[48987]: <info>  [1764674794.6024] device (tapd3b3d8c3-34): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:26:34 np0005542249 NetworkManager[48987]: <info>  [1764674794.6044] device (tapd3b3d8c3-34): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.611 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[70c5f5b3-10d7-4f78-847b-80816bc638aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:34 np0005542249 systemd[1]: Started Virtual Machine qemu-16-instance-00000010.
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.631 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[29a5691d-c40c-4a5b-9fa9-4775a3b7faff]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:34 np0005542249 nova_compute[254900]: 2025-12-02 11:26:34.652 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:34 np0005542249 ovn_controller[153849]: 2025-12-02T11:26:34Z|00163|binding|INFO|Setting lport d3b3d8c3-34ba-4f44-a4ac-615ed48abb50 ovn-installed in OVS
Dec  2 06:26:34 np0005542249 ovn_controller[153849]: 2025-12-02T11:26:34Z|00164|binding|INFO|Setting lport d3b3d8c3-34ba-4f44-a4ac-615ed48abb50 up in Southbound
Dec  2 06:26:34 np0005542249 nova_compute[254900]: 2025-12-02 11:26:34.659 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.670 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[2c57ae7c-bc2d-4b6f-89d4-13ea32b1169b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:34 np0005542249 systemd-udevd[282722]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:26:34 np0005542249 NetworkManager[48987]: <info>  [1764674794.6777] manager: (tapacfaa8ac-00): new Veth device (/org/freedesktop/NetworkManager/Devices/90)
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.676 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[6eeef225-d549-4205-8b97-728249cd6bfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.717 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[7e0a8d4e-c7d7-450c-aa84-0240c96d6280]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.720 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[0b0df095-4af4-4f99-8068-4a22ee70586c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:34 np0005542249 NetworkManager[48987]: <info>  [1764674794.7459] device (tapacfaa8ac-00): carrier: link connected
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.753 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[c653cb82-290f-49f1-adaa-fca9b90e08b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.777 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[65909f20-2672-4d21-b0bd-9141c1d06695]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacfaa8ac-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:73:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498756, 'reachable_time': 23688, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282751, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.797 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[1108d5cc-4a9a-4513-9268-03a4677cf7f1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fece:73a3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 498756, 'tstamp': 498756}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282752, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.816 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[d2065771-4c77-49de-ac40-08b536699a8a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacfaa8ac-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:73:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498756, 'reachable_time': 23688, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282753, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.855 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[45e7311c-6f9a-416a-9804-5465bd09e0dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:34 np0005542249 nova_compute[254900]: 2025-12-02 11:26:34.870 254904 DEBUG nova.compute.manager [req-94024588-58fb-4d83-a5a2-0afafce8b44a req-517f429b-b7d5-41b8-9afd-f88fb9f558e7 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Received event network-vif-plugged-d3b3d8c3-34ba-4f44-a4ac-615ed48abb50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:26:34 np0005542249 nova_compute[254900]: 2025-12-02 11:26:34.870 254904 DEBUG oslo_concurrency.lockutils [req-94024588-58fb-4d83-a5a2-0afafce8b44a req-517f429b-b7d5-41b8-9afd-f88fb9f558e7 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:26:34 np0005542249 nova_compute[254900]: 2025-12-02 11:26:34.870 254904 DEBUG oslo_concurrency.lockutils [req-94024588-58fb-4d83-a5a2-0afafce8b44a req-517f429b-b7d5-41b8-9afd-f88fb9f558e7 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:26:34 np0005542249 nova_compute[254900]: 2025-12-02 11:26:34.871 254904 DEBUG oslo_concurrency.lockutils [req-94024588-58fb-4d83-a5a2-0afafce8b44a req-517f429b-b7d5-41b8-9afd-f88fb9f558e7 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:26:34 np0005542249 nova_compute[254900]: 2025-12-02 11:26:34.871 254904 DEBUG nova.compute.manager [req-94024588-58fb-4d83-a5a2-0afafce8b44a req-517f429b-b7d5-41b8-9afd-f88fb9f558e7 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Processing event network-vif-plugged-d3b3d8c3-34ba-4f44-a4ac-615ed48abb50 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.938 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[b8178112-88fb-4a9f-aa78-af17359a0198]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.940 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacfaa8ac-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.940 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.941 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapacfaa8ac-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:26:34 np0005542249 NetworkManager[48987]: <info>  [1764674794.9444] manager: (tapacfaa8ac-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Dec  2 06:26:34 np0005542249 nova_compute[254900]: 2025-12-02 11:26:34.943 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:34 np0005542249 kernel: tapacfaa8ac-00: entered promiscuous mode
Dec  2 06:26:34 np0005542249 nova_compute[254900]: 2025-12-02 11:26:34.946 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.947 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapacfaa8ac-00, col_values=(('external_ids', {'iface-id': '1636ad30-406d-4138-823e-abbe7f4d87ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:26:34 np0005542249 nova_compute[254900]: 2025-12-02 11:26:34.948 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:34 np0005542249 ovn_controller[153849]: 2025-12-02T11:26:34Z|00165|binding|INFO|Releasing lport 1636ad30-406d-4138-823e-abbe7f4d87ac from this chassis (sb_readonly=0)
Dec  2 06:26:34 np0005542249 nova_compute[254900]: 2025-12-02 11:26:34.949 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.950 163757 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.951 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[0304fad8-5f08-4fcb-af03-0ea7cfa2b3cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.954 163757 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: global
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]:    log         /dev/log local0 debug
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]:    log-tag     haproxy-metadata-proxy-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]:    user        root
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]:    group       root
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]:    maxconn     1024
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]:    pidfile     /var/lib/neutron/external/pids/acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754.pid.haproxy
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]:    daemon
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: defaults
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]:    log global
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]:    mode http
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]:    option httplog
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]:    option dontlognull
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]:    option http-server-close
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]:    option forwardfor
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]:    retries                 3
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]:    timeout http-request    30s
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]:    timeout connect         30s
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]:    timeout client          32s
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]:    timeout server          32s
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]:    timeout http-keep-alive 30s
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: listen listener
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]:    bind 169.254.169.254:80
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]:    http-request add-header X-OVN-Network-ID acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 06:26:34 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:34.955 163757 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'env', 'PROCESS_TAG=haproxy-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 06:26:34 np0005542249 nova_compute[254900]: 2025-12-02 11:26:34.970 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:26:35 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2823423868' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:26:35 np0005542249 podman[282921]: 2025-12-02 11:26:35.406128405 +0000 UTC m=+0.082367776 container create 51e79eab0f2611b99ca7ba8cdfd4cc2cf4d9864080757986308a0fb10e63b506 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  2 06:26:35 np0005542249 systemd[1]: Started libpod-conmon-51e79eab0f2611b99ca7ba8cdfd4cc2cf4d9864080757986308a0fb10e63b506.scope.
Dec  2 06:26:35 np0005542249 podman[282921]: 2025-12-02 11:26:35.371706015 +0000 UTC m=+0.047945436 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:26:35 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:26:35 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b20c80f09c2e8d7e31534f5b47057da7c2f323b46faff2fc3f691039c1d53f9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 06:26:35 np0005542249 podman[282921]: 2025-12-02 11:26:35.503608279 +0000 UTC m=+0.179847670 container init 51e79eab0f2611b99ca7ba8cdfd4cc2cf4d9864080757986308a0fb10e63b506 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  2 06:26:35 np0005542249 podman[282921]: 2025-12-02 11:26:35.512187521 +0000 UTC m=+0.188426892 container start 51e79eab0f2611b99ca7ba8cdfd4cc2cf4d9864080757986308a0fb10e63b506 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  2 06:26:35 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[282943]: [NOTICE]   (282955) : New worker (282957) forked
Dec  2 06:26:35 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[282943]: [NOTICE]   (282955) : Loading success.
Dec  2 06:26:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:26:35 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:26:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:26:35 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:26:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:26:35 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:26:35 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 6ab9ee1b-7b4f-4a29-9ff5-37ab711865f4 does not exist
Dec  2 06:26:35 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 1f00f7b9-1445-446a-9e55-2b1b32ffc750 does not exist
Dec  2 06:26:35 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 62c43e65-e093-49f5-b564-d982a796e207 does not exist
Dec  2 06:26:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:26:35 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:26:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:26:35 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:26:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:26:35 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:26:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:26:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:26:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:26:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:26:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  2 06:26:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:26:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00035159302353975384 of space, bias 1.0, pg target 0.10547790706192615 quantized to 32 (current 32)
Dec  2 06:26:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:26:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Dec  2 06:26:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:26:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006663034365435958 of space, bias 1.0, pg target 0.19989103096307873 quantized to 32 (current 32)
Dec  2 06:26:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:26:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:26:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:26:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:26:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:26:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:26:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:26:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:26:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:26:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:26:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:26:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:26:36 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1439: 321 pgs: 321 active+clean; 88 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 7.0 KiB/s wr, 113 op/s
Dec  2 06:26:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e334 do_prune osdmap full prune enabled
Dec  2 06:26:36 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:26:36 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:26:36 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:26:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e335 e335: 3 total, 3 up, 3 in
Dec  2 06:26:36 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e335: 3 total, 3 up, 3 in
Dec  2 06:26:36 np0005542249 podman[283124]: 2025-12-02 11:26:36.713996876 +0000 UTC m=+0.067357641 container create bea889d956b01db697359263fc9b57b650ad739a2e9f9c7046ef83f7cb74ac35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  2 06:26:36 np0005542249 systemd[1]: Started libpod-conmon-bea889d956b01db697359263fc9b57b650ad739a2e9f9c7046ef83f7cb74ac35.scope.
Dec  2 06:26:36 np0005542249 podman[283124]: 2025-12-02 11:26:36.683852891 +0000 UTC m=+0.037213746 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:26:36 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:26:36 np0005542249 podman[283124]: 2025-12-02 11:26:36.826333571 +0000 UTC m=+0.179694356 container init bea889d956b01db697359263fc9b57b650ad739a2e9f9c7046ef83f7cb74ac35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ptolemy, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 06:26:36 np0005542249 podman[283124]: 2025-12-02 11:26:36.836838315 +0000 UTC m=+0.190199080 container start bea889d956b01db697359263fc9b57b650ad739a2e9f9c7046ef83f7cb74ac35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ptolemy, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  2 06:26:36 np0005542249 podman[283124]: 2025-12-02 11:26:36.840213997 +0000 UTC m=+0.193574852 container attach bea889d956b01db697359263fc9b57b650ad739a2e9f9c7046ef83f7cb74ac35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:26:36 np0005542249 systemd[1]: libpod-bea889d956b01db697359263fc9b57b650ad739a2e9f9c7046ef83f7cb74ac35.scope: Deactivated successfully.
Dec  2 06:26:36 np0005542249 confident_ptolemy[283141]: 167 167
Dec  2 06:26:36 np0005542249 conmon[283141]: conmon bea889d956b01db69735 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bea889d956b01db697359263fc9b57b650ad739a2e9f9c7046ef83f7cb74ac35.scope/container/memory.events
Dec  2 06:26:36 np0005542249 podman[283146]: 2025-12-02 11:26:36.904609967 +0000 UTC m=+0.038243945 container died bea889d956b01db697359263fc9b57b650ad739a2e9f9c7046ef83f7cb74ac35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ptolemy, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  2 06:26:36 np0005542249 systemd[1]: var-lib-containers-storage-overlay-ab640b2b453c6acdcf6b9b99ddb32ed09d9ec1552b145ebcf5042e92db0bf293-merged.mount: Deactivated successfully.
Dec  2 06:26:36 np0005542249 podman[283146]: 2025-12-02 11:26:36.953756174 +0000 UTC m=+0.087390082 container remove bea889d956b01db697359263fc9b57b650ad739a2e9f9c7046ef83f7cb74ac35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ptolemy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  2 06:26:36 np0005542249 nova_compute[254900]: 2025-12-02 11:26:36.956 254904 DEBUG nova.compute.manager [req-008477d3-c7fb-47ab-abfd-1d6180af4bed req-21e636aa-27b1-4c27-b512-86f362d9efd4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Received event network-vif-plugged-d3b3d8c3-34ba-4f44-a4ac-615ed48abb50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:26:36 np0005542249 nova_compute[254900]: 2025-12-02 11:26:36.957 254904 DEBUG oslo_concurrency.lockutils [req-008477d3-c7fb-47ab-abfd-1d6180af4bed req-21e636aa-27b1-4c27-b512-86f362d9efd4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:26:36 np0005542249 nova_compute[254900]: 2025-12-02 11:26:36.957 254904 DEBUG oslo_concurrency.lockutils [req-008477d3-c7fb-47ab-abfd-1d6180af4bed req-21e636aa-27b1-4c27-b512-86f362d9efd4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:26:36 np0005542249 nova_compute[254900]: 2025-12-02 11:26:36.957 254904 DEBUG oslo_concurrency.lockutils [req-008477d3-c7fb-47ab-abfd-1d6180af4bed req-21e636aa-27b1-4c27-b512-86f362d9efd4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:26:36 np0005542249 nova_compute[254900]: 2025-12-02 11:26:36.958 254904 DEBUG nova.compute.manager [req-008477d3-c7fb-47ab-abfd-1d6180af4bed req-21e636aa-27b1-4c27-b512-86f362d9efd4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] No waiting events found dispatching network-vif-plugged-d3b3d8c3-34ba-4f44-a4ac-615ed48abb50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:26:36 np0005542249 nova_compute[254900]: 2025-12-02 11:26:36.958 254904 WARNING nova.compute.manager [req-008477d3-c7fb-47ab-abfd-1d6180af4bed req-21e636aa-27b1-4c27-b512-86f362d9efd4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Received unexpected event network-vif-plugged-d3b3d8c3-34ba-4f44-a4ac-615ed48abb50 for instance with vm_state building and task_state spawning.#033[00m
Dec  2 06:26:36 np0005542249 systemd[1]: libpod-conmon-bea889d956b01db697359263fc9b57b650ad739a2e9f9c7046ef83f7cb74ac35.scope: Deactivated successfully.
Dec  2 06:26:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:26:37 np0005542249 podman[283169]: 2025-12-02 11:26:37.172766352 +0000 UTC m=+0.047872534 container create 438debd2284425ebdb53082da9ef716dc78f8d94ba64fb086774e56546cfe334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_johnson, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  2 06:26:37 np0005542249 systemd[1]: Started libpod-conmon-438debd2284425ebdb53082da9ef716dc78f8d94ba64fb086774e56546cfe334.scope.
Dec  2 06:26:37 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:26:37 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3418420221a10fdfa27d1db9982e8c574e0cdea417f167f97da58503ee12d26c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:26:37 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3418420221a10fdfa27d1db9982e8c574e0cdea417f167f97da58503ee12d26c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:26:37 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3418420221a10fdfa27d1db9982e8c574e0cdea417f167f97da58503ee12d26c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:26:37 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3418420221a10fdfa27d1db9982e8c574e0cdea417f167f97da58503ee12d26c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:26:37 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3418420221a10fdfa27d1db9982e8c574e0cdea417f167f97da58503ee12d26c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:26:37 np0005542249 podman[283169]: 2025-12-02 11:26:37.151420826 +0000 UTC m=+0.026527028 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:26:37 np0005542249 podman[283169]: 2025-12-02 11:26:37.2925836 +0000 UTC m=+0.167689862 container init 438debd2284425ebdb53082da9ef716dc78f8d94ba64fb086774e56546cfe334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Dec  2 06:26:37 np0005542249 podman[283169]: 2025-12-02 11:26:37.3036633 +0000 UTC m=+0.178769522 container start 438debd2284425ebdb53082da9ef716dc78f8d94ba64fb086774e56546cfe334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.313 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674797.3132784, 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.316 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] VM Started (Lifecycle Event)#033[00m
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.318 254904 DEBUG nova.compute.manager [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 06:26:37 np0005542249 podman[283169]: 2025-12-02 11:26:37.324500663 +0000 UTC m=+0.199606875 container attach 438debd2284425ebdb53082da9ef716dc78f8d94ba64fb086774e56546cfe334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_johnson, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.336 254904 DEBUG nova.virt.libvirt.driver [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.343 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.344 254904 INFO nova.virt.libvirt.driver [-] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Instance spawned successfully.#033[00m
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.344 254904 DEBUG nova.virt.libvirt.driver [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.348 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.379 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.380 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674797.313585, 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.380 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] VM Paused (Lifecycle Event)#033[00m
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.386 254904 DEBUG nova.virt.libvirt.driver [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.388 254904 DEBUG nova.virt.libvirt.driver [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.388 254904 DEBUG nova.virt.libvirt.driver [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.389 254904 DEBUG nova.virt.libvirt.driver [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.389 254904 DEBUG nova.virt.libvirt.driver [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.390 254904 DEBUG nova.virt.libvirt.driver [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.427 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.436 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674797.3214447, 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.437 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] VM Resumed (Lifecycle Event)#033[00m
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.463 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.470 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.477 254904 INFO nova.compute.manager [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Took 7.17 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.478 254904 DEBUG nova.compute.manager [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.490 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.536 254904 INFO nova.compute.manager [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Took 9.43 seconds to build instance.#033[00m
Dec  2 06:26:37 np0005542249 nova_compute[254900]: 2025-12-02 11:26:37.557 254904 DEBUG oslo_concurrency.lockutils [None req-87d77d15-1f1f-4476-a2be-1cc125363b3d 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:26:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e335 do_prune osdmap full prune enabled
Dec  2 06:26:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e336 e336: 3 total, 3 up, 3 in
Dec  2 06:26:37 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e336: 3 total, 3 up, 3 in
Dec  2 06:26:38 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1442: 321 pgs: 321 active+clean; 88 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 32 KiB/s wr, 131 op/s
Dec  2 06:26:38 np0005542249 nova_compute[254900]: 2025-12-02 11:26:38.143 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:38 np0005542249 upbeat_johnson[283187]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:26:38 np0005542249 upbeat_johnson[283187]: --> relative data size: 1.0
Dec  2 06:26:38 np0005542249 upbeat_johnson[283187]: --> All data devices are unavailable
Dec  2 06:26:38 np0005542249 systemd[1]: libpod-438debd2284425ebdb53082da9ef716dc78f8d94ba64fb086774e56546cfe334.scope: Deactivated successfully.
Dec  2 06:26:38 np0005542249 systemd[1]: libpod-438debd2284425ebdb53082da9ef716dc78f8d94ba64fb086774e56546cfe334.scope: Consumed 1.119s CPU time.
Dec  2 06:26:38 np0005542249 podman[283169]: 2025-12-02 11:26:38.487689763 +0000 UTC m=+1.362795965 container died 438debd2284425ebdb53082da9ef716dc78f8d94ba64fb086774e56546cfe334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_johnson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  2 06:26:38 np0005542249 systemd[1]: var-lib-containers-storage-overlay-3418420221a10fdfa27d1db9982e8c574e0cdea417f167f97da58503ee12d26c-merged.mount: Deactivated successfully.
Dec  2 06:26:38 np0005542249 nova_compute[254900]: 2025-12-02 11:26:38.538 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:38 np0005542249 podman[283169]: 2025-12-02 11:26:38.587735647 +0000 UTC m=+1.462841829 container remove 438debd2284425ebdb53082da9ef716dc78f8d94ba64fb086774e56546cfe334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  2 06:26:38 np0005542249 systemd[1]: libpod-conmon-438debd2284425ebdb53082da9ef716dc78f8d94ba64fb086774e56546cfe334.scope: Deactivated successfully.
Dec  2 06:26:39 np0005542249 podman[283373]: 2025-12-02 11:26:39.450282415 +0000 UTC m=+0.062336816 container create 6920930a15bf640367697d2f28b9a29581a80921ab37741165192b125dd819c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swanson, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:26:39 np0005542249 systemd[1]: Started libpod-conmon-6920930a15bf640367697d2f28b9a29581a80921ab37741165192b125dd819c5.scope.
Dec  2 06:26:39 np0005542249 podman[283373]: 2025-12-02 11:26:39.423547942 +0000 UTC m=+0.035602383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:26:39 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:26:39 np0005542249 podman[283373]: 2025-12-02 11:26:39.551266993 +0000 UTC m=+0.163321474 container init 6920930a15bf640367697d2f28b9a29581a80921ab37741165192b125dd819c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  2 06:26:39 np0005542249 podman[283373]: 2025-12-02 11:26:39.558274993 +0000 UTC m=+0.170329384 container start 6920930a15bf640367697d2f28b9a29581a80921ab37741165192b125dd819c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  2 06:26:39 np0005542249 inspiring_swanson[283390]: 167 167
Dec  2 06:26:39 np0005542249 systemd[1]: libpod-6920930a15bf640367697d2f28b9a29581a80921ab37741165192b125dd819c5.scope: Deactivated successfully.
Dec  2 06:26:39 np0005542249 nova_compute[254900]: 2025-12-02 11:26:39.598 254904 DEBUG oslo_concurrency.lockutils [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:26:39 np0005542249 nova_compute[254900]: 2025-12-02 11:26:39.600 254904 DEBUG oslo_concurrency.lockutils [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:26:39 np0005542249 nova_compute[254900]: 2025-12-02 11:26:39.601 254904 DEBUG oslo_concurrency.lockutils [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:26:39 np0005542249 nova_compute[254900]: 2025-12-02 11:26:39.601 254904 DEBUG oslo_concurrency.lockutils [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:26:39 np0005542249 nova_compute[254900]: 2025-12-02 11:26:39.602 254904 DEBUG oslo_concurrency.lockutils [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:26:39 np0005542249 nova_compute[254900]: 2025-12-02 11:26:39.603 254904 INFO nova.compute.manager [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Terminating instance#033[00m
Dec  2 06:26:39 np0005542249 nova_compute[254900]: 2025-12-02 11:26:39.604 254904 DEBUG nova.compute.manager [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:26:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e336 do_prune osdmap full prune enabled
Dec  2 06:26:39 np0005542249 podman[283373]: 2025-12-02 11:26:39.870267164 +0000 UTC m=+0.482321645 container attach 6920930a15bf640367697d2f28b9a29581a80921ab37741165192b125dd819c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  2 06:26:39 np0005542249 podman[283373]: 2025-12-02 11:26:39.87122514 +0000 UTC m=+0.483279601 container died 6920930a15bf640367697d2f28b9a29581a80921ab37741165192b125dd819c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  2 06:26:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e337 e337: 3 total, 3 up, 3 in
Dec  2 06:26:39 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e337: 3 total, 3 up, 3 in
Dec  2 06:26:39 np0005542249 kernel: tapd3b3d8c3-34 (unregistering): left promiscuous mode
Dec  2 06:26:39 np0005542249 NetworkManager[48987]: <info>  [1764674799.9886] device (tapd3b3d8c3-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:26:40 np0005542249 ovn_controller[153849]: 2025-12-02T11:26:40Z|00166|binding|INFO|Releasing lport d3b3d8c3-34ba-4f44-a4ac-615ed48abb50 from this chassis (sb_readonly=0)
Dec  2 06:26:40 np0005542249 ovn_controller[153849]: 2025-12-02T11:26:40Z|00167|binding|INFO|Setting lport d3b3d8c3-34ba-4f44-a4ac-615ed48abb50 down in Southbound
Dec  2 06:26:40 np0005542249 ovn_controller[153849]: 2025-12-02T11:26:40Z|00168|binding|INFO|Removing iface tapd3b3d8c3-34 ovn-installed in OVS
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.001 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:40 np0005542249 systemd[1]: var-lib-containers-storage-overlay-98164e9c042bdb4985b4e37b21c045e000171e70ef09b788cb49f268c3d898ce-merged.mount: Deactivated successfully.
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.025 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:40 np0005542249 podman[283373]: 2025-12-02 11:26:40.031323836 +0000 UTC m=+0.643378237 container remove 6920930a15bf640367697d2f28b9a29581a80921ab37741165192b125dd819c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swanson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  2 06:26:40 np0005542249 systemd[1]: libpod-conmon-6920930a15bf640367697d2f28b9a29581a80921ab37741165192b125dd819c5.scope: Deactivated successfully.
Dec  2 06:26:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:40.046 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:a1:94 10.100.0.7'], port_security=['fa:16:3e:1a:a1:94 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '625a6939c31646a4a83ea851774cf28c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7d24f0af-82a6-4c34-a172-39a54c318d33', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cc823ef0-3e69-4062-a488-a82f483f10bb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=d3b3d8c3-34ba-4f44-a4ac-615ed48abb50) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:26:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:40.048 163757 INFO neutron.agent.ovn.metadata.agent [-] Port d3b3d8c3-34ba-4f44-a4ac-615ed48abb50 in datapath acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 unbound from our chassis#033[00m
Dec  2 06:26:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:40.049 163757 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 06:26:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:40.051 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[8b402e38-5a04-4de1-8b24-7cc8101e7c2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:40.052 163757 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 namespace which is not needed anymore#033[00m
Dec  2 06:26:40 np0005542249 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Deactivated successfully.
Dec  2 06:26:40 np0005542249 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Consumed 3.127s CPU time.
Dec  2 06:26:40 np0005542249 systemd-machined[216222]: Machine qemu-16-instance-00000010 terminated.
Dec  2 06:26:40 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1444: 321 pgs: 321 active+clean; 88 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 189 KiB/s rd, 33 KiB/s wr, 193 op/s
Dec  2 06:26:40 np0005542249 podman[283387]: 2025-12-02 11:26:40.129219101 +0000 UTC m=+0.629138231 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  2 06:26:40 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[282943]: [NOTICE]   (282955) : haproxy version is 2.8.14-c23fe91
Dec  2 06:26:40 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[282943]: [NOTICE]   (282955) : path to executable is /usr/sbin/haproxy
Dec  2 06:26:40 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[282943]: [WARNING]  (282955) : Exiting Master process...
Dec  2 06:26:40 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[282943]: [ALERT]    (282955) : Current worker (282957) exited with code 143 (Terminated)
Dec  2 06:26:40 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[282943]: [WARNING]  (282955) : All workers exited. Exiting... (0)
Dec  2 06:26:40 np0005542249 systemd[1]: libpod-51e79eab0f2611b99ca7ba8cdfd4cc2cf4d9864080757986308a0fb10e63b506.scope: Deactivated successfully.
Dec  2 06:26:40 np0005542249 podman[283453]: 2025-12-02 11:26:40.202527462 +0000 UTC m=+0.048204883 container died 51e79eab0f2611b99ca7ba8cdfd4cc2cf4d9864080757986308a0fb10e63b506 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.233 254904 DEBUG nova.compute.manager [req-b683555c-4b1b-4c29-97af-205fcd898df6 req-05ca4a97-4942-497d-bb04-e7c19c04e397 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Received event network-vif-unplugged-d3b3d8c3-34ba-4f44-a4ac-615ed48abb50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.234 254904 DEBUG oslo_concurrency.lockutils [req-b683555c-4b1b-4c29-97af-205fcd898df6 req-05ca4a97-4942-497d-bb04-e7c19c04e397 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.235 254904 DEBUG oslo_concurrency.lockutils [req-b683555c-4b1b-4c29-97af-205fcd898df6 req-05ca4a97-4942-497d-bb04-e7c19c04e397 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.235 254904 DEBUG oslo_concurrency.lockutils [req-b683555c-4b1b-4c29-97af-205fcd898df6 req-05ca4a97-4942-497d-bb04-e7c19c04e397 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.235 254904 DEBUG nova.compute.manager [req-b683555c-4b1b-4c29-97af-205fcd898df6 req-05ca4a97-4942-497d-bb04-e7c19c04e397 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] No waiting events found dispatching network-vif-unplugged-d3b3d8c3-34ba-4f44-a4ac-615ed48abb50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.235 254904 DEBUG nova.compute.manager [req-b683555c-4b1b-4c29-97af-205fcd898df6 req-05ca4a97-4942-497d-bb04-e7c19c04e397 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Received event network-vif-unplugged-d3b3d8c3-34ba-4f44-a4ac-615ed48abb50 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 06:26:40 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-51e79eab0f2611b99ca7ba8cdfd4cc2cf4d9864080757986308a0fb10e63b506-userdata-shm.mount: Deactivated successfully.
Dec  2 06:26:40 np0005542249 systemd[1]: var-lib-containers-storage-overlay-8b20c80f09c2e8d7e31534f5b47057da7c2f323b46faff2fc3f691039c1d53f9-merged.mount: Deactivated successfully.
Dec  2 06:26:40 np0005542249 podman[283453]: 2025-12-02 11:26:40.24836096 +0000 UTC m=+0.094038381 container cleanup 51e79eab0f2611b99ca7ba8cdfd4cc2cf4d9864080757986308a0fb10e63b506 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  2 06:26:40 np0005542249 podman[283464]: 2025-12-02 11:26:40.253456928 +0000 UTC m=+0.072207422 container create 8cb104b556440f193adf17d1b3abb4bafdb0f60a8bfdc3820803edf96cef1b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.252 254904 INFO nova.virt.libvirt.driver [-] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Instance destroyed successfully.#033[00m
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.253 254904 DEBUG nova.objects.instance [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lazy-loading 'resources' on Instance uuid 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.268 254904 DEBUG nova.virt.libvirt.vif [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-769951804',display_name='tempest-TestVolumeBootPattern-server-769951804',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-769951804',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:26:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='625a6939c31646a4a83ea851774cf28c',ramdisk_id='',reservation_id='r-6ff3sxet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestVolumeBootPattern-1396850361',owner_user_name='tempest-TestVolumeBootPattern-1396850361-project-
member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:26:37Z,user_data=None,user_id='6ccb73a613554d938221b4bf46d7ae83',uuid=9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d3b3d8c3-34ba-4f44-a4ac-615ed48abb50", "address": "fa:16:3e:1a:a1:94", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b3d8c3-34", "ovs_interfaceid": "d3b3d8c3-34ba-4f44-a4ac-615ed48abb50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.268 254904 DEBUG nova.network.os_vif_util [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converting VIF {"id": "d3b3d8c3-34ba-4f44-a4ac-615ed48abb50", "address": "fa:16:3e:1a:a1:94", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b3d8c3-34", "ovs_interfaceid": "d3b3d8c3-34ba-4f44-a4ac-615ed48abb50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.269 254904 DEBUG nova.network.os_vif_util [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:a1:94,bridge_name='br-int',has_traffic_filtering=True,id=d3b3d8c3-34ba-4f44-a4ac-615ed48abb50,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b3d8c3-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.269 254904 DEBUG os_vif [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:a1:94,bridge_name='br-int',has_traffic_filtering=True,id=d3b3d8c3-34ba-4f44-a4ac-615ed48abb50,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b3d8c3-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.271 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.271 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3b3d8c3-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.275 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:40 np0005542249 systemd[1]: libpod-conmon-51e79eab0f2611b99ca7ba8cdfd4cc2cf4d9864080757986308a0fb10e63b506.scope: Deactivated successfully.
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.276 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.279 254904 INFO os_vif [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:a1:94,bridge_name='br-int',has_traffic_filtering=True,id=d3b3d8c3-34ba-4f44-a4ac-615ed48abb50,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b3d8c3-34')#033[00m
Dec  2 06:26:40 np0005542249 systemd[1]: Started libpod-conmon-8cb104b556440f193adf17d1b3abb4bafdb0f60a8bfdc3820803edf96cef1b61.scope.
Dec  2 06:26:40 np0005542249 podman[283464]: 2025-12-02 11:26:40.22244527 +0000 UTC m=+0.041200544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:26:40 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:26:40 np0005542249 podman[283513]: 2025-12-02 11:26:40.346964854 +0000 UTC m=+0.061509383 container remove 51e79eab0f2611b99ca7ba8cdfd4cc2cf4d9864080757986308a0fb10e63b506 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:26:40 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cb64db51cb34dfe79c7b1693609894912d2c516bb3affb8cc3ccaf0f8eabbe9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:26:40 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cb64db51cb34dfe79c7b1693609894912d2c516bb3affb8cc3ccaf0f8eabbe9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:26:40 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cb64db51cb34dfe79c7b1693609894912d2c516bb3affb8cc3ccaf0f8eabbe9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:26:40 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cb64db51cb34dfe79c7b1693609894912d2c516bb3affb8cc3ccaf0f8eabbe9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:26:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:40.353 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[f9239609-4a52-4640-b9e6-16bfbd7eb801]: (4, ('Tue Dec  2 11:26:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 (51e79eab0f2611b99ca7ba8cdfd4cc2cf4d9864080757986308a0fb10e63b506)\n51e79eab0f2611b99ca7ba8cdfd4cc2cf4d9864080757986308a0fb10e63b506\nTue Dec  2 11:26:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 (51e79eab0f2611b99ca7ba8cdfd4cc2cf4d9864080757986308a0fb10e63b506)\n51e79eab0f2611b99ca7ba8cdfd4cc2cf4d9864080757986308a0fb10e63b506\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:40.357 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[ab193c25-f501-4a72-a7f1-7d38482466d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:40.358 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacfaa8ac-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.365 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:40 np0005542249 kernel: tapacfaa8ac-00: left promiscuous mode
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.370 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:40 np0005542249 podman[283464]: 2025-12-02 11:26:40.37307531 +0000 UTC m=+0.191825854 container init 8cb104b556440f193adf17d1b3abb4bafdb0f60a8bfdc3820803edf96cef1b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bhaskara, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:26:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:40.374 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[760b26f0-a8bf-4c3e-b578-aaf3a5ae28dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:40 np0005542249 podman[283464]: 2025-12-02 11:26:40.383629906 +0000 UTC m=+0.202380400 container start 8cb104b556440f193adf17d1b3abb4bafdb0f60a8bfdc3820803edf96cef1b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  2 06:26:40 np0005542249 podman[283464]: 2025-12-02 11:26:40.390114061 +0000 UTC m=+0.208864575 container attach 8cb104b556440f193adf17d1b3abb4bafdb0f60a8bfdc3820803edf96cef1b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bhaskara, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.393 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:40.394 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[fa84bcb3-ba6f-48e1-9c1b-32d5d66ff8cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:40.397 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[de69d6c9-6453-4048-acfd-0af5f817eb5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:40.418 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[67162997-2de4-49bb-8733-0bee5385e9cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498748, 'reachable_time': 41952, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283551, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:40.424 164036 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 06:26:40 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:40.424 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[504ba47f-98eb-4d99-8551-7d47dfe9a1b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:26:40 np0005542249 systemd[1]: run-netns-ovnmeta\x2dacfaa8ac\x2d0b3c\x2d4cdd\x2da6b8\x2da70a713ae754.mount: Deactivated successfully.
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.529 254904 INFO nova.virt.libvirt.driver [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Deleting instance files /var/lib/nova/instances/9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d_del#033[00m
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.532 254904 INFO nova.virt.libvirt.driver [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Deletion of /var/lib/nova/instances/9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d_del complete#033[00m
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.620 254904 INFO nova.compute.manager [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Took 1.02 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.623 254904 DEBUG oslo.service.loopingcall [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.624 254904 DEBUG nova.compute.manager [-] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:26:40 np0005542249 nova_compute[254900]: 2025-12-02 11:26:40.625 254904 DEBUG nova.network.neutron [-] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:26:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e337 do_prune osdmap full prune enabled
Dec  2 06:26:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e338 e338: 3 total, 3 up, 3 in
Dec  2 06:26:40 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e338: 3 total, 3 up, 3 in
Dec  2 06:26:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:26:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/359134127' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:26:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:26:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/359134127' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:26:41 np0005542249 nova_compute[254900]: 2025-12-02 11:26:41.161 254904 DEBUG nova.network.neutron [-] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  2 06:26:41 np0005542249 nova_compute[254900]: 2025-12-02 11:26:41.181 254904 INFO nova.compute.manager [-] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Took 0.56 seconds to deallocate network for instance.
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]: {
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:    "0": [
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:        {
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "devices": [
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "/dev/loop3"
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            ],
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "lv_name": "ceph_lv0",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "lv_size": "21470642176",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "name": "ceph_lv0",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "tags": {
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.cluster_name": "ceph",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.crush_device_class": "",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.encrypted": "0",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.osd_id": "0",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.type": "block",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.vdo": "0"
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            },
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "type": "block",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "vg_name": "ceph_vg0"
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:        }
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:    ],
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:    "1": [
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:        {
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "devices": [
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "/dev/loop4"
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            ],
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "lv_name": "ceph_lv1",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "lv_size": "21470642176",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "name": "ceph_lv1",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "tags": {
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.cluster_name": "ceph",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.crush_device_class": "",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.encrypted": "0",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.osd_id": "1",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.type": "block",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.vdo": "0"
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            },
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "type": "block",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "vg_name": "ceph_vg1"
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:        }
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:    ],
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:    "2": [
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:        {
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "devices": [
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "/dev/loop5"
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            ],
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "lv_name": "ceph_lv2",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "lv_size": "21470642176",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "name": "ceph_lv2",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "tags": {
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.cluster_name": "ceph",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.crush_device_class": "",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.encrypted": "0",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.osd_id": "2",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.type": "block",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:                "ceph.vdo": "0"
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            },
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "type": "block",
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:            "vg_name": "ceph_vg2"
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:        }
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]:    ]
Dec  2 06:26:41 np0005542249 brave_bhaskara[283541]: }
Dec  2 06:26:41 np0005542249 nova_compute[254900]: 2025-12-02 11:26:41.255 254904 DEBUG nova.compute.manager [req-b9c3a34b-dd3b-4f36-a3ef-a127d6192e0e req-2d007745-540f-41f0-b17f-1644f23d5781 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Received event network-vif-deleted-d3b3d8c3-34ba-4f44-a4ac-615ed48abb50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  2 06:26:41 np0005542249 systemd[1]: libpod-8cb104b556440f193adf17d1b3abb4bafdb0f60a8bfdc3820803edf96cef1b61.scope: Deactivated successfully.
Dec  2 06:26:41 np0005542249 podman[283464]: 2025-12-02 11:26:41.290749607 +0000 UTC m=+1.109500171 container died 8cb104b556440f193adf17d1b3abb4bafdb0f60a8bfdc3820803edf96cef1b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:26:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:26:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2505712579' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:26:41 np0005542249 nova_compute[254900]: 2025-12-02 11:26:41.415 254904 INFO nova.compute.manager [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Took 0.23 seconds to detach 1 volumes for instance.
Dec  2 06:26:41 np0005542249 nova_compute[254900]: 2025-12-02 11:26:41.476 254904 DEBUG oslo_concurrency.lockutils [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:26:41 np0005542249 nova_compute[254900]: 2025-12-02 11:26:41.477 254904 DEBUG oslo_concurrency.lockutils [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:26:41 np0005542249 nova_compute[254900]: 2025-12-02 11:26:41.538 254904 DEBUG oslo_concurrency.processutils [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 06:26:41 np0005542249 nova_compute[254900]: 2025-12-02 11:26:41.687 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 06:26:41 np0005542249 nova_compute[254900]: 2025-12-02 11:26:41.688 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  2 06:26:41 np0005542249 nova_compute[254900]: 2025-12-02 11:26:41.710 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  2 06:26:41 np0005542249 nova_compute[254900]: 2025-12-02 11:26:41.712 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 06:26:41 np0005542249 nova_compute[254900]: 2025-12-02 11:26:41.712 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 06:26:41 np0005542249 nova_compute[254900]: 2025-12-02 11:26:41.713 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 06:26:41 np0005542249 nova_compute[254900]: 2025-12-02 11:26:41.713 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  2 06:26:42 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1446: 321 pgs: 321 active+clean; 88 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 202 KiB/s rd, 35 KiB/s wr, 204 op/s
Dec  2 06:26:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:26:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:26:42 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1853555066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:26:42 np0005542249 nova_compute[254900]: 2025-12-02 11:26:42.306 254904 DEBUG oslo_concurrency.processutils [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.768s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:26:42 np0005542249 nova_compute[254900]: 2025-12-02 11:26:42.317 254904 DEBUG nova.compute.provider_tree [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  2 06:26:42 np0005542249 nova_compute[254900]: 2025-12-02 11:26:42.336 254904 DEBUG nova.scheduler.client.report [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  2 06:26:42 np0005542249 systemd[1]: var-lib-containers-storage-overlay-6cb64db51cb34dfe79c7b1693609894912d2c516bb3affb8cc3ccaf0f8eabbe9-merged.mount: Deactivated successfully.
Dec  2 06:26:42 np0005542249 nova_compute[254900]: 2025-12-02 11:26:42.350 254904 DEBUG nova.compute.manager [req-c26d7396-fa97-4a2b-b587-ba1b690cd5f9 req-09d58e2a-3e56-4505-8988-21bbde920972 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Received event network-vif-plugged-d3b3d8c3-34ba-4f44-a4ac-615ed48abb50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  2 06:26:42 np0005542249 nova_compute[254900]: 2025-12-02 11:26:42.351 254904 DEBUG oslo_concurrency.lockutils [req-c26d7396-fa97-4a2b-b587-ba1b690cd5f9 req-09d58e2a-3e56-4505-8988-21bbde920972 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:26:42 np0005542249 nova_compute[254900]: 2025-12-02 11:26:42.351 254904 DEBUG oslo_concurrency.lockutils [req-c26d7396-fa97-4a2b-b587-ba1b690cd5f9 req-09d58e2a-3e56-4505-8988-21bbde920972 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:26:42 np0005542249 nova_compute[254900]: 2025-12-02 11:26:42.352 254904 DEBUG oslo_concurrency.lockutils [req-c26d7396-fa97-4a2b-b587-ba1b690cd5f9 req-09d58e2a-3e56-4505-8988-21bbde920972 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:26:42 np0005542249 nova_compute[254900]: 2025-12-02 11:26:42.353 254904 DEBUG nova.compute.manager [req-c26d7396-fa97-4a2b-b587-ba1b690cd5f9 req-09d58e2a-3e56-4505-8988-21bbde920972 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] No waiting events found dispatching network-vif-plugged-d3b3d8c3-34ba-4f44-a4ac-615ed48abb50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  2 06:26:42 np0005542249 nova_compute[254900]: 2025-12-02 11:26:42.353 254904 WARNING nova.compute.manager [req-c26d7396-fa97-4a2b-b587-ba1b690cd5f9 req-09d58e2a-3e56-4505-8988-21bbde920972 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Received unexpected event network-vif-plugged-d3b3d8c3-34ba-4f44-a4ac-615ed48abb50 for instance with vm_state deleted and task_state None.
Dec  2 06:26:42 np0005542249 nova_compute[254900]: 2025-12-02 11:26:42.370 254904 DEBUG oslo_concurrency.lockutils [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:26:42 np0005542249 podman[283464]: 2025-12-02 11:26:42.390797722 +0000 UTC m=+2.209548256 container remove 8cb104b556440f193adf17d1b3abb4bafdb0f60a8bfdc3820803edf96cef1b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bhaskara, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:26:42 np0005542249 nova_compute[254900]: 2025-12-02 11:26:42.403 254904 INFO nova.scheduler.client.report [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Deleted allocations for instance 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d
Dec  2 06:26:42 np0005542249 systemd[1]: libpod-conmon-8cb104b556440f193adf17d1b3abb4bafdb0f60a8bfdc3820803edf96cef1b61.scope: Deactivated successfully.
Dec  2 06:26:42 np0005542249 nova_compute[254900]: 2025-12-02 11:26:42.458 254904 DEBUG oslo_concurrency.lockutils [None req-26f4ab6d-d177-4049-99d6-a040c94e72f1 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:26:43 np0005542249 podman[283734]: 2025-12-02 11:26:43.123412578 +0000 UTC m=+0.041909323 container create d39e02c56b5251143d57b6e7ea4a2df1d4b266eb3beb277c518a07b4ffc0e39c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:26:43 np0005542249 podman[283734]: 2025-12-02 11:26:43.102918725 +0000 UTC m=+0.021415540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:26:43 np0005542249 systemd[1]: Started libpod-conmon-d39e02c56b5251143d57b6e7ea4a2df1d4b266eb3beb277c518a07b4ffc0e39c.scope.
Dec  2 06:26:43 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:26:43 np0005542249 podman[283734]: 2025-12-02 11:26:43.312425975 +0000 UTC m=+0.230922750 container init d39e02c56b5251143d57b6e7ea4a2df1d4b266eb3beb277c518a07b4ffc0e39c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sutherland, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  2 06:26:43 np0005542249 podman[283734]: 2025-12-02 11:26:43.321178413 +0000 UTC m=+0.239675168 container start d39e02c56b5251143d57b6e7ea4a2df1d4b266eb3beb277c518a07b4ffc0e39c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sutherland, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:26:43 np0005542249 podman[283734]: 2025-12-02 11:26:43.326222568 +0000 UTC m=+0.244719333 container attach d39e02c56b5251143d57b6e7ea4a2df1d4b266eb3beb277c518a07b4ffc0e39c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sutherland, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:26:43 np0005542249 lucid_sutherland[283750]: 167 167
Dec  2 06:26:43 np0005542249 systemd[1]: libpod-d39e02c56b5251143d57b6e7ea4a2df1d4b266eb3beb277c518a07b4ffc0e39c.scope: Deactivated successfully.
Dec  2 06:26:43 np0005542249 podman[283734]: 2025-12-02 11:26:43.33071801 +0000 UTC m=+0.249214765 container died d39e02c56b5251143d57b6e7ea4a2df1d4b266eb3beb277c518a07b4ffc0e39c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sutherland, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  2 06:26:43 np0005542249 systemd[1]: var-lib-containers-storage-overlay-6995f009ad9f43ab6f8eb685276e9bf1f29178a6a7f2bc32a4803112442ab966-merged.mount: Deactivated successfully.
Dec  2 06:26:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e338 do_prune osdmap full prune enabled
Dec  2 06:26:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e339 e339: 3 total, 3 up, 3 in
Dec  2 06:26:43 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e339: 3 total, 3 up, 3 in
Dec  2 06:26:43 np0005542249 podman[283734]: 2025-12-02 11:26:43.3695769 +0000 UTC m=+0.288073645 container remove d39e02c56b5251143d57b6e7ea4a2df1d4b266eb3beb277c518a07b4ffc0e39c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  2 06:26:43 np0005542249 systemd[1]: libpod-conmon-d39e02c56b5251143d57b6e7ea4a2df1d4b266eb3beb277c518a07b4ffc0e39c.scope: Deactivated successfully.
Dec  2 06:26:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:26:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2900235108' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:26:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:26:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2900235108' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:26:43 np0005542249 podman[283776]: 2025-12-02 11:26:43.530375725 +0000 UTC m=+0.045990764 container create dd262881877199ca4acb216e50a532d91ff333a96266044eacf5bd8951678fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:26:43 np0005542249 nova_compute[254900]: 2025-12-02 11:26:43.540 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:43 np0005542249 systemd[1]: Started libpod-conmon-dd262881877199ca4acb216e50a532d91ff333a96266044eacf5bd8951678fde.scope.
Dec  2 06:26:43 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:26:43 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5d87b1151b21eda28cd162a97bc2b6e421a933625170330afa21897544759fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:26:43 np0005542249 podman[283776]: 2025-12-02 11:26:43.508995918 +0000 UTC m=+0.024610967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:26:43 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5d87b1151b21eda28cd162a97bc2b6e421a933625170330afa21897544759fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:26:43 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5d87b1151b21eda28cd162a97bc2b6e421a933625170330afa21897544759fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:26:43 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5d87b1151b21eda28cd162a97bc2b6e421a933625170330afa21897544759fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:26:43 np0005542249 podman[283776]: 2025-12-02 11:26:43.617655163 +0000 UTC m=+0.133270212 container init dd262881877199ca4acb216e50a532d91ff333a96266044eacf5bd8951678fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldwasser, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  2 06:26:43 np0005542249 podman[283776]: 2025-12-02 11:26:43.62457941 +0000 UTC m=+0.140194429 container start dd262881877199ca4acb216e50a532d91ff333a96266044eacf5bd8951678fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldwasser, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:26:43 np0005542249 podman[283776]: 2025-12-02 11:26:43.628306991 +0000 UTC m=+0.143922040 container attach dd262881877199ca4acb216e50a532d91ff333a96266044eacf5bd8951678fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldwasser, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  2 06:26:44 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1448: 321 pgs: 321 active+clean; 88 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 191 KiB/s rd, 8.8 KiB/s wr, 198 op/s
Dec  2 06:26:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e339 do_prune osdmap full prune enabled
Dec  2 06:26:44 np0005542249 vibrant_goldwasser[283793]: {
Dec  2 06:26:44 np0005542249 vibrant_goldwasser[283793]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:26:44 np0005542249 vibrant_goldwasser[283793]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:26:44 np0005542249 vibrant_goldwasser[283793]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:26:44 np0005542249 vibrant_goldwasser[283793]:        "osd_id": 0,
Dec  2 06:26:44 np0005542249 vibrant_goldwasser[283793]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:26:44 np0005542249 vibrant_goldwasser[283793]:        "type": "bluestore"
Dec  2 06:26:44 np0005542249 vibrant_goldwasser[283793]:    },
Dec  2 06:26:44 np0005542249 vibrant_goldwasser[283793]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:26:44 np0005542249 vibrant_goldwasser[283793]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:26:44 np0005542249 vibrant_goldwasser[283793]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:26:44 np0005542249 vibrant_goldwasser[283793]:        "osd_id": 2,
Dec  2 06:26:44 np0005542249 vibrant_goldwasser[283793]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:26:44 np0005542249 vibrant_goldwasser[283793]:        "type": "bluestore"
Dec  2 06:26:44 np0005542249 vibrant_goldwasser[283793]:    },
Dec  2 06:26:44 np0005542249 vibrant_goldwasser[283793]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:26:44 np0005542249 vibrant_goldwasser[283793]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:26:44 np0005542249 vibrant_goldwasser[283793]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:26:44 np0005542249 vibrant_goldwasser[283793]:        "osd_id": 1,
Dec  2 06:26:44 np0005542249 vibrant_goldwasser[283793]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:26:44 np0005542249 vibrant_goldwasser[283793]:        "type": "bluestore"
Dec  2 06:26:44 np0005542249 vibrant_goldwasser[283793]:    }
Dec  2 06:26:44 np0005542249 vibrant_goldwasser[283793]: }
Dec  2 06:26:44 np0005542249 systemd[1]: libpod-dd262881877199ca4acb216e50a532d91ff333a96266044eacf5bd8951678fde.scope: Deactivated successfully.
Dec  2 06:26:44 np0005542249 podman[283776]: 2025-12-02 11:26:44.575843606 +0000 UTC m=+1.091458635 container died dd262881877199ca4acb216e50a532d91ff333a96266044eacf5bd8951678fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:26:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e340 e340: 3 total, 3 up, 3 in
Dec  2 06:26:44 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e340: 3 total, 3 up, 3 in
Dec  2 06:26:44 np0005542249 systemd[1]: var-lib-containers-storage-overlay-c5d87b1151b21eda28cd162a97bc2b6e421a933625170330afa21897544759fd-merged.mount: Deactivated successfully.
Dec  2 06:26:44 np0005542249 podman[283776]: 2025-12-02 11:26:44.859479099 +0000 UTC m=+1.375094138 container remove dd262881877199ca4acb216e50a532d91ff333a96266044eacf5bd8951678fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:26:44 np0005542249 systemd[1]: libpod-conmon-dd262881877199ca4acb216e50a532d91ff333a96266044eacf5bd8951678fde.scope: Deactivated successfully.
Dec  2 06:26:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:26:44 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:26:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:26:44 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:26:44 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 14e2e8ed-2206-4708-905b-11d9ce9703a6 does not exist
Dec  2 06:26:44 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 91dae5d6-aad8-4144-b74d-54b5eb01ad42 does not exist
Dec  2 06:26:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:26:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2854235272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:26:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:26:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2854235272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:26:45 np0005542249 nova_compute[254900]: 2025-12-02 11:26:45.312 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:45 np0005542249 nova_compute[254900]: 2025-12-02 11:26:45.404 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:26:45 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:26:45 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:26:46 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1450: 321 pgs: 321 active+clean; 88 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 184 KiB/s rd, 8.0 KiB/s wr, 241 op/s
Dec  2 06:26:46 np0005542249 nova_compute[254900]: 2025-12-02 11:26:46.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:26:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e340 do_prune osdmap full prune enabled
Dec  2 06:26:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e341 e341: 3 total, 3 up, 3 in
Dec  2 06:26:46 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e341: 3 total, 3 up, 3 in
Dec  2 06:26:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:26:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e341 do_prune osdmap full prune enabled
Dec  2 06:26:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e342 e342: 3 total, 3 up, 3 in
Dec  2 06:26:47 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e342: 3 total, 3 up, 3 in
Dec  2 06:26:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:26:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3324723357' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:26:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:26:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3324723357' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:26:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:26:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/394488867' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:26:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:26:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/394488867' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:26:48 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1453: 321 pgs: 321 active+clean; 88 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 10 KiB/s wr, 285 op/s
Dec  2 06:26:48 np0005542249 nova_compute[254900]: 2025-12-02 11:26:48.582 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:49 np0005542249 podman[283890]: 2025-12-02 11:26:49.097934429 +0000 UTC m=+0.169714537 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  2 06:26:49 np0005542249 nova_compute[254900]: 2025-12-02 11:26:49.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:26:49 np0005542249 nova_compute[254900]: 2025-12-02 11:26:49.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:26:49 np0005542249 nova_compute[254900]: 2025-12-02 11:26:49.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:26:49 np0005542249 nova_compute[254900]: 2025-12-02 11:26:49.410 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:26:49 np0005542249 nova_compute[254900]: 2025-12-02 11:26:49.411 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:26:49 np0005542249 nova_compute[254900]: 2025-12-02 11:26:49.411 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:26:49 np0005542249 nova_compute[254900]: 2025-12-02 11:26:49.411 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:26:49 np0005542249 nova_compute[254900]: 2025-12-02 11:26:49.411 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:26:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:26:49 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2133883573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:26:49 np0005542249 nova_compute[254900]: 2025-12-02 11:26:49.897 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:26:50 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1454: 321 pgs: 321 active+clean; 88 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 9.3 KiB/s wr, 251 op/s
Dec  2 06:26:50 np0005542249 nova_compute[254900]: 2025-12-02 11:26:50.132 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:26:50 np0005542249 nova_compute[254900]: 2025-12-02 11:26:50.135 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4443MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:26:50 np0005542249 nova_compute[254900]: 2025-12-02 11:26:50.135 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:26:50 np0005542249 nova_compute[254900]: 2025-12-02 11:26:50.136 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:26:50 np0005542249 nova_compute[254900]: 2025-12-02 11:26:50.213 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:26:50 np0005542249 nova_compute[254900]: 2025-12-02 11:26:50.214 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:26:50 np0005542249 nova_compute[254900]: 2025-12-02 11:26:50.229 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Refreshing inventories for resource provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  2 06:26:50 np0005542249 nova_compute[254900]: 2025-12-02 11:26:50.248 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Updating ProviderTree inventory for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  2 06:26:50 np0005542249 nova_compute[254900]: 2025-12-02 11:26:50.249 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Updating inventory in ProviderTree for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  2 06:26:50 np0005542249 nova_compute[254900]: 2025-12-02 11:26:50.273 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Refreshing aggregate associations for resource provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  2 06:26:50 np0005542249 nova_compute[254900]: 2025-12-02 11:26:50.299 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Refreshing trait associations for resource provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062, traits: HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SVM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_MMX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,HW_CPU_X86_CLMUL,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE41,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  2 06:26:50 np0005542249 nova_compute[254900]: 2025-12-02 11:26:50.316 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:50 np0005542249 nova_compute[254900]: 2025-12-02 11:26:50.319 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:26:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:26:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2229155265' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:26:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:26:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2229155265' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:26:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:26:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/246617042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:26:50 np0005542249 nova_compute[254900]: 2025-12-02 11:26:50.749 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:26:50 np0005542249 nova_compute[254900]: 2025-12-02 11:26:50.758 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:26:50 np0005542249 nova_compute[254900]: 2025-12-02 11:26:50.789 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:26:50 np0005542249 nova_compute[254900]: 2025-12-02 11:26:50.820 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 06:26:50 np0005542249 nova_compute[254900]: 2025-12-02 11:26:50.821 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:26:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:26:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4186770229' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:26:52 np0005542249 podman[283963]: 2025-12-02 11:26:52.018976491 +0000 UTC m=+0.084831154 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec  2 06:26:52 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1455: 321 pgs: 321 active+clean; 88 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.5 KiB/s wr, 158 op/s
Dec  2 06:26:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e342 do_prune osdmap full prune enabled
Dec  2 06:26:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e343 e343: 3 total, 3 up, 3 in
Dec  2 06:26:52 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e343: 3 total, 3 up, 3 in
Dec  2 06:26:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:26:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e343 do_prune osdmap full prune enabled
Dec  2 06:26:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e344 e344: 3 total, 3 up, 3 in
Dec  2 06:26:52 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e344: 3 total, 3 up, 3 in
Dec  2 06:26:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e344 do_prune osdmap full prune enabled
Dec  2 06:26:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e345 e345: 3 total, 3 up, 3 in
Dec  2 06:26:53 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e345: 3 total, 3 up, 3 in
Dec  2 06:26:53 np0005542249 nova_compute[254900]: 2025-12-02 11:26:53.584 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:54 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1459: 321 pgs: 321 active+clean; 110 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.6 MiB/s wr, 92 op/s
Dec  2 06:26:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e345 do_prune osdmap full prune enabled
Dec  2 06:26:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e346 e346: 3 total, 3 up, 3 in
Dec  2 06:26:54 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e346: 3 total, 3 up, 3 in
Dec  2 06:26:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:26:55 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2372228148' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:26:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:26:55 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2372228148' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:26:55 np0005542249 nova_compute[254900]: 2025-12-02 11:26:55.250 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764674800.2486567, 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:26:55 np0005542249 nova_compute[254900]: 2025-12-02 11:26:55.251 254904 INFO nova.compute.manager [-] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] VM Stopped (Lifecycle Event)#033[00m
Dec  2 06:26:55 np0005542249 nova_compute[254900]: 2025-12-02 11:26:55.274 254904 DEBUG nova.compute.manager [None req-e17c0571-7e8f-4005-b278-16ba8c6775b1 - - - - - -] [instance: 9ffbc7dc-5dfb-4265-bbd5-ea2e2a12a62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:26:55 np0005542249 nova_compute[254900]: 2025-12-02 11:26:55.320 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:56 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1461: 321 pgs: 321 active+clean; 121 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 3.8 MiB/s wr, 86 op/s
Dec  2 06:26:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:26:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:26:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:26:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:26:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:26:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:26:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e346 do_prune osdmap full prune enabled
Dec  2 06:26:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e347 e347: 3 total, 3 up, 3 in
Dec  2 06:26:56 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e347: 3 total, 3 up, 3 in
Dec  2 06:26:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:26:57 np0005542249 nova_compute[254900]: 2025-12-02 11:26:57.930 254904 DEBUG oslo_concurrency.lockutils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "24f4facb-0cf2-4494-a321-eb07ca6288b7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:26:57 np0005542249 nova_compute[254900]: 2025-12-02 11:26:57.931 254904 DEBUG oslo_concurrency.lockutils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "24f4facb-0cf2-4494-a321-eb07ca6288b7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:26:57 np0005542249 nova_compute[254900]: 2025-12-02 11:26:57.945 254904 DEBUG nova.compute.manager [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 06:26:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:26:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2843020241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:26:58 np0005542249 nova_compute[254900]: 2025-12-02 11:26:58.023 254904 DEBUG oslo_concurrency.lockutils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:26:58 np0005542249 nova_compute[254900]: 2025-12-02 11:26:58.024 254904 DEBUG oslo_concurrency.lockutils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:26:58 np0005542249 nova_compute[254900]: 2025-12-02 11:26:58.032 254904 DEBUG nova.virt.hardware [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 06:26:58 np0005542249 nova_compute[254900]: 2025-12-02 11:26:58.032 254904 INFO nova.compute.claims [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 06:26:58 np0005542249 nova_compute[254900]: 2025-12-02 11:26:58.127 254904 DEBUG oslo_concurrency.processutils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:26:58 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1463: 321 pgs: 321 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 3.8 MiB/s wr, 158 op/s
Dec  2 06:26:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:26:58 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3538132364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:26:58 np0005542249 nova_compute[254900]: 2025-12-02 11:26:58.533 254904 DEBUG oslo_concurrency.processutils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:26:58 np0005542249 nova_compute[254900]: 2025-12-02 11:26:58.542 254904 DEBUG nova.compute.provider_tree [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:26:58 np0005542249 nova_compute[254900]: 2025-12-02 11:26:58.568 254904 DEBUG nova.scheduler.client.report [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:26:58 np0005542249 nova_compute[254900]: 2025-12-02 11:26:58.585 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:58 np0005542249 nova_compute[254900]: 2025-12-02 11:26:58.594 254904 DEBUG oslo_concurrency.lockutils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:26:58 np0005542249 nova_compute[254900]: 2025-12-02 11:26:58.595 254904 DEBUG nova.compute.manager [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 06:26:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e347 do_prune osdmap full prune enabled
Dec  2 06:26:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e348 e348: 3 total, 3 up, 3 in
Dec  2 06:26:58 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e348: 3 total, 3 up, 3 in
Dec  2 06:26:58 np0005542249 nova_compute[254900]: 2025-12-02 11:26:58.720 254904 DEBUG nova.compute.manager [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 06:26:58 np0005542249 nova_compute[254900]: 2025-12-02 11:26:58.721 254904 DEBUG nova.network.neutron [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 06:26:58 np0005542249 nova_compute[254900]: 2025-12-02 11:26:58.758 254904 INFO nova.virt.libvirt.driver [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 06:26:58 np0005542249 nova_compute[254900]: 2025-12-02 11:26:58.781 254904 DEBUG nova.compute.manager [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 06:26:58 np0005542249 nova_compute[254900]: 2025-12-02 11:26:58.836 254904 INFO nova.virt.block_device [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Booting with volume snapshot 5a5cd12d-6d03-48ae-b503-2bd44810020f at /dev/vda#033[00m
Dec  2 06:26:58 np0005542249 nova_compute[254900]: 2025-12-02 11:26:58.934 254904 DEBUG nova.policy [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6ccb73a613554d938221b4bf46d7ae83', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '625a6939c31646a4a83ea851774cf28c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 06:26:59 np0005542249 nova_compute[254900]: 2025-12-02 11:26:59.402 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:26:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:59.403 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:23:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:25:d0:a1:75:ed'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:26:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:26:59.404 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 06:26:59 np0005542249 nova_compute[254900]: 2025-12-02 11:26:59.470 254904 DEBUG nova.network.neutron [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Successfully created port: fd7c57f0-6cda-4a73-9695-895342d135db _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 06:26:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e348 do_prune osdmap full prune enabled
Dec  2 06:26:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e349 e349: 3 total, 3 up, 3 in
Dec  2 06:26:59 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e349: 3 total, 3 up, 3 in
Dec  2 06:27:00 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1466: 321 pgs: 321 active+clean; 134 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 1.1 MiB/s wr, 138 op/s
Dec  2 06:27:00 np0005542249 nova_compute[254900]: 2025-12-02 11:27:00.192 254904 DEBUG nova.network.neutron [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Successfully updated port: fd7c57f0-6cda-4a73-9695-895342d135db _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 06:27:00 np0005542249 nova_compute[254900]: 2025-12-02 11:27:00.212 254904 DEBUG oslo_concurrency.lockutils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "refresh_cache-24f4facb-0cf2-4494-a321-eb07ca6288b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:27:00 np0005542249 nova_compute[254900]: 2025-12-02 11:27:00.212 254904 DEBUG oslo_concurrency.lockutils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquired lock "refresh_cache-24f4facb-0cf2-4494-a321-eb07ca6288b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:27:00 np0005542249 nova_compute[254900]: 2025-12-02 11:27:00.212 254904 DEBUG nova.network.neutron [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 06:27:00 np0005542249 nova_compute[254900]: 2025-12-02 11:27:00.303 254904 DEBUG nova.compute.manager [req-7ae21ae4-6c1b-4811-9708-66f47707ab41 req-f6fd33c9-98bd-4e59-a868-f849342c0349 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Received event network-changed-fd7c57f0-6cda-4a73-9695-895342d135db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:27:00 np0005542249 nova_compute[254900]: 2025-12-02 11:27:00.303 254904 DEBUG nova.compute.manager [req-7ae21ae4-6c1b-4811-9708-66f47707ab41 req-f6fd33c9-98bd-4e59-a868-f849342c0349 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Refreshing instance network info cache due to event network-changed-fd7c57f0-6cda-4a73-9695-895342d135db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:27:00 np0005542249 nova_compute[254900]: 2025-12-02 11:27:00.303 254904 DEBUG oslo_concurrency.lockutils [req-7ae21ae4-6c1b-4811-9708-66f47707ab41 req-f6fd33c9-98bd-4e59-a868-f849342c0349 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-24f4facb-0cf2-4494-a321-eb07ca6288b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:27:00 np0005542249 nova_compute[254900]: 2025-12-02 11:27:00.323 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:00 np0005542249 nova_compute[254900]: 2025-12-02 11:27:00.336 254904 DEBUG nova.network.neutron [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 06:27:01 np0005542249 nova_compute[254900]: 2025-12-02 11:27:01.048 254904 DEBUG nova.network.neutron [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Updating instance_info_cache with network_info: [{"id": "fd7c57f0-6cda-4a73-9695-895342d135db", "address": "fa:16:3e:c4:c7:84", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd7c57f0-6c", "ovs_interfaceid": "fd7c57f0-6cda-4a73-9695-895342d135db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:27:01 np0005542249 nova_compute[254900]: 2025-12-02 11:27:01.072 254904 DEBUG oslo_concurrency.lockutils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Releasing lock "refresh_cache-24f4facb-0cf2-4494-a321-eb07ca6288b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:27:01 np0005542249 nova_compute[254900]: 2025-12-02 11:27:01.073 254904 DEBUG nova.compute.manager [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Instance network_info: |[{"id": "fd7c57f0-6cda-4a73-9695-895342d135db", "address": "fa:16:3e:c4:c7:84", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd7c57f0-6c", "ovs_interfaceid": "fd7c57f0-6cda-4a73-9695-895342d135db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 06:27:01 np0005542249 nova_compute[254900]: 2025-12-02 11:27:01.074 254904 DEBUG oslo_concurrency.lockutils [req-7ae21ae4-6c1b-4811-9708-66f47707ab41 req-f6fd33c9-98bd-4e59-a868-f849342c0349 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-24f4facb-0cf2-4494-a321-eb07ca6288b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:27:01 np0005542249 nova_compute[254900]: 2025-12-02 11:27:01.074 254904 DEBUG nova.network.neutron [req-7ae21ae4-6c1b-4811-9708-66f47707ab41 req-f6fd33c9-98bd-4e59-a868-f849342c0349 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Refreshing network info cache for port fd7c57f0-6cda-4a73-9695-895342d135db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:27:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:27:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2094157216' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:27:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:27:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2094157216' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:27:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e349 do_prune osdmap full prune enabled
Dec  2 06:27:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e350 e350: 3 total, 3 up, 3 in
Dec  2 06:27:01 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e350: 3 total, 3 up, 3 in
Dec  2 06:27:02 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1468: 321 pgs: 321 active+clean; 134 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 140 KiB/s rd, 1.1 MiB/s wr, 198 op/s
Dec  2 06:27:02 np0005542249 nova_compute[254900]: 2025-12-02 11:27:02.405 254904 DEBUG nova.network.neutron [req-7ae21ae4-6c1b-4811-9708-66f47707ab41 req-f6fd33c9-98bd-4e59-a868-f849342c0349 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Updated VIF entry in instance network info cache for port fd7c57f0-6cda-4a73-9695-895342d135db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:27:02 np0005542249 nova_compute[254900]: 2025-12-02 11:27:02.406 254904 DEBUG nova.network.neutron [req-7ae21ae4-6c1b-4811-9708-66f47707ab41 req-f6fd33c9-98bd-4e59-a868-f849342c0349 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Updating instance_info_cache with network_info: [{"id": "fd7c57f0-6cda-4a73-9695-895342d135db", "address": "fa:16:3e:c4:c7:84", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd7c57f0-6c", "ovs_interfaceid": "fd7c57f0-6cda-4a73-9695-895342d135db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:27:02 np0005542249 nova_compute[254900]: 2025-12-02 11:27:02.420 254904 DEBUG oslo_concurrency.lockutils [req-7ae21ae4-6c1b-4811-9708-66f47707ab41 req-f6fd33c9-98bd-4e59-a868-f849342c0349 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-24f4facb-0cf2-4494-a321-eb07ca6288b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:27:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:27:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e350 do_prune osdmap full prune enabled
Dec  2 06:27:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e351 e351: 3 total, 3 up, 3 in
Dec  2 06:27:02 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e351: 3 total, 3 up, 3 in
Dec  2 06:27:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:27:02 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1354737893' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:27:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:27:02 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1354737893' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:27:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:27:02 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1823493254' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:27:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:27:02 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1823493254' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:27:02 np0005542249 nova_compute[254900]: 2025-12-02 11:27:02.966 254904 DEBUG os_brick.utils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:27:02 np0005542249 nova_compute[254900]: 2025-12-02 11:27:02.967 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:27:02 np0005542249 nova_compute[254900]: 2025-12-02 11:27:02.977 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:27:02 np0005542249 nova_compute[254900]: 2025-12-02 11:27:02.977 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[5b43abe1-f4a7-44b9-bce9-3b85dc51a498]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:02 np0005542249 nova_compute[254900]: 2025-12-02 11:27:02.978 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:27:02 np0005542249 nova_compute[254900]: 2025-12-02 11:27:02.991 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:27:02 np0005542249 nova_compute[254900]: 2025-12-02 11:27:02.992 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[a133beba-9c0b-44c6-af69-26a48fd46f23]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:02 np0005542249 nova_compute[254900]: 2025-12-02 11:27:02.994 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:27:03 np0005542249 nova_compute[254900]: 2025-12-02 11:27:03.012 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:27:03 np0005542249 nova_compute[254900]: 2025-12-02 11:27:03.012 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[2ac8af22-8ec4-4fd8-87b0-76080cbf82f0]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:03 np0005542249 nova_compute[254900]: 2025-12-02 11:27:03.014 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[578b2aef-8c83-475a-a09e-e55705f10508]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:03 np0005542249 nova_compute[254900]: 2025-12-02 11:27:03.015 254904 DEBUG oslo_concurrency.processutils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:27:03 np0005542249 nova_compute[254900]: 2025-12-02 11:27:03.042 254904 DEBUG oslo_concurrency.processutils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:27:03 np0005542249 nova_compute[254900]: 2025-12-02 11:27:03.046 254904 DEBUG os_brick.initiator.connectors.lightos [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:27:03 np0005542249 nova_compute[254900]: 2025-12-02 11:27:03.046 254904 DEBUG os_brick.initiator.connectors.lightos [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:27:03 np0005542249 nova_compute[254900]: 2025-12-02 11:27:03.047 254904 DEBUG os_brick.initiator.connectors.lightos [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:27:03 np0005542249 nova_compute[254900]: 2025-12-02 11:27:03.047 254904 DEBUG os_brick.utils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] <== get_connector_properties: return (80ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:27:03 np0005542249 nova_compute[254900]: 2025-12-02 11:27:03.048 254904 DEBUG nova.virt.block_device [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Updating existing volume attachment record: c3800f55-876b-47c0-945d-056161d19a6f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:27:03 np0005542249 nova_compute[254900]: 2025-12-02 11:27:03.633 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:27:03 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1694152608' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:27:04 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1470: 321 pgs: 321 active+clean; 134 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 6.9 KiB/s wr, 138 op/s
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.222 254904 DEBUG nova.compute.manager [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.224 254904 DEBUG nova.virt.libvirt.driver [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.224 254904 INFO nova.virt.libvirt.driver [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Creating image(s)#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.225 254904 DEBUG nova.virt.libvirt.driver [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.225 254904 DEBUG nova.virt.libvirt.driver [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Ensure instance console log exists: /var/lib/nova/instances/24f4facb-0cf2-4494-a321-eb07ca6288b7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.225 254904 DEBUG oslo_concurrency.lockutils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.225 254904 DEBUG oslo_concurrency.lockutils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.226 254904 DEBUG oslo_concurrency.lockutils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.227 254904 DEBUG nova.virt.libvirt.driver [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Start _get_guest_xml network_info=[{"id": "fd7c57f0-6cda-4a73-9695-895342d135db", "address": "fa:16:3e:c4:c7:84", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd7c57f0-6c", "ovs_interfaceid": "fd7c57f0-6cda-4a73-9695-895342d135db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-45df0d82-3996-4a1e-a4b2-322511a14400', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '45df0d82-3996-4a1e-a4b2-322511a14400', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '24f4facb-0cf2-4494-a321-eb07ca6288b7', 'attached_at': '', 'detached_at': '', 'volume_id': '45df0d82-3996-4a1e-a4b2-322511a14400', 'serial': '45df0d82-3996-4a1e-a4b2-322511a14400'}, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'attachment_id': 'c3800f55-876b-47c0-945d-056161d19a6f', 'delete_on_termination': True, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.233 254904 WARNING nova.virt.libvirt.driver [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.238 254904 DEBUG nova.virt.libvirt.host [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.239 254904 DEBUG nova.virt.libvirt.host [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.242 254904 DEBUG nova.virt.libvirt.host [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.243 254904 DEBUG nova.virt.libvirt.host [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.243 254904 DEBUG nova.virt.libvirt.driver [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.243 254904 DEBUG nova.virt.hardware [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.244 254904 DEBUG nova.virt.hardware [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.244 254904 DEBUG nova.virt.hardware [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.244 254904 DEBUG nova.virt.hardware [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.244 254904 DEBUG nova.virt.hardware [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.244 254904 DEBUG nova.virt.hardware [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.244 254904 DEBUG nova.virt.hardware [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.245 254904 DEBUG nova.virt.hardware [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.245 254904 DEBUG nova.virt.hardware [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.245 254904 DEBUG nova.virt.hardware [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.245 254904 DEBUG nova.virt.hardware [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.270 254904 DEBUG nova.storage.rbd_utils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] rbd image 24f4facb-0cf2-4494-a321-eb07ca6288b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.274 254904 DEBUG oslo_concurrency.processutils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:27:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:27:04 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/186610221' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.763 254904 DEBUG oslo_concurrency.processutils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:27:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:27:04 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/187569470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.791 254904 DEBUG nova.virt.libvirt.vif [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:26:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1646485961',display_name='tempest-TestVolumeBootPattern-server-1646485961',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1646485961',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='625a6939c31646a4a83ea851774cf28c',ramdisk_id='',reservation_id='r-tci0ty26',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1396850361',owner_user_name='tempest-TestVolumeBootPattern-1396850361-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_cer
ts=None,updated_at=2025-12-02T11:26:58Z,user_data=None,user_id='6ccb73a613554d938221b4bf46d7ae83',uuid=24f4facb-0cf2-4494-a321-eb07ca6288b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd7c57f0-6cda-4a73-9695-895342d135db", "address": "fa:16:3e:c4:c7:84", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd7c57f0-6c", "ovs_interfaceid": "fd7c57f0-6cda-4a73-9695-895342d135db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.792 254904 DEBUG nova.network.os_vif_util [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converting VIF {"id": "fd7c57f0-6cda-4a73-9695-895342d135db", "address": "fa:16:3e:c4:c7:84", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd7c57f0-6c", "ovs_interfaceid": "fd7c57f0-6cda-4a73-9695-895342d135db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.793 254904 DEBUG nova.network.os_vif_util [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:c7:84,bridge_name='br-int',has_traffic_filtering=True,id=fd7c57f0-6cda-4a73-9695-895342d135db,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd7c57f0-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.794 254904 DEBUG nova.objects.instance [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lazy-loading 'pci_devices' on Instance uuid 24f4facb-0cf2-4494-a321-eb07ca6288b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.811 254904 DEBUG nova.virt.libvirt.driver [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:27:04 np0005542249 nova_compute[254900]:  <uuid>24f4facb-0cf2-4494-a321-eb07ca6288b7</uuid>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:  <name>instance-00000011</name>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <nova:name>tempest-TestVolumeBootPattern-server-1646485961</nova:name>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:27:04</nova:creationTime>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:27:04 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:        <nova:user uuid="6ccb73a613554d938221b4bf46d7ae83">tempest-TestVolumeBootPattern-1396850361-project-member</nova:user>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:        <nova:project uuid="625a6939c31646a4a83ea851774cf28c">tempest-TestVolumeBootPattern-1396850361</nova:project>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:        <nova:port uuid="fd7c57f0-6cda-4a73-9695-895342d135db">
Dec  2 06:27:04 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <entry name="serial">24f4facb-0cf2-4494-a321-eb07ca6288b7</entry>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <entry name="uuid">24f4facb-0cf2-4494-a321-eb07ca6288b7</entry>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/24f4facb-0cf2-4494-a321-eb07ca6288b7_disk.config">
Dec  2 06:27:04 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:27:04 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="volumes/volume-45df0d82-3996-4a1e-a4b2-322511a14400">
Dec  2 06:27:04 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:27:04 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <serial>45df0d82-3996-4a1e-a4b2-322511a14400</serial>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:c4:c7:84"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <target dev="tapfd7c57f0-6c"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/24f4facb-0cf2-4494-a321-eb07ca6288b7/console.log" append="off"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:27:04 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:27:04 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:27:04 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:27:04 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.812 254904 DEBUG nova.compute.manager [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Preparing to wait for external event network-vif-plugged-fd7c57f0-6cda-4a73-9695-895342d135db prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.813 254904 DEBUG oslo_concurrency.lockutils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "24f4facb-0cf2-4494-a321-eb07ca6288b7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.813 254904 DEBUG oslo_concurrency.lockutils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "24f4facb-0cf2-4494-a321-eb07ca6288b7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.813 254904 DEBUG oslo_concurrency.lockutils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "24f4facb-0cf2-4494-a321-eb07ca6288b7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.814 254904 DEBUG nova.virt.libvirt.vif [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:26:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1646485961',display_name='tempest-TestVolumeBootPattern-server-1646485961',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1646485961',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='625a6939c31646a4a83ea851774cf28c',ramdisk_id='',reservation_id='r-tci0ty26',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1396850361',owner_user_name='tempest-TestVolumeBootPattern-1396850361-project-member'},tags=TagList,task_state='spawning',terminated_at=None,t
rusted_certs=None,updated_at=2025-12-02T11:26:58Z,user_data=None,user_id='6ccb73a613554d938221b4bf46d7ae83',uuid=24f4facb-0cf2-4494-a321-eb07ca6288b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd7c57f0-6cda-4a73-9695-895342d135db", "address": "fa:16:3e:c4:c7:84", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd7c57f0-6c", "ovs_interfaceid": "fd7c57f0-6cda-4a73-9695-895342d135db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.814 254904 DEBUG nova.network.os_vif_util [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converting VIF {"id": "fd7c57f0-6cda-4a73-9695-895342d135db", "address": "fa:16:3e:c4:c7:84", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd7c57f0-6c", "ovs_interfaceid": "fd7c57f0-6cda-4a73-9695-895342d135db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.815 254904 DEBUG nova.network.os_vif_util [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:c7:84,bridge_name='br-int',has_traffic_filtering=True,id=fd7c57f0-6cda-4a73-9695-895342d135db,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd7c57f0-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.816 254904 DEBUG os_vif [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:c7:84,bridge_name='br-int',has_traffic_filtering=True,id=fd7c57f0-6cda-4a73-9695-895342d135db,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd7c57f0-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.816 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.817 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.817 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.821 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.821 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd7c57f0-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.822 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfd7c57f0-6c, col_values=(('external_ids', {'iface-id': 'fd7c57f0-6cda-4a73-9695-895342d135db', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:c7:84', 'vm-uuid': '24f4facb-0cf2-4494-a321-eb07ca6288b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.824 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:04 np0005542249 NetworkManager[48987]: <info>  [1764674824.8256] manager: (tapfd7c57f0-6c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.826 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.834 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.835 254904 INFO os_vif [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:c7:84,bridge_name='br-int',has_traffic_filtering=True,id=fd7c57f0-6cda-4a73-9695-895342d135db,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd7c57f0-6c')#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.934 254904 DEBUG nova.virt.libvirt.driver [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.935 254904 DEBUG nova.virt.libvirt.driver [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.935 254904 DEBUG nova.virt.libvirt.driver [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] No VIF found with MAC fa:16:3e:c4:c7:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.935 254904 INFO nova.virt.libvirt.driver [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Using config drive#033[00m
Dec  2 06:27:04 np0005542249 nova_compute[254900]: 2025-12-02 11:27:04.962 254904 DEBUG nova.storage.rbd_utils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] rbd image 24f4facb-0cf2-4494-a321-eb07ca6288b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:27:05 np0005542249 nova_compute[254900]: 2025-12-02 11:27:05.680 254904 INFO nova.virt.libvirt.driver [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Creating config drive at /var/lib/nova/instances/24f4facb-0cf2-4494-a321-eb07ca6288b7/disk.config#033[00m
Dec  2 06:27:05 np0005542249 nova_compute[254900]: 2025-12-02 11:27:05.685 254904 DEBUG oslo_concurrency.processutils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/24f4facb-0cf2-4494-a321-eb07ca6288b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvaiagoc3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:27:05 np0005542249 nova_compute[254900]: 2025-12-02 11:27:05.813 254904 DEBUG oslo_concurrency.processutils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/24f4facb-0cf2-4494-a321-eb07ca6288b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvaiagoc3" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:27:05 np0005542249 nova_compute[254900]: 2025-12-02 11:27:05.839 254904 DEBUG nova.storage.rbd_utils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] rbd image 24f4facb-0cf2-4494-a321-eb07ca6288b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:27:05 np0005542249 nova_compute[254900]: 2025-12-02 11:27:05.843 254904 DEBUG oslo_concurrency.processutils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/24f4facb-0cf2-4494-a321-eb07ca6288b7/disk.config 24f4facb-0cf2-4494-a321-eb07ca6288b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:27:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e351 do_prune osdmap full prune enabled
Dec  2 06:27:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e352 e352: 3 total, 3 up, 3 in
Dec  2 06:27:05 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e352: 3 total, 3 up, 3 in
Dec  2 06:27:05 np0005542249 nova_compute[254900]: 2025-12-02 11:27:05.993 254904 DEBUG oslo_concurrency.processutils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/24f4facb-0cf2-4494-a321-eb07ca6288b7/disk.config 24f4facb-0cf2-4494-a321-eb07ca6288b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:27:05 np0005542249 nova_compute[254900]: 2025-12-02 11:27:05.994 254904 INFO nova.virt.libvirt.driver [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Deleting local config drive /var/lib/nova/instances/24f4facb-0cf2-4494-a321-eb07ca6288b7/disk.config because it was imported into RBD.#033[00m
Dec  2 06:27:06 np0005542249 kernel: tapfd7c57f0-6c: entered promiscuous mode
Dec  2 06:27:06 np0005542249 NetworkManager[48987]: <info>  [1764674826.0659] manager: (tapfd7c57f0-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/93)
Dec  2 06:27:06 np0005542249 systemd-udevd[284124]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:27:06 np0005542249 ovn_controller[153849]: 2025-12-02T11:27:06Z|00169|binding|INFO|Claiming lport fd7c57f0-6cda-4a73-9695-895342d135db for this chassis.
Dec  2 06:27:06 np0005542249 ovn_controller[153849]: 2025-12-02T11:27:06Z|00170|binding|INFO|fd7c57f0-6cda-4a73-9695-895342d135db: Claiming fa:16:3e:c4:c7:84 10.100.0.3
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.096 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.106 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:c7:84 10.100.0.3'], port_security=['fa:16:3e:c4:c7:84 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '24f4facb-0cf2-4494-a321-eb07ca6288b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '625a6939c31646a4a83ea851774cf28c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7d24f0af-82a6-4c34-a172-39a54c318d33', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cc823ef0-3e69-4062-a488-a82f483f10bb, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=fd7c57f0-6cda-4a73-9695-895342d135db) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.107 163757 INFO neutron.agent.ovn.metadata.agent [-] Port fd7c57f0-6cda-4a73-9695-895342d135db in datapath acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 bound to our chassis#033[00m
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.108 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754#033[00m
Dec  2 06:27:06 np0005542249 NetworkManager[48987]: <info>  [1764674826.1117] device (tapfd7c57f0-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:27:06 np0005542249 NetworkManager[48987]: <info>  [1764674826.1149] device (tapfd7c57f0-6c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:27:06 np0005542249 systemd-machined[216222]: New machine qemu-17-instance-00000011.
Dec  2 06:27:06 np0005542249 ovn_controller[153849]: 2025-12-02T11:27:06Z|00171|binding|INFO|Setting lport fd7c57f0-6cda-4a73-9695-895342d135db up in Southbound
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.124 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[6536d22c-b305-4ef0-9060-0505a4d7d139]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.125 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapacfaa8ac-01 in ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 06:27:06 np0005542249 ovn_controller[153849]: 2025-12-02T11:27:06Z|00172|binding|INFO|Setting lport fd7c57f0-6cda-4a73-9695-895342d135db ovn-installed in OVS
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.126 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.129 262398 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapacfaa8ac-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.130 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[646fc081-d20a-4bc5-855d-32ce1f152016]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.131 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[6a8608ff-37e6-4991-8cab-4a1f32058c0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.131 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:06 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1472: 321 pgs: 321 active+clean; 134 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 5.8 KiB/s wr, 91 op/s
Dec  2 06:27:06 np0005542249 systemd[1]: Started Virtual Machine qemu-17-instance-00000011.
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.150 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[9832e73a-e316-4e3b-b41c-379f69c4434a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.176 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[2b8c2dd5-0898-425d-8bb9-a2955974e6c7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.204 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[02548d70-f0a9-48cf-834d-8477a43e1e98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:06 np0005542249 NetworkManager[48987]: <info>  [1764674826.2121] manager: (tapacfaa8ac-00): new Veth device (/org/freedesktop/NetworkManager/Devices/94)
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.212 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[38fab555-562d-45b7-8e0c-48231a0718c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:06 np0005542249 systemd-udevd[284128]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.239 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[4c70c2c6-65be-44ab-a86f-f2f68e687932]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.242 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[665ddd56-49c8-4f86-85c2-d1cc6704cce6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:06 np0005542249 NetworkManager[48987]: <info>  [1764674826.2599] device (tapacfaa8ac-00): carrier: link connected
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.264 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[3dc1c17d-f055-40a2-971e-43912614c9c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.277 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[777e9ca0-5a0a-4b95-bbef-43f1f5454bf3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacfaa8ac-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:73:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501908, 'reachable_time': 44025, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284160, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.290 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[540712c4-9b6a-4bf8-b909-8ab8ddbf22f7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fece:73a3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501908, 'tstamp': 501908}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284161, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.305 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[99e2e6e3-b156-489b-8e34-be51f6a75953]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacfaa8ac-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:73:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501908, 'reachable_time': 44025, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284162, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.331 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[fc83e988-75b9-4e09-b8ad-690643dc0e28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.390 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[75adb843-1150-4f2d-9b84-0b56ee75fae4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.391 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacfaa8ac-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.391 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.391 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapacfaa8ac-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:27:06 np0005542249 kernel: tapacfaa8ac-00: entered promiscuous mode
Dec  2 06:27:06 np0005542249 NetworkManager[48987]: <info>  [1764674826.3939] manager: (tapacfaa8ac-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.393 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.395 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.396 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapacfaa8ac-00, col_values=(('external_ids', {'iface-id': '1636ad30-406d-4138-823e-abbe7f4d87ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.397 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:06 np0005542249 ovn_controller[153849]: 2025-12-02T11:27:06Z|00173|binding|INFO|Releasing lport 1636ad30-406d-4138-823e-abbe7f4d87ac from this chassis (sb_readonly=0)
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.413 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.414 163757 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.415 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[38b1dfe3-5268-4a19-a14a-4a17d187fe20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.416 163757 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: global
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]:    log         /dev/log local0 debug
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]:    log-tag     haproxy-metadata-proxy-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]:    user        root
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]:    group       root
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]:    maxconn     1024
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]:    pidfile     /var/lib/neutron/external/pids/acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754.pid.haproxy
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]:    daemon
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: defaults
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]:    log global
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]:    mode http
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]:    option httplog
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]:    option dontlognull
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]:    option http-server-close
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]:    option forwardfor
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]:    retries                 3
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]:    timeout http-request    30s
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]:    timeout connect         30s
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]:    timeout client          32s
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]:    timeout server          32s
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]:    timeout http-keep-alive 30s
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: listen listener
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]:    bind 169.254.169.254:80
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]:    http-request add-header X-OVN-Network-ID acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 06:27:06 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:06.418 163757 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'env', 'PROCESS_TAG=haproxy-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.542 254904 DEBUG nova.compute.manager [req-3bd19467-dd95-4f80-b43c-2f43fed169bf req-4bc4e6a8-4616-43b5-8d82-aed6322461a5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Received event network-vif-plugged-fd7c57f0-6cda-4a73-9695-895342d135db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.543 254904 DEBUG oslo_concurrency.lockutils [req-3bd19467-dd95-4f80-b43c-2f43fed169bf req-4bc4e6a8-4616-43b5-8d82-aed6322461a5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "24f4facb-0cf2-4494-a321-eb07ca6288b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.543 254904 DEBUG oslo_concurrency.lockutils [req-3bd19467-dd95-4f80-b43c-2f43fed169bf req-4bc4e6a8-4616-43b5-8d82-aed6322461a5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "24f4facb-0cf2-4494-a321-eb07ca6288b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.544 254904 DEBUG oslo_concurrency.lockutils [req-3bd19467-dd95-4f80-b43c-2f43fed169bf req-4bc4e6a8-4616-43b5-8d82-aed6322461a5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "24f4facb-0cf2-4494-a321-eb07ca6288b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.545 254904 DEBUG nova.compute.manager [req-3bd19467-dd95-4f80-b43c-2f43fed169bf req-4bc4e6a8-4616-43b5-8d82-aed6322461a5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Processing event network-vif-plugged-fd7c57f0-6cda-4a73-9695-895342d135db _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 06:27:06 np0005542249 podman[284212]: 2025-12-02 11:27:06.779431127 +0000 UTC m=+0.044813312 container create c977f729513b52de10f2d1fc98ef72ae216f12b361346acdeb0cd60c92d23237 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  2 06:27:06 np0005542249 systemd[1]: Started libpod-conmon-c977f729513b52de10f2d1fc98ef72ae216f12b361346acdeb0cd60c92d23237.scope.
Dec  2 06:27:06 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:27:06 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/985c7a6486d6202dd46666fddc62dc1e14e42b3fe4e6aba825fe98c144c9683b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 06:27:06 np0005542249 podman[284212]: 2025-12-02 11:27:06.755556702 +0000 UTC m=+0.020938887 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:27:06 np0005542249 podman[284212]: 2025-12-02 11:27:06.860400925 +0000 UTC m=+0.125783140 container init c977f729513b52de10f2d1fc98ef72ae216f12b361346acdeb0cd60c92d23237 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:27:06 np0005542249 podman[284212]: 2025-12-02 11:27:06.867499237 +0000 UTC m=+0.132881422 container start c977f729513b52de10f2d1fc98ef72ae216f12b361346acdeb0cd60c92d23237 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.890 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674826.8895617, 24f4facb-0cf2-4494-a321-eb07ca6288b7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.891 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] VM Started (Lifecycle Event)#033[00m
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.895 254904 DEBUG nova.compute.manager [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.900 254904 DEBUG nova.virt.libvirt.driver [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.904 254904 INFO nova.virt.libvirt.driver [-] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Instance spawned successfully.#033[00m
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.905 254904 DEBUG nova.virt.libvirt.driver [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 06:27:06 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[284251]: [NOTICE]   (284256) : New worker (284258) forked
Dec  2 06:27:06 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[284251]: [NOTICE]   (284256) : Loading success.
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.919 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.930 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.934 254904 DEBUG nova.virt.libvirt.driver [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.935 254904 DEBUG nova.virt.libvirt.driver [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.935 254904 DEBUG nova.virt.libvirt.driver [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.936 254904 DEBUG nova.virt.libvirt.driver [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.936 254904 DEBUG nova.virt.libvirt.driver [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.937 254904 DEBUG nova.virt.libvirt.driver [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.991 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.992 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674826.8908706, 24f4facb-0cf2-4494-a321-eb07ca6288b7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:27:06 np0005542249 nova_compute[254900]: 2025-12-02 11:27:06.992 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] VM Paused (Lifecycle Event)#033[00m
Dec  2 06:27:07 np0005542249 nova_compute[254900]: 2025-12-02 11:27:07.013 254904 INFO nova.compute.manager [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Took 2.79 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 06:27:07 np0005542249 nova_compute[254900]: 2025-12-02 11:27:07.013 254904 DEBUG nova.compute.manager [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:27:07 np0005542249 nova_compute[254900]: 2025-12-02 11:27:07.014 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:27:07 np0005542249 nova_compute[254900]: 2025-12-02 11:27:07.021 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674826.8989327, 24f4facb-0cf2-4494-a321-eb07ca6288b7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:27:07 np0005542249 nova_compute[254900]: 2025-12-02 11:27:07.021 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] VM Resumed (Lifecycle Event)#033[00m
Dec  2 06:27:07 np0005542249 nova_compute[254900]: 2025-12-02 11:27:07.048 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:27:07 np0005542249 nova_compute[254900]: 2025-12-02 11:27:07.051 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:27:07 np0005542249 nova_compute[254900]: 2025-12-02 11:27:07.070 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:27:07 np0005542249 nova_compute[254900]: 2025-12-02 11:27:07.082 254904 INFO nova.compute.manager [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Took 9.10 seconds to build instance.#033[00m
Dec  2 06:27:07 np0005542249 nova_compute[254900]: 2025-12-02 11:27:07.097 254904 DEBUG oslo_concurrency.lockutils [None req-848e5c31-7563-47d4-b31a-ab3c2091bd24 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "24f4facb-0cf2-4494-a321-eb07ca6288b7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:27:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:27:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e352 do_prune osdmap full prune enabled
Dec  2 06:27:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e353 e353: 3 total, 3 up, 3 in
Dec  2 06:27:07 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e353: 3 total, 3 up, 3 in
Dec  2 06:27:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:27:07 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1459371370' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:27:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:27:07 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1459371370' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:27:08 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1474: 321 pgs: 321 active+clean; 134 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 278 KiB/s rd, 5.2 KiB/s wr, 122 op/s
Dec  2 06:27:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e353 do_prune osdmap full prune enabled
Dec  2 06:27:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e354 e354: 3 total, 3 up, 3 in
Dec  2 06:27:08 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e354: 3 total, 3 up, 3 in
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.629 254904 DEBUG nova.compute.manager [req-6f989921-92b6-4fb3-9a56-a582fec76371 req-d0860f48-22b8-4207-a7e2-37ae57d55660 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Received event network-vif-plugged-fd7c57f0-6cda-4a73-9695-895342d135db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.630 254904 DEBUG oslo_concurrency.lockutils [req-6f989921-92b6-4fb3-9a56-a582fec76371 req-d0860f48-22b8-4207-a7e2-37ae57d55660 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "24f4facb-0cf2-4494-a321-eb07ca6288b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.630 254904 DEBUG oslo_concurrency.lockutils [req-6f989921-92b6-4fb3-9a56-a582fec76371 req-d0860f48-22b8-4207-a7e2-37ae57d55660 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "24f4facb-0cf2-4494-a321-eb07ca6288b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.630 254904 DEBUG oslo_concurrency.lockutils [req-6f989921-92b6-4fb3-9a56-a582fec76371 req-d0860f48-22b8-4207-a7e2-37ae57d55660 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "24f4facb-0cf2-4494-a321-eb07ca6288b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.631 254904 DEBUG nova.compute.manager [req-6f989921-92b6-4fb3-9a56-a582fec76371 req-d0860f48-22b8-4207-a7e2-37ae57d55660 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] No waiting events found dispatching network-vif-plugged-fd7c57f0-6cda-4a73-9695-895342d135db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.631 254904 WARNING nova.compute.manager [req-6f989921-92b6-4fb3-9a56-a582fec76371 req-d0860f48-22b8-4207-a7e2-37ae57d55660 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Received unexpected event network-vif-plugged-fd7c57f0-6cda-4a73-9695-895342d135db for instance with vm_state active and task_state None.#033[00m
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.635 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.720 254904 DEBUG oslo_concurrency.lockutils [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "24f4facb-0cf2-4494-a321-eb07ca6288b7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.721 254904 DEBUG oslo_concurrency.lockutils [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "24f4facb-0cf2-4494-a321-eb07ca6288b7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.721 254904 DEBUG oslo_concurrency.lockutils [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "24f4facb-0cf2-4494-a321-eb07ca6288b7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.723 254904 DEBUG oslo_concurrency.lockutils [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "24f4facb-0cf2-4494-a321-eb07ca6288b7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.723 254904 DEBUG oslo_concurrency.lockutils [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "24f4facb-0cf2-4494-a321-eb07ca6288b7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.725 254904 INFO nova.compute.manager [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Terminating instance#033[00m
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.726 254904 DEBUG nova.compute.manager [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:27:08 np0005542249 kernel: tapfd7c57f0-6c (unregistering): left promiscuous mode
Dec  2 06:27:08 np0005542249 NetworkManager[48987]: <info>  [1764674828.7739] device (tapfd7c57f0-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:27:08 np0005542249 ovn_controller[153849]: 2025-12-02T11:27:08Z|00174|binding|INFO|Releasing lport fd7c57f0-6cda-4a73-9695-895342d135db from this chassis (sb_readonly=0)
Dec  2 06:27:08 np0005542249 ovn_controller[153849]: 2025-12-02T11:27:08Z|00175|binding|INFO|Setting lport fd7c57f0-6cda-4a73-9695-895342d135db down in Southbound
Dec  2 06:27:08 np0005542249 ovn_controller[153849]: 2025-12-02T11:27:08Z|00176|binding|INFO|Removing iface tapfd7c57f0-6c ovn-installed in OVS
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.790 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:08.796 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:c7:84 10.100.0.3'], port_security=['fa:16:3e:c4:c7:84 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '24f4facb-0cf2-4494-a321-eb07ca6288b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '625a6939c31646a4a83ea851774cf28c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7d24f0af-82a6-4c34-a172-39a54c318d33', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cc823ef0-3e69-4062-a488-a82f483f10bb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=fd7c57f0-6cda-4a73-9695-895342d135db) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:27:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:08.798 163757 INFO neutron.agent.ovn.metadata.agent [-] Port fd7c57f0-6cda-4a73-9695-895342d135db in datapath acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 unbound from our chassis#033[00m
Dec  2 06:27:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:08.799 163757 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 06:27:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:08.800 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[d6979f27-5406-4259-9e71-ec8e331286dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:08.801 163757 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 namespace which is not needed anymore#033[00m
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.823 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:08 np0005542249 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Deactivated successfully.
Dec  2 06:27:08 np0005542249 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Consumed 2.823s CPU time.
Dec  2 06:27:08 np0005542249 systemd-machined[216222]: Machine qemu-17-instance-00000011 terminated.
Dec  2 06:27:08 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[284251]: [NOTICE]   (284256) : haproxy version is 2.8.14-c23fe91
Dec  2 06:27:08 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[284251]: [NOTICE]   (284256) : path to executable is /usr/sbin/haproxy
Dec  2 06:27:08 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[284251]: [WARNING]  (284256) : Exiting Master process...
Dec  2 06:27:08 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[284251]: [WARNING]  (284256) : Exiting Master process...
Dec  2 06:27:08 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[284251]: [ALERT]    (284256) : Current worker (284258) exited with code 143 (Terminated)
Dec  2 06:27:08 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[284251]: [WARNING]  (284256) : All workers exited. Exiting... (0)
Dec  2 06:27:08 np0005542249 systemd[1]: libpod-c977f729513b52de10f2d1fc98ef72ae216f12b361346acdeb0cd60c92d23237.scope: Deactivated successfully.
Dec  2 06:27:08 np0005542249 conmon[284251]: conmon c977f729513b52de10f2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c977f729513b52de10f2d1fc98ef72ae216f12b361346acdeb0cd60c92d23237.scope/container/memory.events
Dec  2 06:27:08 np0005542249 podman[284289]: 2025-12-02 11:27:08.952973899 +0000 UTC m=+0.051002479 container died c977f729513b52de10f2d1fc98ef72ae216f12b361346acdeb0cd60c92d23237 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:27:08 np0005542249 NetworkManager[48987]: <info>  [1764674828.9554] manager: (tapfd7c57f0-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/96)
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.958 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.965 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.972 254904 INFO nova.virt.libvirt.driver [-] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Instance destroyed successfully.#033[00m
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.973 254904 DEBUG nova.objects.instance [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lazy-loading 'resources' on Instance uuid 24f4facb-0cf2-4494-a321-eb07ca6288b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.990 254904 DEBUG nova.virt.libvirt.vif [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:26:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1646485961',display_name='tempest-TestVolumeBootPattern-server-1646485961',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1646485961',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:27:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='625a6939c31646a4a83ea851774cf28c',ramdisk_id='',reservation_id='r-tci0ty26',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1396850361',owner_user_name='tempest-TestVolumeBootPattern-1396850361-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:27:07Z,user_data=None,user_id='6ccb73a613554d938221b4bf46d7ae83',uuid=24f4facb-0cf2-4494-a321-eb07ca6288b7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fd7c57f0-6cda-4a73-9695-895342d135db", "address": "fa:16:3e:c4:c7:84", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd7c57f0-6c", "ovs_interfaceid": "fd7c57f0-6cda-4a73-9695-895342d135db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.990 254904 DEBUG nova.network.os_vif_util [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converting VIF {"id": "fd7c57f0-6cda-4a73-9695-895342d135db", "address": "fa:16:3e:c4:c7:84", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd7c57f0-6c", "ovs_interfaceid": "fd7c57f0-6cda-4a73-9695-895342d135db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:27:08 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c977f729513b52de10f2d1fc98ef72ae216f12b361346acdeb0cd60c92d23237-userdata-shm.mount: Deactivated successfully.
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.991 254904 DEBUG nova.network.os_vif_util [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:c7:84,bridge_name='br-int',has_traffic_filtering=True,id=fd7c57f0-6cda-4a73-9695-895342d135db,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd7c57f0-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.992 254904 DEBUG os_vif [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:c7:84,bridge_name='br-int',has_traffic_filtering=True,id=fd7c57f0-6cda-4a73-9695-895342d135db,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd7c57f0-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.995 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.995 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd7c57f0-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:27:08 np0005542249 systemd[1]: var-lib-containers-storage-overlay-985c7a6486d6202dd46666fddc62dc1e14e42b3fe4e6aba825fe98c144c9683b-merged.mount: Deactivated successfully.
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.997 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:08 np0005542249 nova_compute[254900]: 2025-12-02 11:27:08.999 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:09 np0005542249 nova_compute[254900]: 2025-12-02 11:27:09.002 254904 INFO os_vif [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:c7:84,bridge_name='br-int',has_traffic_filtering=True,id=fd7c57f0-6cda-4a73-9695-895342d135db,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd7c57f0-6c')#033[00m
Dec  2 06:27:09 np0005542249 podman[284289]: 2025-12-02 11:27:09.01331577 +0000 UTC m=+0.111344260 container cleanup c977f729513b52de10f2d1fc98ef72ae216f12b361346acdeb0cd60c92d23237 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  2 06:27:09 np0005542249 systemd[1]: libpod-conmon-c977f729513b52de10f2d1fc98ef72ae216f12b361346acdeb0cd60c92d23237.scope: Deactivated successfully.
Dec  2 06:27:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:27:09 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1292150801' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:27:09 np0005542249 podman[284335]: 2025-12-02 11:27:09.101191014 +0000 UTC m=+0.059729625 container remove c977f729513b52de10f2d1fc98ef72ae216f12b361346acdeb0cd60c92d23237 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec  2 06:27:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:09.110 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[10bb3688-6890-4d92-98ef-7983958e8d82]: (4, ('Tue Dec  2 11:27:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 (c977f729513b52de10f2d1fc98ef72ae216f12b361346acdeb0cd60c92d23237)\nc977f729513b52de10f2d1fc98ef72ae216f12b361346acdeb0cd60c92d23237\nTue Dec  2 11:27:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 (c977f729513b52de10f2d1fc98ef72ae216f12b361346acdeb0cd60c92d23237)\nc977f729513b52de10f2d1fc98ef72ae216f12b361346acdeb0cd60c92d23237\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:09.113 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[027c0b40-b577-4b72-b695-a7e9136e07ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:09.115 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacfaa8ac-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:27:09 np0005542249 kernel: tapacfaa8ac-00: left promiscuous mode
Dec  2 06:27:09 np0005542249 nova_compute[254900]: 2025-12-02 11:27:09.117 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:09 np0005542249 nova_compute[254900]: 2025-12-02 11:27:09.136 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:09.139 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[194f4689-5f67-4354-a515-30bf0f6868af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:09.153 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[6332843d-e3d9-4c8f-8297-cbda2a2b3284]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:09.154 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[808b84af-9c38-4bea-8e37-638ec35ce0fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:09.175 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[1ab09079-14b9-4d36-8851-cf09e37fcaea]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501902, 'reachable_time': 18089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284359, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:09 np0005542249 systemd[1]: run-netns-ovnmeta\x2dacfaa8ac\x2d0b3c\x2d4cdd\x2da6b8\x2da70a713ae754.mount: Deactivated successfully.
Dec  2 06:27:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:09.180 164036 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 06:27:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:09.180 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[be8c555c-cbf8-46f5-923e-8b58a58de59d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:09 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:09.406 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:27:09 np0005542249 nova_compute[254900]: 2025-12-02 11:27:09.510 254904 INFO nova.virt.libvirt.driver [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Deleting instance files /var/lib/nova/instances/24f4facb-0cf2-4494-a321-eb07ca6288b7_del#033[00m
Dec  2 06:27:09 np0005542249 nova_compute[254900]: 2025-12-02 11:27:09.511 254904 INFO nova.virt.libvirt.driver [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Deletion of /var/lib/nova/instances/24f4facb-0cf2-4494-a321-eb07ca6288b7_del complete#033[00m
Dec  2 06:27:09 np0005542249 nova_compute[254900]: 2025-12-02 11:27:09.573 254904 INFO nova.compute.manager [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:27:09 np0005542249 nova_compute[254900]: 2025-12-02 11:27:09.574 254904 DEBUG oslo.service.loopingcall [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:27:09 np0005542249 nova_compute[254900]: 2025-12-02 11:27:09.574 254904 DEBUG nova.compute.manager [-] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:27:09 np0005542249 nova_compute[254900]: 2025-12-02 11:27:09.574 254904 DEBUG nova.network.neutron [-] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:27:10 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1476: 321 pgs: 321 active+clean; 134 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 30 KiB/s wr, 225 op/s
Dec  2 06:27:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e354 do_prune osdmap full prune enabled
Dec  2 06:27:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e355 e355: 3 total, 3 up, 3 in
Dec  2 06:27:10 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e355: 3 total, 3 up, 3 in
Dec  2 06:27:10 np0005542249 nova_compute[254900]: 2025-12-02 11:27:10.735 254904 DEBUG nova.compute.manager [req-a0a26514-80e3-435a-91b9-bc4743874c6f req-3e42d69f-319a-48d4-875d-babc641bf2ca 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Received event network-vif-unplugged-fd7c57f0-6cda-4a73-9695-895342d135db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:27:10 np0005542249 nova_compute[254900]: 2025-12-02 11:27:10.736 254904 DEBUG oslo_concurrency.lockutils [req-a0a26514-80e3-435a-91b9-bc4743874c6f req-3e42d69f-319a-48d4-875d-babc641bf2ca 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "24f4facb-0cf2-4494-a321-eb07ca6288b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:27:10 np0005542249 nova_compute[254900]: 2025-12-02 11:27:10.736 254904 DEBUG oslo_concurrency.lockutils [req-a0a26514-80e3-435a-91b9-bc4743874c6f req-3e42d69f-319a-48d4-875d-babc641bf2ca 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "24f4facb-0cf2-4494-a321-eb07ca6288b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:27:10 np0005542249 nova_compute[254900]: 2025-12-02 11:27:10.737 254904 DEBUG oslo_concurrency.lockutils [req-a0a26514-80e3-435a-91b9-bc4743874c6f req-3e42d69f-319a-48d4-875d-babc641bf2ca 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "24f4facb-0cf2-4494-a321-eb07ca6288b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:27:10 np0005542249 nova_compute[254900]: 2025-12-02 11:27:10.737 254904 DEBUG nova.compute.manager [req-a0a26514-80e3-435a-91b9-bc4743874c6f req-3e42d69f-319a-48d4-875d-babc641bf2ca 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] No waiting events found dispatching network-vif-unplugged-fd7c57f0-6cda-4a73-9695-895342d135db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:27:10 np0005542249 nova_compute[254900]: 2025-12-02 11:27:10.738 254904 DEBUG nova.compute.manager [req-a0a26514-80e3-435a-91b9-bc4743874c6f req-3e42d69f-319a-48d4-875d-babc641bf2ca 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Received event network-vif-unplugged-fd7c57f0-6cda-4a73-9695-895342d135db for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 06:27:10 np0005542249 nova_compute[254900]: 2025-12-02 11:27:10.738 254904 DEBUG nova.compute.manager [req-a0a26514-80e3-435a-91b9-bc4743874c6f req-3e42d69f-319a-48d4-875d-babc641bf2ca 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Received event network-vif-plugged-fd7c57f0-6cda-4a73-9695-895342d135db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:27:10 np0005542249 nova_compute[254900]: 2025-12-02 11:27:10.738 254904 DEBUG oslo_concurrency.lockutils [req-a0a26514-80e3-435a-91b9-bc4743874c6f req-3e42d69f-319a-48d4-875d-babc641bf2ca 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "24f4facb-0cf2-4494-a321-eb07ca6288b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:27:10 np0005542249 nova_compute[254900]: 2025-12-02 11:27:10.739 254904 DEBUG oslo_concurrency.lockutils [req-a0a26514-80e3-435a-91b9-bc4743874c6f req-3e42d69f-319a-48d4-875d-babc641bf2ca 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "24f4facb-0cf2-4494-a321-eb07ca6288b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:27:10 np0005542249 nova_compute[254900]: 2025-12-02 11:27:10.739 254904 DEBUG oslo_concurrency.lockutils [req-a0a26514-80e3-435a-91b9-bc4743874c6f req-3e42d69f-319a-48d4-875d-babc641bf2ca 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "24f4facb-0cf2-4494-a321-eb07ca6288b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:27:10 np0005542249 nova_compute[254900]: 2025-12-02 11:27:10.740 254904 DEBUG nova.compute.manager [req-a0a26514-80e3-435a-91b9-bc4743874c6f req-3e42d69f-319a-48d4-875d-babc641bf2ca 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] No waiting events found dispatching network-vif-plugged-fd7c57f0-6cda-4a73-9695-895342d135db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:27:10 np0005542249 nova_compute[254900]: 2025-12-02 11:27:10.740 254904 WARNING nova.compute.manager [req-a0a26514-80e3-435a-91b9-bc4743874c6f req-3e42d69f-319a-48d4-875d-babc641bf2ca 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Received unexpected event network-vif-plugged-fd7c57f0-6cda-4a73-9695-895342d135db for instance with vm_state active and task_state deleting.#033[00m
Dec  2 06:27:10 np0005542249 nova_compute[254900]: 2025-12-02 11:27:10.981 254904 DEBUG nova.network.neutron [-] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:27:11 np0005542249 nova_compute[254900]: 2025-12-02 11:27:11.002 254904 INFO nova.compute.manager [-] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Took 1.43 seconds to deallocate network for instance.#033[00m
Dec  2 06:27:11 np0005542249 nova_compute[254900]: 2025-12-02 11:27:11.054 254904 DEBUG nova.compute.manager [req-1fe5200d-40dc-4e8b-8ef7-d8bd65563f82 req-053a3559-9ca7-4c24-a803-5240a47f6b39 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Received event network-vif-deleted-fd7c57f0-6cda-4a73-9695-895342d135db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:27:11 np0005542249 podman[284365]: 2025-12-02 11:27:11.061554815 +0000 UTC m=+0.129291304 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  2 06:27:11 np0005542249 nova_compute[254900]: 2025-12-02 11:27:11.190 254904 INFO nova.compute.manager [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Took 0.19 seconds to detach 1 volumes for instance.#033[00m
Dec  2 06:27:11 np0005542249 nova_compute[254900]: 2025-12-02 11:27:11.193 254904 DEBUG nova.compute.manager [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Deleting volume: 45df0d82-3996-4a1e-a4b2-322511a14400 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Dec  2 06:27:11 np0005542249 nova_compute[254900]: 2025-12-02 11:27:11.412 254904 DEBUG oslo_concurrency.lockutils [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:27:11 np0005542249 nova_compute[254900]: 2025-12-02 11:27:11.414 254904 DEBUG oslo_concurrency.lockutils [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:27:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e355 do_prune osdmap full prune enabled
Dec  2 06:27:11 np0005542249 nova_compute[254900]: 2025-12-02 11:27:11.481 254904 DEBUG oslo_concurrency.processutils [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:27:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e356 e356: 3 total, 3 up, 3 in
Dec  2 06:27:11 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e356: 3 total, 3 up, 3 in
Dec  2 06:27:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:27:11 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1969889013' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:27:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:27:11 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1969889013' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:27:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:27:11 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3666736574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:27:11 np0005542249 nova_compute[254900]: 2025-12-02 11:27:11.957 254904 DEBUG oslo_concurrency.processutils [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:27:11 np0005542249 nova_compute[254900]: 2025-12-02 11:27:11.967 254904 DEBUG nova.compute.provider_tree [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:27:11 np0005542249 nova_compute[254900]: 2025-12-02 11:27:11.984 254904 DEBUG nova.scheduler.client.report [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:27:12 np0005542249 nova_compute[254900]: 2025-12-02 11:27:12.003 254904 DEBUG oslo_concurrency.lockutils [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:27:12 np0005542249 nova_compute[254900]: 2025-12-02 11:27:12.028 254904 INFO nova.scheduler.client.report [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Deleted allocations for instance 24f4facb-0cf2-4494-a321-eb07ca6288b7#033[00m
Dec  2 06:27:12 np0005542249 nova_compute[254900]: 2025-12-02 11:27:12.103 254904 DEBUG oslo_concurrency.lockutils [None req-a9074893-513b-4480-b0ba-6e8191e140ef 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "24f4facb-0cf2-4494-a321-eb07ca6288b7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.383s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:27:12 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1479: 321 pgs: 321 active+clean; 134 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 38 KiB/s wr, 317 op/s
Dec  2 06:27:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:27:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e356 do_prune osdmap full prune enabled
Dec  2 06:27:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e357 e357: 3 total, 3 up, 3 in
Dec  2 06:27:12 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e357: 3 total, 3 up, 3 in
Dec  2 06:27:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e357 do_prune osdmap full prune enabled
Dec  2 06:27:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e358 e358: 3 total, 3 up, 3 in
Dec  2 06:27:13 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e358: 3 total, 3 up, 3 in
Dec  2 06:27:13 np0005542249 nova_compute[254900]: 2025-12-02 11:27:13.638 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:27:13 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4117989376' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:27:13 np0005542249 nova_compute[254900]: 2025-12-02 11:27:13.998 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:14 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1482: 321 pgs: 321 active+clean; 134 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 6.9 KiB/s wr, 286 op/s
Dec  2 06:27:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e358 do_prune osdmap full prune enabled
Dec  2 06:27:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e359 e359: 3 total, 3 up, 3 in
Dec  2 06:27:14 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e359: 3 total, 3 up, 3 in
Dec  2 06:27:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:27:15 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2719792267' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:27:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:27:15 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2719792267' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:27:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:27:15 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/837599220' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:27:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:27:15 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/837599220' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:27:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e359 do_prune osdmap full prune enabled
Dec  2 06:27:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e360 e360: 3 total, 3 up, 3 in
Dec  2 06:27:15 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e360: 3 total, 3 up, 3 in
Dec  2 06:27:16 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1485: 321 pgs: 321 active+clean; 134 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 7.4 KiB/s wr, 248 op/s
Dec  2 06:27:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:27:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e360 do_prune osdmap full prune enabled
Dec  2 06:27:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e361 e361: 3 total, 3 up, 3 in
Dec  2 06:27:17 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e361: 3 total, 3 up, 3 in
Dec  2 06:27:18 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1487: 321 pgs: 321 active+clean; 99 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 193 KiB/s rd, 12 KiB/s wr, 260 op/s
Dec  2 06:27:18 np0005542249 nova_compute[254900]: 2025-12-02 11:27:18.688 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:27:18 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2461794901' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:27:19 np0005542249 nova_compute[254900]: 2025-12-02 11:27:19.000 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e361 do_prune osdmap full prune enabled
Dec  2 06:27:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e362 e362: 3 total, 3 up, 3 in
Dec  2 06:27:19 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e362: 3 total, 3 up, 3 in
Dec  2 06:27:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:19.842 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:27:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:19.843 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:27:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:19.843 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:27:20 np0005542249 podman[284407]: 2025-12-02 11:27:20.097931621 +0000 UTC m=+0.172987555 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:27:20 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1489: 321 pgs: 321 active+clean; 88 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 8.2 KiB/s wr, 162 op/s
Dec  2 06:27:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e362 do_prune osdmap full prune enabled
Dec  2 06:27:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e363 e363: 3 total, 3 up, 3 in
Dec  2 06:27:20 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e363: 3 total, 3 up, 3 in
Dec  2 06:27:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:27:21 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2187622986' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:27:22 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1491: 321 pgs: 321 active+clean; 88 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 9.8 KiB/s wr, 190 op/s
Dec  2 06:27:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:27:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e363 do_prune osdmap full prune enabled
Dec  2 06:27:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e364 e364: 3 total, 3 up, 3 in
Dec  2 06:27:22 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e364: 3 total, 3 up, 3 in
Dec  2 06:27:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:27:22 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3706833006' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:27:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:27:22 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3706833006' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:27:22 np0005542249 podman[284434]: 2025-12-02 11:27:22.981890661 +0000 UTC m=+0.053373024 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:27:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e364 do_prune osdmap full prune enabled
Dec  2 06:27:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e365 e365: 3 total, 3 up, 3 in
Dec  2 06:27:23 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e365: 3 total, 3 up, 3 in
Dec  2 06:27:23 np0005542249 nova_compute[254900]: 2025-12-02 11:27:23.690 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:23 np0005542249 nova_compute[254900]: 2025-12-02 11:27:23.971 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764674828.9701366, 24f4facb-0cf2-4494-a321-eb07ca6288b7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:27:23 np0005542249 nova_compute[254900]: 2025-12-02 11:27:23.971 254904 INFO nova.compute.manager [-] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] VM Stopped (Lifecycle Event)#033[00m
Dec  2 06:27:24 np0005542249 nova_compute[254900]: 2025-12-02 11:27:23.999 254904 DEBUG nova.compute.manager [None req-2482b94b-43dc-4e41-91f6-e0e0b8f52bf4 - - - - - -] [instance: 24f4facb-0cf2-4494-a321-eb07ca6288b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:27:24 np0005542249 nova_compute[254900]: 2025-12-02 11:27:24.002 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:24 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1494: 321 pgs: 321 active+clean; 88 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 6.2 KiB/s wr, 117 op/s
Dec  2 06:27:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:27:25 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/405227870' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:27:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e365 do_prune osdmap full prune enabled
Dec  2 06:27:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e366 e366: 3 total, 3 up, 3 in
Dec  2 06:27:25 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e366: 3 total, 3 up, 3 in
Dec  2 06:27:26 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1496: 321 pgs: 321 active+clean; 110 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 887 KiB/s rd, 2.4 MiB/s wr, 124 op/s
Dec  2 06:27:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:27:26
Dec  2 06:27:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:27:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:27:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', '.mgr', 'backups']
Dec  2 06:27:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:27:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:27:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:27:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:27:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:27:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:27:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:27:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e366 do_prune osdmap full prune enabled
Dec  2 06:27:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:27:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:27:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:27:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:27:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:27:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:27:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:27:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:27:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:27:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:27:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e367 e367: 3 total, 3 up, 3 in
Dec  2 06:27:26 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e367: 3 total, 3 up, 3 in
Dec  2 06:27:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:27:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e367 do_prune osdmap full prune enabled
Dec  2 06:27:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e368 e368: 3 total, 3 up, 3 in
Dec  2 06:27:27 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e368: 3 total, 3 up, 3 in
Dec  2 06:27:28 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1499: 321 pgs: 321 active+clean; 134 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 164 KiB/s rd, 4.6 MiB/s wr, 236 op/s
Dec  2 06:27:28 np0005542249 nova_compute[254900]: 2025-12-02 11:27:28.692 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:29 np0005542249 nova_compute[254900]: 2025-12-02 11:27:29.003 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:27:29 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/989737198' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:27:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:27:29 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/989737198' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:27:29 np0005542249 nova_compute[254900]: 2025-12-02 11:27:29.329 254904 DEBUG oslo_concurrency.lockutils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "66196772-8110-4d36-bdfa-d36400059313" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:27:29 np0005542249 nova_compute[254900]: 2025-12-02 11:27:29.331 254904 DEBUG oslo_concurrency.lockutils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "66196772-8110-4d36-bdfa-d36400059313" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:27:29 np0005542249 nova_compute[254900]: 2025-12-02 11:27:29.351 254904 DEBUG nova.compute.manager [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 06:27:29 np0005542249 nova_compute[254900]: 2025-12-02 11:27:29.432 254904 DEBUG oslo_concurrency.lockutils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:27:29 np0005542249 nova_compute[254900]: 2025-12-02 11:27:29.433 254904 DEBUG oslo_concurrency.lockutils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:27:29 np0005542249 nova_compute[254900]: 2025-12-02 11:27:29.441 254904 DEBUG nova.virt.hardware [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 06:27:29 np0005542249 nova_compute[254900]: 2025-12-02 11:27:29.442 254904 INFO nova.compute.claims [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 06:27:29 np0005542249 nova_compute[254900]: 2025-12-02 11:27:29.547 254904 DEBUG oslo_concurrency.processutils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:27:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:27:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1136010085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:27:30 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1500: 321 pgs: 321 active+clean; 134 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 3.6 MiB/s wr, 221 op/s
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.195 254904 DEBUG oslo_concurrency.processutils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.205 254904 DEBUG nova.compute.provider_tree [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.228 254904 DEBUG nova.scheduler.client.report [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.255 254904 DEBUG oslo_concurrency.lockutils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.256 254904 DEBUG nova.compute.manager [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.319 254904 DEBUG nova.compute.manager [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.319 254904 DEBUG nova.network.neutron [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.349 254904 INFO nova.virt.libvirt.driver [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.377 254904 DEBUG nova.compute.manager [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.426 254904 INFO nova.virt.block_device [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Booting with volume 0224a34f-a66a-4461-a7df-9ffc5df8f71a at /dev/vda#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.508 254904 DEBUG nova.policy [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6ccb73a613554d938221b4bf46d7ae83', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '625a6939c31646a4a83ea851774cf28c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.624 254904 DEBUG os_brick.utils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.625 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.637 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.638 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[38f6021b-5ca9-4766-a9bc-43fcf8777eab]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.639 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.652 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.653 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[09a0599c-d59d-492f-85ea-d936b59ffd24]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.654 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.663 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.664 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[f7b44d87-e8b7-4d88-a048-497388ed4983]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.665 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[f6b6f088-6f0a-4d23-abe6-60c01896e2a8]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.665 254904 DEBUG oslo_concurrency.processutils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.690 254904 DEBUG oslo_concurrency.processutils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.696 254904 DEBUG os_brick.initiator.connectors.lightos [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.696 254904 DEBUG os_brick.initiator.connectors.lightos [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.696 254904 DEBUG os_brick.initiator.connectors.lightos [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.697 254904 DEBUG os_brick.utils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] <== get_connector_properties: return (72ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:27:30 np0005542249 nova_compute[254900]: 2025-12-02 11:27:30.697 254904 DEBUG nova.virt.block_device [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Updating existing volume attachment record: f3f075cf-7947-4520-a6c6-0ef3e4ba36ea _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:27:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:27:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3238783682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:27:31 np0005542249 nova_compute[254900]: 2025-12-02 11:27:31.580 254904 DEBUG nova.network.neutron [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Successfully created port: d395b4fa-3166-499c-b9da-7d7d3574b4e3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 06:27:32 np0005542249 nova_compute[254900]: 2025-12-02 11:27:32.048 254904 DEBUG nova.compute.manager [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 06:27:32 np0005542249 nova_compute[254900]: 2025-12-02 11:27:32.050 254904 DEBUG nova.virt.libvirt.driver [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 06:27:32 np0005542249 nova_compute[254900]: 2025-12-02 11:27:32.051 254904 INFO nova.virt.libvirt.driver [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Creating image(s)#033[00m
Dec  2 06:27:32 np0005542249 nova_compute[254900]: 2025-12-02 11:27:32.051 254904 DEBUG nova.virt.libvirt.driver [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  2 06:27:32 np0005542249 nova_compute[254900]: 2025-12-02 11:27:32.052 254904 DEBUG nova.virt.libvirt.driver [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Ensure instance console log exists: /var/lib/nova/instances/66196772-8110-4d36-bdfa-d36400059313/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 06:27:32 np0005542249 nova_compute[254900]: 2025-12-02 11:27:32.052 254904 DEBUG oslo_concurrency.lockutils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:27:32 np0005542249 nova_compute[254900]: 2025-12-02 11:27:32.053 254904 DEBUG oslo_concurrency.lockutils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:27:32 np0005542249 nova_compute[254900]: 2025-12-02 11:27:32.053 254904 DEBUG oslo_concurrency.lockutils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:27:32 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1501: 321 pgs: 321 active+clean; 134 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 146 KiB/s rd, 1.2 MiB/s wr, 203 op/s
Dec  2 06:27:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e368 do_prune osdmap full prune enabled
Dec  2 06:27:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e369 e369: 3 total, 3 up, 3 in
Dec  2 06:27:32 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e369: 3 total, 3 up, 3 in
Dec  2 06:27:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:27:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e369 do_prune osdmap full prune enabled
Dec  2 06:27:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e370 e370: 3 total, 3 up, 3 in
Dec  2 06:27:32 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec  2 06:27:32 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e370: 3 total, 3 up, 3 in
Dec  2 06:27:32 np0005542249 nova_compute[254900]: 2025-12-02 11:27:32.649 254904 DEBUG nova.network.neutron [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Successfully updated port: d395b4fa-3166-499c-b9da-7d7d3574b4e3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 06:27:32 np0005542249 nova_compute[254900]: 2025-12-02 11:27:32.665 254904 DEBUG oslo_concurrency.lockutils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "refresh_cache-66196772-8110-4d36-bdfa-d36400059313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:27:32 np0005542249 nova_compute[254900]: 2025-12-02 11:27:32.666 254904 DEBUG oslo_concurrency.lockutils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquired lock "refresh_cache-66196772-8110-4d36-bdfa-d36400059313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:27:32 np0005542249 nova_compute[254900]: 2025-12-02 11:27:32.666 254904 DEBUG nova.network.neutron [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 06:27:32 np0005542249 nova_compute[254900]: 2025-12-02 11:27:32.797 254904 DEBUG nova.compute.manager [req-b3806d68-9204-4123-b9a4-665e43d1ee1e req-161c9c71-198d-4563-863f-e7f4e755ad35 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Received event network-changed-d395b4fa-3166-499c-b9da-7d7d3574b4e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:27:32 np0005542249 nova_compute[254900]: 2025-12-02 11:27:32.798 254904 DEBUG nova.compute.manager [req-b3806d68-9204-4123-b9a4-665e43d1ee1e req-161c9c71-198d-4563-863f-e7f4e755ad35 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Refreshing instance network info cache due to event network-changed-d395b4fa-3166-499c-b9da-7d7d3574b4e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:27:32 np0005542249 nova_compute[254900]: 2025-12-02 11:27:32.799 254904 DEBUG oslo_concurrency.lockutils [req-b3806d68-9204-4123-b9a4-665e43d1ee1e req-161c9c71-198d-4563-863f-e7f4e755ad35 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-66196772-8110-4d36-bdfa-d36400059313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:27:32 np0005542249 nova_compute[254900]: 2025-12-02 11:27:32.859 254904 DEBUG nova.network.neutron [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 06:27:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:27:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/701805291' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:27:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:27:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/701805291' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:27:33 np0005542249 nova_compute[254900]: 2025-12-02 11:27:33.694 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:33 np0005542249 nova_compute[254900]: 2025-12-02 11:27:33.994 254904 DEBUG nova.network.neutron [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Updating instance_info_cache with network_info: [{"id": "d395b4fa-3166-499c-b9da-7d7d3574b4e3", "address": "fa:16:3e:74:c6:b6", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd395b4fa-31", "ovs_interfaceid": "d395b4fa-3166-499c-b9da-7d7d3574b4e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.004 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.011 254904 DEBUG oslo_concurrency.lockutils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Releasing lock "refresh_cache-66196772-8110-4d36-bdfa-d36400059313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.011 254904 DEBUG nova.compute.manager [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Instance network_info: |[{"id": "d395b4fa-3166-499c-b9da-7d7d3574b4e3", "address": "fa:16:3e:74:c6:b6", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd395b4fa-31", "ovs_interfaceid": "d395b4fa-3166-499c-b9da-7d7d3574b4e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.011 254904 DEBUG oslo_concurrency.lockutils [req-b3806d68-9204-4123-b9a4-665e43d1ee1e req-161c9c71-198d-4563-863f-e7f4e755ad35 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-66196772-8110-4d36-bdfa-d36400059313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.012 254904 DEBUG nova.network.neutron [req-b3806d68-9204-4123-b9a4-665e43d1ee1e req-161c9c71-198d-4563-863f-e7f4e755ad35 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Refreshing network info cache for port d395b4fa-3166-499c-b9da-7d7d3574b4e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.014 254904 DEBUG nova.virt.libvirt.driver [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Start _get_guest_xml network_info=[{"id": "d395b4fa-3166-499c-b9da-7d7d3574b4e3", "address": "fa:16:3e:74:c6:b6", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd395b4fa-31", "ovs_interfaceid": "d395b4fa-3166-499c-b9da-7d7d3574b4e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-0224a34f-a66a-4461-a7df-9ffc5df8f71a', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '0224a34f-a66a-4461-a7df-9ffc5df8f71a', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '66196772-8110-4d36-bdfa-d36400059313', 'attached_at': '', 'detached_at': '', 'volume_id': '0224a34f-a66a-4461-a7df-9ffc5df8f71a', 'serial': '0224a34f-a66a-4461-a7df-9ffc5df8f71a'}, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'attachment_id': 'f3f075cf-7947-4520-a6c6-0ef3e4ba36ea', 'delete_on_termination': True, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.022 254904 WARNING nova.virt.libvirt.driver [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.032 254904 DEBUG nova.virt.libvirt.host [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.033 254904 DEBUG nova.virt.libvirt.host [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.037 254904 DEBUG nova.virt.libvirt.host [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.038 254904 DEBUG nova.virt.libvirt.host [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.038 254904 DEBUG nova.virt.libvirt.driver [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.038 254904 DEBUG nova.virt.hardware [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.039 254904 DEBUG nova.virt.hardware [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.039 254904 DEBUG nova.virt.hardware [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.039 254904 DEBUG nova.virt.hardware [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.039 254904 DEBUG nova.virt.hardware [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.040 254904 DEBUG nova.virt.hardware [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.040 254904 DEBUG nova.virt.hardware [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.040 254904 DEBUG nova.virt.hardware [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.040 254904 DEBUG nova.virt.hardware [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.041 254904 DEBUG nova.virt.hardware [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.041 254904 DEBUG nova.virt.hardware [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.071 254904 DEBUG nova.storage.rbd_utils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] rbd image 66196772-8110-4d36-bdfa-d36400059313_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.077 254904 DEBUG oslo_concurrency.processutils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:27:34 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1504: 321 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 312 active+clean; 134 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.6 KiB/s wr, 64 op/s
Dec  2 06:27:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:27:34 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/701466395' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.599 254904 DEBUG oslo_concurrency.processutils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.634 254904 DEBUG nova.virt.libvirt.vif [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:27:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-932115055',display_name='tempest-TestVolumeBootPattern-volume-backed-server-932115055',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-932115055',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM1+BNrSwiIfoSvboLgHMOgj8+ABI7GuXea7hzH0xzjkB5iJlfyMmtzxiEQGFuepDMbjl1J4yDHepNYnTxnOAAq5FpECzzb0WcjXs5+xWv+i5saSs4Cfct94h1yiQ+rLVQ==',key_name='tempest-keypair-1565797974',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='625a6939c31646a4a83ea851774cf28c',ramdisk_id='',reservation_id='r-ipf6jzv7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1396850361',owner_user_name='tempest-TestVolumeBootPattern-1396850361-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:27:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ccb73a613554d938221b4bf46d7ae83',uuid=66196772-8110-4d36-bdfa-d36400059313,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d395b4fa-3166-499c-b9da-7d7d3574b4e3", "address": "fa:16:3e:74:c6:b6", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd395b4fa-31", "ovs_interfaceid": "d395b4fa-3166-499c-b9da-7d7d3574b4e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.635 254904 DEBUG nova.network.os_vif_util [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converting VIF {"id": "d395b4fa-3166-499c-b9da-7d7d3574b4e3", "address": "fa:16:3e:74:c6:b6", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd395b4fa-31", "ovs_interfaceid": "d395b4fa-3166-499c-b9da-7d7d3574b4e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.636 254904 DEBUG nova.network.os_vif_util [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:c6:b6,bridge_name='br-int',has_traffic_filtering=True,id=d395b4fa-3166-499c-b9da-7d7d3574b4e3,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd395b4fa-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.638 254904 DEBUG nova.objects.instance [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lazy-loading 'pci_devices' on Instance uuid 66196772-8110-4d36-bdfa-d36400059313 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.655 254904 DEBUG nova.virt.libvirt.driver [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:27:34 np0005542249 nova_compute[254900]:  <uuid>66196772-8110-4d36-bdfa-d36400059313</uuid>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:  <name>instance-00000012</name>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <nova:name>tempest-TestVolumeBootPattern-volume-backed-server-932115055</nova:name>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:27:34</nova:creationTime>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:27:34 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:        <nova:user uuid="6ccb73a613554d938221b4bf46d7ae83">tempest-TestVolumeBootPattern-1396850361-project-member</nova:user>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:        <nova:project uuid="625a6939c31646a4a83ea851774cf28c">tempest-TestVolumeBootPattern-1396850361</nova:project>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:        <nova:port uuid="d395b4fa-3166-499c-b9da-7d7d3574b4e3">
Dec  2 06:27:34 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <entry name="serial">66196772-8110-4d36-bdfa-d36400059313</entry>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <entry name="uuid">66196772-8110-4d36-bdfa-d36400059313</entry>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/66196772-8110-4d36-bdfa-d36400059313_disk.config">
Dec  2 06:27:34 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:27:34 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="volumes/volume-0224a34f-a66a-4461-a7df-9ffc5df8f71a">
Dec  2 06:27:34 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:27:34 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <serial>0224a34f-a66a-4461-a7df-9ffc5df8f71a</serial>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:74:c6:b6"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <target dev="tapd395b4fa-31"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/66196772-8110-4d36-bdfa-d36400059313/console.log" append="off"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:27:34 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:27:34 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:27:34 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:27:34 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.657 254904 DEBUG nova.compute.manager [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Preparing to wait for external event network-vif-plugged-d395b4fa-3166-499c-b9da-7d7d3574b4e3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.657 254904 DEBUG oslo_concurrency.lockutils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "66196772-8110-4d36-bdfa-d36400059313-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.658 254904 DEBUG oslo_concurrency.lockutils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "66196772-8110-4d36-bdfa-d36400059313-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.658 254904 DEBUG oslo_concurrency.lockutils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "66196772-8110-4d36-bdfa-d36400059313-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.659 254904 DEBUG nova.virt.libvirt.vif [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:27:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-932115055',display_name='tempest-TestVolumeBootPattern-volume-backed-server-932115055',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-932115055',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM1+BNrSwiIfoSvboLgHMOgj8+ABI7GuXea7hzH0xzjkB5iJlfyMmtzxiEQGFuepDMbjl1J4yDHepNYnTxnOAAq5FpECzzb0WcjXs5+xWv+i5saSs4Cfct94h1yiQ+rLVQ==',key_name='tempest-keypair-1565797974',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='625a6939c31646a4a83ea851774cf28c',ramdisk_id='',reservation_id='r-ipf6jzv7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1396850361',owner_user_name='tempest-TestVolumeBootPattern-1396850361-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:27:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ccb73a613554d938221b4bf46d7ae83',uuid=66196772-8110-4d36-bdfa-d36400059313,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d395b4fa-3166-499c-b9da-7d7d3574b4e3", "address": "fa:16:3e:74:c6:b6", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd395b4fa-31", "ovs_interfaceid": "d395b4fa-3166-499c-b9da-7d7d3574b4e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.660 254904 DEBUG nova.network.os_vif_util [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converting VIF {"id": "d395b4fa-3166-499c-b9da-7d7d3574b4e3", "address": "fa:16:3e:74:c6:b6", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd395b4fa-31", "ovs_interfaceid": "d395b4fa-3166-499c-b9da-7d7d3574b4e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.661 254904 DEBUG nova.network.os_vif_util [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:c6:b6,bridge_name='br-int',has_traffic_filtering=True,id=d395b4fa-3166-499c-b9da-7d7d3574b4e3,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd395b4fa-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.661 254904 DEBUG os_vif [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:c6:b6,bridge_name='br-int',has_traffic_filtering=True,id=d395b4fa-3166-499c-b9da-7d7d3574b4e3,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd395b4fa-31') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.662 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.663 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.664 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.668 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.668 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd395b4fa-31, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.669 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd395b4fa-31, col_values=(('external_ids', {'iface-id': 'd395b4fa-3166-499c-b9da-7d7d3574b4e3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:74:c6:b6', 'vm-uuid': '66196772-8110-4d36-bdfa-d36400059313'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:27:34 np0005542249 NetworkManager[48987]: <info>  [1764674854.6731] manager: (tapd395b4fa-31): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.673 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.678 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.679 254904 INFO os_vif [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:c6:b6,bridge_name='br-int',has_traffic_filtering=True,id=d395b4fa-3166-499c-b9da-7d7d3574b4e3,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd395b4fa-31')#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.837 254904 DEBUG nova.virt.libvirt.driver [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.837 254904 DEBUG nova.virt.libvirt.driver [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.838 254904 DEBUG nova.virt.libvirt.driver [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] No VIF found with MAC fa:16:3e:74:c6:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.838 254904 INFO nova.virt.libvirt.driver [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Using config drive#033[00m
Dec  2 06:27:34 np0005542249 nova_compute[254900]: 2025-12-02 11:27:34.873 254904 DEBUG nova.storage.rbd_utils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] rbd image 66196772-8110-4d36-bdfa-d36400059313_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:27:35 np0005542249 nova_compute[254900]: 2025-12-02 11:27:35.542 254904 INFO nova.virt.libvirt.driver [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Creating config drive at /var/lib/nova/instances/66196772-8110-4d36-bdfa-d36400059313/disk.config#033[00m
Dec  2 06:27:35 np0005542249 nova_compute[254900]: 2025-12-02 11:27:35.555 254904 DEBUG oslo_concurrency.processutils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/66196772-8110-4d36-bdfa-d36400059313/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi94u2tr7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:27:35 np0005542249 nova_compute[254900]: 2025-12-02 11:27:35.705 254904 DEBUG oslo_concurrency.processutils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/66196772-8110-4d36-bdfa-d36400059313/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi94u2tr7" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:27:35 np0005542249 nova_compute[254900]: 2025-12-02 11:27:35.750 254904 DEBUG nova.storage.rbd_utils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] rbd image 66196772-8110-4d36-bdfa-d36400059313_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:27:35 np0005542249 nova_compute[254900]: 2025-12-02 11:27:35.757 254904 DEBUG oslo_concurrency.processutils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/66196772-8110-4d36-bdfa-d36400059313/disk.config 66196772-8110-4d36-bdfa-d36400059313_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:27:35 np0005542249 nova_compute[254900]: 2025-12-02 11:27:35.927 254904 DEBUG nova.network.neutron [req-b3806d68-9204-4123-b9a4-665e43d1ee1e req-161c9c71-198d-4563-863f-e7f4e755ad35 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Updated VIF entry in instance network info cache for port d395b4fa-3166-499c-b9da-7d7d3574b4e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:27:35 np0005542249 nova_compute[254900]: 2025-12-02 11:27:35.928 254904 DEBUG nova.network.neutron [req-b3806d68-9204-4123-b9a4-665e43d1ee1e req-161c9c71-198d-4563-863f-e7f4e755ad35 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Updating instance_info_cache with network_info: [{"id": "d395b4fa-3166-499c-b9da-7d7d3574b4e3", "address": "fa:16:3e:74:c6:b6", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd395b4fa-31", "ovs_interfaceid": "d395b4fa-3166-499c-b9da-7d7d3574b4e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:27:35 np0005542249 nova_compute[254900]: 2025-12-02 11:27:35.947 254904 DEBUG oslo_concurrency.lockutils [req-b3806d68-9204-4123-b9a4-665e43d1ee1e req-161c9c71-198d-4563-863f-e7f4e755ad35 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-66196772-8110-4d36-bdfa-d36400059313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:27:36 np0005542249 nova_compute[254900]: 2025-12-02 11:27:36.021 254904 DEBUG oslo_concurrency.processutils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/66196772-8110-4d36-bdfa-d36400059313/disk.config 66196772-8110-4d36-bdfa-d36400059313_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.264s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:27:36 np0005542249 nova_compute[254900]: 2025-12-02 11:27:36.022 254904 INFO nova.virt.libvirt.driver [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Deleting local config drive /var/lib/nova/instances/66196772-8110-4d36-bdfa-d36400059313/disk.config because it was imported into RBD.#033[00m
Dec  2 06:27:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:27:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:27:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:27:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:27:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  2 06:27:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:27:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0006935202594907859 of space, bias 1.0, pg target 0.2080560778472358 quantized to 32 (current 32)
Dec  2 06:27:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:27:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00017169491111545225 quantized to 32 (current 32)
Dec  2 06:27:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:27:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Dec  2 06:27:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:27:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:27:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:27:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:27:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:27:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:27:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:27:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:27:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:27:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:27:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:27:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:27:36 np0005542249 kernel: tapd395b4fa-31: entered promiscuous mode
Dec  2 06:27:36 np0005542249 NetworkManager[48987]: <info>  [1764674856.0884] manager: (tapd395b4fa-31): new Tun device (/org/freedesktop/NetworkManager/Devices/98)
Dec  2 06:27:36 np0005542249 ovn_controller[153849]: 2025-12-02T11:27:36Z|00177|binding|INFO|Claiming lport d395b4fa-3166-499c-b9da-7d7d3574b4e3 for this chassis.
Dec  2 06:27:36 np0005542249 ovn_controller[153849]: 2025-12-02T11:27:36Z|00178|binding|INFO|d395b4fa-3166-499c-b9da-7d7d3574b4e3: Claiming fa:16:3e:74:c6:b6 10.100.0.8
Dec  2 06:27:36 np0005542249 nova_compute[254900]: 2025-12-02 11:27:36.091 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.099 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:c6:b6 10.100.0.8'], port_security=['fa:16:3e:74:c6:b6 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '66196772-8110-4d36-bdfa-d36400059313', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '625a6939c31646a4a83ea851774cf28c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e85b987b-ee58-41cc-a0e3-ee2a2605694b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cc823ef0-3e69-4062-a488-a82f483f10bb, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=d395b4fa-3166-499c-b9da-7d7d3574b4e3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.102 163757 INFO neutron.agent.ovn.metadata.agent [-] Port d395b4fa-3166-499c-b9da-7d7d3574b4e3 in datapath acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 bound to our chassis#033[00m
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.104 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754#033[00m
Dec  2 06:27:36 np0005542249 nova_compute[254900]: 2025-12-02 11:27:36.115 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:36 np0005542249 ovn_controller[153849]: 2025-12-02T11:27:36Z|00179|binding|INFO|Setting lport d395b4fa-3166-499c-b9da-7d7d3574b4e3 ovn-installed in OVS
Dec  2 06:27:36 np0005542249 ovn_controller[153849]: 2025-12-02T11:27:36Z|00180|binding|INFO|Setting lport d395b4fa-3166-499c-b9da-7d7d3574b4e3 up in Southbound
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.118 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[a1198f1a-0cca-4ec5-87d3-e19e94a6001f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.119 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapacfaa8ac-01 in ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 06:27:36 np0005542249 nova_compute[254900]: 2025-12-02 11:27:36.119 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:36 np0005542249 nova_compute[254900]: 2025-12-02 11:27:36.121 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.122 262398 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapacfaa8ac-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.122 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[a96e5191-05c7-48e6-ae94-f26d6778abd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.124 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[8a847191-0102-4a0f-8a6a-4d93eed27461]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:36 np0005542249 systemd-machined[216222]: New machine qemu-18-instance-00000012.
Dec  2 06:27:36 np0005542249 systemd[1]: Started Virtual Machine qemu-18-instance-00000012.
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.145 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[7c01ccda-dd95-47fe-aae4-18d121445569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:36 np0005542249 systemd-udevd[284598]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:27:36 np0005542249 NetworkManager[48987]: <info>  [1764674856.1654] device (tapd395b4fa-31): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.164 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[aefc49fb-3c3d-4749-b5b2-58b8e3b1a788]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:36 np0005542249 NetworkManager[48987]: <info>  [1764674856.1678] device (tapd395b4fa-31): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.206 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[fb8e67b3-91dc-49b2-93ec-bb9dc788944d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.218 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[1f6fcbda-7112-473a-8919-bbc20818990a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:36 np0005542249 NetworkManager[48987]: <info>  [1764674856.2197] manager: (tapacfaa8ac-00): new Veth device (/org/freedesktop/NetworkManager/Devices/99)
Dec  2 06:27:36 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1505: 321 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 312 active+clean; 134 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.1 KiB/s wr, 55 op/s
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.257 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[df34b949-5fb7-4b83-b64c-7f6730e9031f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.263 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[5a846066-41f5-48fe-a067-9d5b86fe9c85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:27:36 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/853890294' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:27:36 np0005542249 NetworkManager[48987]: <info>  [1764674856.2990] device (tapacfaa8ac-00): carrier: link connected
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.304 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[70709112-b3f3-43d8-9c9a-7c97b64ba9eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:36 np0005542249 nova_compute[254900]: 2025-12-02 11:27:36.382 254904 DEBUG nova.compute.manager [req-1eef8fee-596b-42fe-880c-92243616f620 req-eaffd172-93f9-45a0-8f5d-f80709575736 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Received event network-vif-plugged-d395b4fa-3166-499c-b9da-7d7d3574b4e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:27:36 np0005542249 nova_compute[254900]: 2025-12-02 11:27:36.383 254904 DEBUG oslo_concurrency.lockutils [req-1eef8fee-596b-42fe-880c-92243616f620 req-eaffd172-93f9-45a0-8f5d-f80709575736 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "66196772-8110-4d36-bdfa-d36400059313-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.381 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[92536936-89cf-4556-a00d-300bbf9f793a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacfaa8ac-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:73:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504911, 'reachable_time': 21222, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284629, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:36 np0005542249 nova_compute[254900]: 2025-12-02 11:27:36.383 254904 DEBUG oslo_concurrency.lockutils [req-1eef8fee-596b-42fe-880c-92243616f620 req-eaffd172-93f9-45a0-8f5d-f80709575736 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "66196772-8110-4d36-bdfa-d36400059313-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:27:36 np0005542249 nova_compute[254900]: 2025-12-02 11:27:36.383 254904 DEBUG oslo_concurrency.lockutils [req-1eef8fee-596b-42fe-880c-92243616f620 req-eaffd172-93f9-45a0-8f5d-f80709575736 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "66196772-8110-4d36-bdfa-d36400059313-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:27:36 np0005542249 nova_compute[254900]: 2025-12-02 11:27:36.384 254904 DEBUG nova.compute.manager [req-1eef8fee-596b-42fe-880c-92243616f620 req-eaffd172-93f9-45a0-8f5d-f80709575736 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Processing event network-vif-plugged-d395b4fa-3166-499c-b9da-7d7d3574b4e3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.400 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[1ae64aa3-74f4-4a2d-967d-de5eaa6b8940]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fece:73a3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504911, 'tstamp': 504911}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284630, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.423 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[4068993d-85b2-4dec-9dbd-28566719d967]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacfaa8ac-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:73:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504911, 'reachable_time': 21222, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284631, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.462 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[afaa5c0c-8245-4b84-a16a-1f9eae7101fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.548 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[ed979eca-c67f-4001-9ed4-f005733b5101]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.550 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacfaa8ac-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.550 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.551 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapacfaa8ac-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:27:36 np0005542249 NetworkManager[48987]: <info>  [1764674856.5544] manager: (tapacfaa8ac-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Dec  2 06:27:36 np0005542249 nova_compute[254900]: 2025-12-02 11:27:36.554 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:36 np0005542249 kernel: tapacfaa8ac-00: entered promiscuous mode
Dec  2 06:27:36 np0005542249 nova_compute[254900]: 2025-12-02 11:27:36.556 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.558 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapacfaa8ac-00, col_values=(('external_ids', {'iface-id': '1636ad30-406d-4138-823e-abbe7f4d87ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:27:36 np0005542249 nova_compute[254900]: 2025-12-02 11:27:36.559 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:36 np0005542249 ovn_controller[153849]: 2025-12-02T11:27:36Z|00181|binding|INFO|Releasing lport 1636ad30-406d-4138-823e-abbe7f4d87ac from this chassis (sb_readonly=0)
Dec  2 06:27:36 np0005542249 nova_compute[254900]: 2025-12-02 11:27:36.591 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.592 163757 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.594 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[b3e4b7e4-902d-4f1a-97ba-2f957a313636]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.595 163757 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: global
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]:    log         /dev/log local0 debug
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]:    log-tag     haproxy-metadata-proxy-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]:    user        root
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]:    group       root
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]:    maxconn     1024
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]:    pidfile     /var/lib/neutron/external/pids/acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754.pid.haproxy
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]:    daemon
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: defaults
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]:    log global
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]:    mode http
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]:    option httplog
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]:    option dontlognull
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]:    option http-server-close
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]:    option forwardfor
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]:    retries                 3
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]:    timeout http-request    30s
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]:    timeout connect         30s
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]:    timeout client          32s
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]:    timeout server          32s
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]:    timeout http-keep-alive 30s
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: listen listener
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]:    bind 169.254.169.254:80
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]:    http-request add-header X-OVN-Network-ID acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 06:27:36 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:27:36.596 163757 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'env', 'PROCESS_TAG=haproxy-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 06:27:37 np0005542249 podman[284663]: 2025-12-02 11:27:37.071699461 +0000 UTC m=+0.109864010 container create c17f785326132e02a3ad4f05020157d0e467d5b4a41cbdbbcfe7dc2eeeee1b07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:27:37 np0005542249 podman[284663]: 2025-12-02 11:27:36.996662473 +0000 UTC m=+0.034827082 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:27:37 np0005542249 systemd[1]: Started libpod-conmon-c17f785326132e02a3ad4f05020157d0e467d5b4a41cbdbbcfe7dc2eeeee1b07.scope.
Dec  2 06:27:37 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:27:37 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17bfce662a46e281654b2d0304fe797236706ae2b860f845b8bd27566cc67495/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 06:27:37 np0005542249 podman[284663]: 2025-12-02 11:27:37.234734296 +0000 UTC m=+0.272898905 container init c17f785326132e02a3ad4f05020157d0e467d5b4a41cbdbbcfe7dc2eeeee1b07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  2 06:27:37 np0005542249 podman[284663]: 2025-12-02 11:27:37.242872096 +0000 UTC m=+0.281036655 container start c17f785326132e02a3ad4f05020157d0e467d5b4a41cbdbbcfe7dc2eeeee1b07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:27:37 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[284679]: [NOTICE]   (284683) : New worker (284685) forked
Dec  2 06:27:37 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[284679]: [NOTICE]   (284683) : Loading success.
Dec  2 06:27:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e370 do_prune osdmap full prune enabled
Dec  2 06:27:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e371 e371: 3 total, 3 up, 3 in
Dec  2 06:27:37 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e371: 3 total, 3 up, 3 in
Dec  2 06:27:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:27:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e371 do_prune osdmap full prune enabled
Dec  2 06:27:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e372 e372: 3 total, 3 up, 3 in
Dec  2 06:27:37 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e372: 3 total, 3 up, 3 in
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.586 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674857.5859766, 66196772-8110-4d36-bdfa-d36400059313 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.588 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 66196772-8110-4d36-bdfa-d36400059313] VM Started (Lifecycle Event)#033[00m
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.592 254904 DEBUG nova.compute.manager [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.598 254904 DEBUG nova.virt.libvirt.driver [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.603 254904 INFO nova.virt.libvirt.driver [-] [instance: 66196772-8110-4d36-bdfa-d36400059313] Instance spawned successfully.#033[00m
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.604 254904 DEBUG nova.virt.libvirt.driver [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.610 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 66196772-8110-4d36-bdfa-d36400059313] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.615 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 66196772-8110-4d36-bdfa-d36400059313] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.630 254904 DEBUG nova.virt.libvirt.driver [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.631 254904 DEBUG nova.virt.libvirt.driver [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.632 254904 DEBUG nova.virt.libvirt.driver [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.632 254904 DEBUG nova.virt.libvirt.driver [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.633 254904 DEBUG nova.virt.libvirt.driver [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.634 254904 DEBUG nova.virt.libvirt.driver [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.640 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 66196772-8110-4d36-bdfa-d36400059313] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.640 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674857.5862484, 66196772-8110-4d36-bdfa-d36400059313 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.640 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 66196772-8110-4d36-bdfa-d36400059313] VM Paused (Lifecycle Event)#033[00m
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.665 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 66196772-8110-4d36-bdfa-d36400059313] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.671 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674857.5968683, 66196772-8110-4d36-bdfa-d36400059313 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.672 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 66196772-8110-4d36-bdfa-d36400059313] VM Resumed (Lifecycle Event)#033[00m
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.696 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 66196772-8110-4d36-bdfa-d36400059313] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.701 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 66196772-8110-4d36-bdfa-d36400059313] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.714 254904 INFO nova.compute.manager [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Took 5.66 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.721 254904 DEBUG nova.compute.manager [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.723 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 66196772-8110-4d36-bdfa-d36400059313] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.799 254904 INFO nova.compute.manager [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Took 8.40 seconds to build instance.#033[00m
Dec  2 06:27:37 np0005542249 nova_compute[254900]: 2025-12-02 11:27:37.817 254904 DEBUG oslo_concurrency.lockutils [None req-160aa00d-8dcf-436b-ba38-b547f00e4076 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "66196772-8110-4d36-bdfa-d36400059313" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:27:38 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1508: 321 pgs: 321 active+clean; 134 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 32 KiB/s wr, 61 op/s
Dec  2 06:27:38 np0005542249 nova_compute[254900]: 2025-12-02 11:27:38.617 254904 DEBUG nova.compute.manager [req-98910b47-589a-4a48-aa22-f5a67f4f6f16 req-09b645cd-d4e3-4865-b23d-dc6f63be74b1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Received event network-vif-plugged-d395b4fa-3166-499c-b9da-7d7d3574b4e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:27:38 np0005542249 nova_compute[254900]: 2025-12-02 11:27:38.619 254904 DEBUG oslo_concurrency.lockutils [req-98910b47-589a-4a48-aa22-f5a67f4f6f16 req-09b645cd-d4e3-4865-b23d-dc6f63be74b1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "66196772-8110-4d36-bdfa-d36400059313-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:27:38 np0005542249 nova_compute[254900]: 2025-12-02 11:27:38.619 254904 DEBUG oslo_concurrency.lockutils [req-98910b47-589a-4a48-aa22-f5a67f4f6f16 req-09b645cd-d4e3-4865-b23d-dc6f63be74b1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "66196772-8110-4d36-bdfa-d36400059313-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:27:38 np0005542249 nova_compute[254900]: 2025-12-02 11:27:38.620 254904 DEBUG oslo_concurrency.lockutils [req-98910b47-589a-4a48-aa22-f5a67f4f6f16 req-09b645cd-d4e3-4865-b23d-dc6f63be74b1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "66196772-8110-4d36-bdfa-d36400059313-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:27:38 np0005542249 nova_compute[254900]: 2025-12-02 11:27:38.620 254904 DEBUG nova.compute.manager [req-98910b47-589a-4a48-aa22-f5a67f4f6f16 req-09b645cd-d4e3-4865-b23d-dc6f63be74b1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] No waiting events found dispatching network-vif-plugged-d395b4fa-3166-499c-b9da-7d7d3574b4e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:27:38 np0005542249 nova_compute[254900]: 2025-12-02 11:27:38.620 254904 WARNING nova.compute.manager [req-98910b47-589a-4a48-aa22-f5a67f4f6f16 req-09b645cd-d4e3-4865-b23d-dc6f63be74b1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Received unexpected event network-vif-plugged-d395b4fa-3166-499c-b9da-7d7d3574b4e3 for instance with vm_state active and task_state None.#033[00m
Dec  2 06:27:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e372 do_prune osdmap full prune enabled
Dec  2 06:27:38 np0005542249 nova_compute[254900]: 2025-12-02 11:27:38.727 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e373 e373: 3 total, 3 up, 3 in
Dec  2 06:27:38 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e373: 3 total, 3 up, 3 in
Dec  2 06:27:39 np0005542249 nova_compute[254900]: 2025-12-02 11:27:39.673 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:40 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1510: 321 pgs: 321 active+clean; 134 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 31 KiB/s wr, 142 op/s
Dec  2 06:27:40 np0005542249 nova_compute[254900]: 2025-12-02 11:27:40.266 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:40 np0005542249 NetworkManager[48987]: <info>  [1764674860.2671] manager: (patch-provnet-236defd8-cd9f-40fb-be9d-c3108d5fe981-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Dec  2 06:27:40 np0005542249 NetworkManager[48987]: <info>  [1764674860.2677] manager: (patch-br-int-to-provnet-236defd8-cd9f-40fb-be9d-c3108d5fe981): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Dec  2 06:27:40 np0005542249 nova_compute[254900]: 2025-12-02 11:27:40.392 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:40 np0005542249 ovn_controller[153849]: 2025-12-02T11:27:40Z|00182|binding|INFO|Releasing lport 1636ad30-406d-4138-823e-abbe7f4d87ac from this chassis (sb_readonly=0)
Dec  2 06:27:40 np0005542249 nova_compute[254900]: 2025-12-02 11:27:40.401 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:40 np0005542249 nova_compute[254900]: 2025-12-02 11:27:40.730 254904 DEBUG nova.compute.manager [req-d9518dd3-d472-4ba6-8b44-33f11a808fd7 req-8daa6590-19ba-468f-bc2c-e158d5ebd2fb 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Received event network-changed-d395b4fa-3166-499c-b9da-7d7d3574b4e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:27:40 np0005542249 nova_compute[254900]: 2025-12-02 11:27:40.732 254904 DEBUG nova.compute.manager [req-d9518dd3-d472-4ba6-8b44-33f11a808fd7 req-8daa6590-19ba-468f-bc2c-e158d5ebd2fb 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Refreshing instance network info cache due to event network-changed-d395b4fa-3166-499c-b9da-7d7d3574b4e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:27:40 np0005542249 nova_compute[254900]: 2025-12-02 11:27:40.732 254904 DEBUG oslo_concurrency.lockutils [req-d9518dd3-d472-4ba6-8b44-33f11a808fd7 req-8daa6590-19ba-468f-bc2c-e158d5ebd2fb 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-66196772-8110-4d36-bdfa-d36400059313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:27:40 np0005542249 nova_compute[254900]: 2025-12-02 11:27:40.732 254904 DEBUG oslo_concurrency.lockutils [req-d9518dd3-d472-4ba6-8b44-33f11a808fd7 req-8daa6590-19ba-468f-bc2c-e158d5ebd2fb 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-66196772-8110-4d36-bdfa-d36400059313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:27:40 np0005542249 nova_compute[254900]: 2025-12-02 11:27:40.732 254904 DEBUG nova.network.neutron [req-d9518dd3-d472-4ba6-8b44-33f11a808fd7 req-8daa6590-19ba-468f-bc2c-e158d5ebd2fb 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Refreshing network info cache for port d395b4fa-3166-499c-b9da-7d7d3574b4e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:27:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e373 do_prune osdmap full prune enabled
Dec  2 06:27:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e374 e374: 3 total, 3 up, 3 in
Dec  2 06:27:40 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e374: 3 total, 3 up, 3 in
Dec  2 06:27:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:27:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/800471515' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:27:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:27:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/800471515' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:27:41 np0005542249 nova_compute[254900]: 2025-12-02 11:27:41.870 254904 DEBUG nova.network.neutron [req-d9518dd3-d472-4ba6-8b44-33f11a808fd7 req-8daa6590-19ba-468f-bc2c-e158d5ebd2fb 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Updated VIF entry in instance network info cache for port d395b4fa-3166-499c-b9da-7d7d3574b4e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:27:41 np0005542249 nova_compute[254900]: 2025-12-02 11:27:41.871 254904 DEBUG nova.network.neutron [req-d9518dd3-d472-4ba6-8b44-33f11a808fd7 req-8daa6590-19ba-468f-bc2c-e158d5ebd2fb 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Updating instance_info_cache with network_info: [{"id": "d395b4fa-3166-499c-b9da-7d7d3574b4e3", "address": "fa:16:3e:74:c6:b6", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd395b4fa-31", "ovs_interfaceid": "d395b4fa-3166-499c-b9da-7d7d3574b4e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:27:41 np0005542249 nova_compute[254900]: 2025-12-02 11:27:41.891 254904 DEBUG oslo_concurrency.lockutils [req-d9518dd3-d472-4ba6-8b44-33f11a808fd7 req-8daa6590-19ba-468f-bc2c-e158d5ebd2fb 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-66196772-8110-4d36-bdfa-d36400059313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:27:42 np0005542249 podman[284737]: 2025-12-02 11:27:42.037072113 +0000 UTC m=+0.105123001 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  2 06:27:42 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1512: 321 pgs: 321 active+clean; 134 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.9 KiB/s wr, 191 op/s
Dec  2 06:27:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:27:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e374 do_prune osdmap full prune enabled
Dec  2 06:27:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e375 e375: 3 total, 3 up, 3 in
Dec  2 06:27:42 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e375: 3 total, 3 up, 3 in
Dec  2 06:27:43 np0005542249 nova_compute[254900]: 2025-12-02 11:27:43.730 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:27:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1275147414' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:27:43 np0005542249 nova_compute[254900]: 2025-12-02 11:27:43.821 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:27:43 np0005542249 nova_compute[254900]: 2025-12-02 11:27:43.841 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:27:43 np0005542249 nova_compute[254900]: 2025-12-02 11:27:43.841 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 06:27:43 np0005542249 nova_compute[254900]: 2025-12-02 11:27:43.841 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 06:27:44 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1514: 321 pgs: 321 active+clean; 134 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.7 KiB/s wr, 190 op/s
Dec  2 06:27:44 np0005542249 nova_compute[254900]: 2025-12-02 11:27:44.420 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "refresh_cache-66196772-8110-4d36-bdfa-d36400059313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:27:44 np0005542249 nova_compute[254900]: 2025-12-02 11:27:44.420 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquired lock "refresh_cache-66196772-8110-4d36-bdfa-d36400059313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:27:44 np0005542249 nova_compute[254900]: 2025-12-02 11:27:44.420 254904 DEBUG nova.network.neutron [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: 66196772-8110-4d36-bdfa-d36400059313] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 06:27:44 np0005542249 nova_compute[254900]: 2025-12-02 11:27:44.421 254904 DEBUG nova.objects.instance [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 66196772-8110-4d36-bdfa-d36400059313 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:27:44 np0005542249 nova_compute[254900]: 2025-12-02 11:27:44.676 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e375 do_prune osdmap full prune enabled
Dec  2 06:27:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e376 e376: 3 total, 3 up, 3 in
Dec  2 06:27:44 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e376: 3 total, 3 up, 3 in
Dec  2 06:27:45 np0005542249 nova_compute[254900]: 2025-12-02 11:27:45.777 254904 DEBUG nova.network.neutron [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: 66196772-8110-4d36-bdfa-d36400059313] Updating instance_info_cache with network_info: [{"id": "d395b4fa-3166-499c-b9da-7d7d3574b4e3", "address": "fa:16:3e:74:c6:b6", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd395b4fa-31", "ovs_interfaceid": "d395b4fa-3166-499c-b9da-7d7d3574b4e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:27:45 np0005542249 nova_compute[254900]: 2025-12-02 11:27:45.803 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Releasing lock "refresh_cache-66196772-8110-4d36-bdfa-d36400059313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:27:45 np0005542249 nova_compute[254900]: 2025-12-02 11:27:45.804 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: 66196772-8110-4d36-bdfa-d36400059313] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 06:27:45 np0005542249 nova_compute[254900]: 2025-12-02 11:27:45.804 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:27:45 np0005542249 nova_compute[254900]: 2025-12-02 11:27:45.805 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:27:45 np0005542249 nova_compute[254900]: 2025-12-02 11:27:45.805 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:27:45 np0005542249 nova_compute[254900]: 2025-12-02 11:27:45.805 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 06:27:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:27:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:27:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:27:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:27:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:27:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:27:46 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 00a5a1a4-cb7b-4e87-be91-a9d4c98c712e does not exist
Dec  2 06:27:46 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 9c72911b-311c-4c60-9a11-79ca6fd2d717 does not exist
Dec  2 06:27:46 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 337e9ecf-897c-4bea-9120-0197c3ba7978 does not exist
Dec  2 06:27:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:27:46 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:27:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:27:46 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:27:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:27:46 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:27:46 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1516: 321 pgs: 321 active+clean; 134 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.5 KiB/s wr, 105 op/s
Dec  2 06:27:46 np0005542249 podman[285026]: 2025-12-02 11:27:46.647929354 +0000 UTC m=+0.053864359 container create 56419b4c5f5d0a9bf3747ee4204c43eeeec80a81c03fd862b8d18376a8a5041b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:27:46 np0005542249 systemd[1]: Started libpod-conmon-56419b4c5f5d0a9bf3747ee4204c43eeeec80a81c03fd862b8d18376a8a5041b.scope.
Dec  2 06:27:46 np0005542249 podman[285026]: 2025-12-02 11:27:46.616034411 +0000 UTC m=+0.021969456 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:27:46 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:27:46 np0005542249 podman[285026]: 2025-12-02 11:27:46.776660319 +0000 UTC m=+0.182595384 container init 56419b4c5f5d0a9bf3747ee4204c43eeeec80a81c03fd862b8d18376a8a5041b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_fermat, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  2 06:27:46 np0005542249 podman[285026]: 2025-12-02 11:27:46.78292906 +0000 UTC m=+0.188864055 container start 56419b4c5f5d0a9bf3747ee4204c43eeeec80a81c03fd862b8d18376a8a5041b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  2 06:27:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:27:46 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1783550980' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:27:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:27:46 np0005542249 podman[285026]: 2025-12-02 11:27:46.786486766 +0000 UTC m=+0.192421761 container attach 56419b4c5f5d0a9bf3747ee4204c43eeeec80a81c03fd862b8d18376a8a5041b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_fermat, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  2 06:27:46 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1783550980' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:27:46 np0005542249 kind_fermat[285042]: 167 167
Dec  2 06:27:46 np0005542249 systemd[1]: libpod-56419b4c5f5d0a9bf3747ee4204c43eeeec80a81c03fd862b8d18376a8a5041b.scope: Deactivated successfully.
Dec  2 06:27:46 np0005542249 conmon[285042]: conmon 56419b4c5f5d0a9bf374 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-56419b4c5f5d0a9bf3747ee4204c43eeeec80a81c03fd862b8d18376a8a5041b.scope/container/memory.events
Dec  2 06:27:46 np0005542249 podman[285026]: 2025-12-02 11:27:46.796134827 +0000 UTC m=+0.202069862 container died 56419b4c5f5d0a9bf3747ee4204c43eeeec80a81c03fd862b8d18376a8a5041b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  2 06:27:46 np0005542249 systemd[1]: var-lib-containers-storage-overlay-5273cf9f0a5fe434bf44641fde959bb902b0df612e3e5aae61bec9004b1fd510-merged.mount: Deactivated successfully.
Dec  2 06:27:46 np0005542249 podman[285026]: 2025-12-02 11:27:46.838221496 +0000 UTC m=+0.244156501 container remove 56419b4c5f5d0a9bf3747ee4204c43eeeec80a81c03fd862b8d18376a8a5041b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_fermat, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:27:46 np0005542249 systemd[1]: libpod-conmon-56419b4c5f5d0a9bf3747ee4204c43eeeec80a81c03fd862b8d18376a8a5041b.scope: Deactivated successfully.
Dec  2 06:27:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e376 do_prune osdmap full prune enabled
Dec  2 06:27:46 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:27:46 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:27:46 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:27:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e377 e377: 3 total, 3 up, 3 in
Dec  2 06:27:46 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e377: 3 total, 3 up, 3 in
Dec  2 06:27:47 np0005542249 podman[285066]: 2025-12-02 11:27:47.043460664 +0000 UTC m=+0.063367377 container create a1b163beb18f61721324099e70a7f406c631871b65c38ba9204205e17af62fd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:27:47 np0005542249 systemd[1]: Started libpod-conmon-a1b163beb18f61721324099e70a7f406c631871b65c38ba9204205e17af62fd9.scope.
Dec  2 06:27:47 np0005542249 podman[285066]: 2025-12-02 11:27:47.012688351 +0000 UTC m=+0.032595144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:27:47 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:27:47 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dbd640785c2d4adb0b5f87ba27bdaed99cffc92f8cc7643d3470a5e404076ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:27:47 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dbd640785c2d4adb0b5f87ba27bdaed99cffc92f8cc7643d3470a5e404076ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:27:47 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dbd640785c2d4adb0b5f87ba27bdaed99cffc92f8cc7643d3470a5e404076ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:27:47 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dbd640785c2d4adb0b5f87ba27bdaed99cffc92f8cc7643d3470a5e404076ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:27:47 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dbd640785c2d4adb0b5f87ba27bdaed99cffc92f8cc7643d3470a5e404076ac/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:27:47 np0005542249 podman[285066]: 2025-12-02 11:27:47.155917399 +0000 UTC m=+0.175824152 container init a1b163beb18f61721324099e70a7f406c631871b65c38ba9204205e17af62fd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shirley, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:27:47 np0005542249 podman[285066]: 2025-12-02 11:27:47.16261105 +0000 UTC m=+0.182517743 container start a1b163beb18f61721324099e70a7f406c631871b65c38ba9204205e17af62fd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shirley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  2 06:27:47 np0005542249 podman[285066]: 2025-12-02 11:27:47.166113454 +0000 UTC m=+0.186020187 container attach a1b163beb18f61721324099e70a7f406c631871b65c38ba9204205e17af62fd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shirley, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  2 06:27:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:27:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3090361320' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:27:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:27:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3090361320' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:27:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:27:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e377 do_prune osdmap full prune enabled
Dec  2 06:27:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e378 e378: 3 total, 3 up, 3 in
Dec  2 06:27:47 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e378: 3 total, 3 up, 3 in
Dec  2 06:27:48 np0005542249 jovial_shirley[285083]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:27:48 np0005542249 jovial_shirley[285083]: --> relative data size: 1.0
Dec  2 06:27:48 np0005542249 jovial_shirley[285083]: --> All data devices are unavailable
Dec  2 06:27:48 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1519: 321 pgs: 321 active+clean; 134 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 595 KiB/s rd, 5.3 KiB/s wr, 148 op/s
Dec  2 06:27:48 np0005542249 systemd[1]: libpod-a1b163beb18f61721324099e70a7f406c631871b65c38ba9204205e17af62fd9.scope: Deactivated successfully.
Dec  2 06:27:48 np0005542249 systemd[1]: libpod-a1b163beb18f61721324099e70a7f406c631871b65c38ba9204205e17af62fd9.scope: Consumed 1.006s CPU time.
Dec  2 06:27:48 np0005542249 conmon[285083]: conmon a1b163beb18f61721324 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a1b163beb18f61721324099e70a7f406c631871b65c38ba9204205e17af62fd9.scope/container/memory.events
Dec  2 06:27:48 np0005542249 podman[285066]: 2025-12-02 11:27:48.245552182 +0000 UTC m=+1.265458905 container died a1b163beb18f61721324099e70a7f406c631871b65c38ba9204205e17af62fd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shirley, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:27:48 np0005542249 systemd[1]: var-lib-containers-storage-overlay-5dbd640785c2d4adb0b5f87ba27bdaed99cffc92f8cc7643d3470a5e404076ac-merged.mount: Deactivated successfully.
Dec  2 06:27:48 np0005542249 podman[285066]: 2025-12-02 11:27:48.31232379 +0000 UTC m=+1.332230493 container remove a1b163beb18f61721324099e70a7f406c631871b65c38ba9204205e17af62fd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shirley, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:27:48 np0005542249 systemd[1]: libpod-conmon-a1b163beb18f61721324099e70a7f406c631871b65c38ba9204205e17af62fd9.scope: Deactivated successfully.
Dec  2 06:27:48 np0005542249 nova_compute[254900]: 2025-12-02 11:27:48.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:27:48 np0005542249 nova_compute[254900]: 2025-12-02 11:27:48.383 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:27:48 np0005542249 nova_compute[254900]: 2025-12-02 11:27:48.732 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e378 do_prune osdmap full prune enabled
Dec  2 06:27:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e379 e379: 3 total, 3 up, 3 in
Dec  2 06:27:48 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e379: 3 total, 3 up, 3 in
Dec  2 06:27:49 np0005542249 podman[285263]: 2025-12-02 11:27:49.059482021 +0000 UTC m=+0.077446099 container create 3105e7f144396f30a1a3493e7e4574f2a44b8fef6faa5b5eabb59bfc35c54b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_heyrovsky, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:27:49 np0005542249 podman[285263]: 2025-12-02 11:27:49.009346713 +0000 UTC m=+0.027310851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:27:49 np0005542249 systemd[1]: Started libpod-conmon-3105e7f144396f30a1a3493e7e4574f2a44b8fef6faa5b5eabb59bfc35c54b47.scope.
Dec  2 06:27:49 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:27:49 np0005542249 podman[285263]: 2025-12-02 11:27:49.161801321 +0000 UTC m=+0.179765419 container init 3105e7f144396f30a1a3493e7e4574f2a44b8fef6faa5b5eabb59bfc35c54b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_heyrovsky, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  2 06:27:49 np0005542249 podman[285263]: 2025-12-02 11:27:49.172467339 +0000 UTC m=+0.190431457 container start 3105e7f144396f30a1a3493e7e4574f2a44b8fef6faa5b5eabb59bfc35c54b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  2 06:27:49 np0005542249 podman[285263]: 2025-12-02 11:27:49.176058547 +0000 UTC m=+0.194022665 container attach 3105e7f144396f30a1a3493e7e4574f2a44b8fef6faa5b5eabb59bfc35c54b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:27:49 np0005542249 upbeat_heyrovsky[285280]: 167 167
Dec  2 06:27:49 np0005542249 systemd[1]: libpod-3105e7f144396f30a1a3493e7e4574f2a44b8fef6faa5b5eabb59bfc35c54b47.scope: Deactivated successfully.
Dec  2 06:27:49 np0005542249 podman[285263]: 2025-12-02 11:27:49.181089834 +0000 UTC m=+0.199053922 container died 3105e7f144396f30a1a3493e7e4574f2a44b8fef6faa5b5eabb59bfc35c54b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_heyrovsky, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:27:49 np0005542249 systemd[1]: var-lib-containers-storage-overlay-064c354124c2cc11d3718d27c90692eafcaacf803a286a986eabbbac94df5e09-merged.mount: Deactivated successfully.
Dec  2 06:27:49 np0005542249 podman[285263]: 2025-12-02 11:27:49.226795441 +0000 UTC m=+0.244759529 container remove 3105e7f144396f30a1a3493e7e4574f2a44b8fef6faa5b5eabb59bfc35c54b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:27:49 np0005542249 systemd[1]: libpod-conmon-3105e7f144396f30a1a3493e7e4574f2a44b8fef6faa5b5eabb59bfc35c54b47.scope: Deactivated successfully.
Dec  2 06:27:49 np0005542249 nova_compute[254900]: 2025-12-02 11:27:49.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:27:49 np0005542249 podman[285304]: 2025-12-02 11:27:49.393854355 +0000 UTC m=+0.044139837 container create 51a2cc71940f67f07d09dfc1afa1537e996e607725a43f7c8bbb13a6a4ee2fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_montalcini, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  2 06:27:49 np0005542249 systemd[1]: Started libpod-conmon-51a2cc71940f67f07d09dfc1afa1537e996e607725a43f7c8bbb13a6a4ee2fe1.scope.
Dec  2 06:27:49 np0005542249 podman[285304]: 2025-12-02 11:27:49.372932788 +0000 UTC m=+0.023218290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:27:49 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:27:49 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e764a1823b9b431afd829dad52a6136c5d2137353ed82250e849ce2717d1fed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:27:49 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e764a1823b9b431afd829dad52a6136c5d2137353ed82250e849ce2717d1fed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:27:49 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e764a1823b9b431afd829dad52a6136c5d2137353ed82250e849ce2717d1fed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:27:49 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e764a1823b9b431afd829dad52a6136c5d2137353ed82250e849ce2717d1fed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:27:49 np0005542249 podman[285304]: 2025-12-02 11:27:49.495956208 +0000 UTC m=+0.146241710 container init 51a2cc71940f67f07d09dfc1afa1537e996e607725a43f7c8bbb13a6a4ee2fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  2 06:27:49 np0005542249 podman[285304]: 2025-12-02 11:27:49.504705976 +0000 UTC m=+0.154991458 container start 51a2cc71940f67f07d09dfc1afa1537e996e607725a43f7c8bbb13a6a4ee2fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  2 06:27:49 np0005542249 podman[285304]: 2025-12-02 11:27:49.510113993 +0000 UTC m=+0.160399495 container attach 51a2cc71940f67f07d09dfc1afa1537e996e607725a43f7c8bbb13a6a4ee2fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec  2 06:27:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:27:49 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/107637853' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:27:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:27:49 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/107637853' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:27:49 np0005542249 nova_compute[254900]: 2025-12-02 11:27:49.678 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:50 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1521: 321 pgs: 321 active+clean; 134 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 124 KiB/s rd, 4.3 KiB/s wr, 142 op/s
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]: {
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:    "0": [
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:        {
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "devices": [
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "/dev/loop3"
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            ],
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "lv_name": "ceph_lv0",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "lv_size": "21470642176",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "name": "ceph_lv0",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "tags": {
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.cluster_name": "ceph",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.crush_device_class": "",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.encrypted": "0",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.osd_id": "0",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.type": "block",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.vdo": "0"
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            },
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "type": "block",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "vg_name": "ceph_vg0"
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:        }
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:    ],
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:    "1": [
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:        {
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "devices": [
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "/dev/loop4"
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            ],
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "lv_name": "ceph_lv1",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "lv_size": "21470642176",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "name": "ceph_lv1",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "tags": {
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.cluster_name": "ceph",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.crush_device_class": "",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.encrypted": "0",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.osd_id": "1",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.type": "block",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.vdo": "0"
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            },
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "type": "block",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "vg_name": "ceph_vg1"
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:        }
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:    ],
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:    "2": [
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:        {
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "devices": [
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "/dev/loop5"
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            ],
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "lv_name": "ceph_lv2",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "lv_size": "21470642176",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "name": "ceph_lv2",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "tags": {
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.cluster_name": "ceph",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.crush_device_class": "",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.encrypted": "0",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.osd_id": "2",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.type": "block",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:                "ceph.vdo": "0"
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            },
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "type": "block",
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:            "vg_name": "ceph_vg2"
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:        }
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]:    ]
Dec  2 06:27:50 np0005542249 clever_montalcini[285322]: }
Dec  2 06:27:50 np0005542249 systemd[1]: libpod-51a2cc71940f67f07d09dfc1afa1537e996e607725a43f7c8bbb13a6a4ee2fe1.scope: Deactivated successfully.
Dec  2 06:27:50 np0005542249 podman[285304]: 2025-12-02 11:27:50.30075939 +0000 UTC m=+0.951044902 container died 51a2cc71940f67f07d09dfc1afa1537e996e607725a43f7c8bbb13a6a4ee2fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_montalcini, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:27:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:27:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/573045498' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:27:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:27:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/573045498' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:27:50 np0005542249 nova_compute[254900]: 2025-12-02 11:27:50.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:27:50 np0005542249 nova_compute[254900]: 2025-12-02 11:27:50.410 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:27:50 np0005542249 nova_compute[254900]: 2025-12-02 11:27:50.410 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:27:50 np0005542249 nova_compute[254900]: 2025-12-02 11:27:50.411 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:27:50 np0005542249 nova_compute[254900]: 2025-12-02 11:27:50.411 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:27:50 np0005542249 nova_compute[254900]: 2025-12-02 11:27:50.411 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:27:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:27:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1231010749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:27:50 np0005542249 nova_compute[254900]: 2025-12-02 11:27:50.852 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:27:50 np0005542249 nova_compute[254900]: 2025-12-02 11:27:50.935 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:27:50 np0005542249 nova_compute[254900]: 2025-12-02 11:27:50.936 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:27:51 np0005542249 systemd[1]: var-lib-containers-storage-overlay-0e764a1823b9b431afd829dad52a6136c5d2137353ed82250e849ce2717d1fed-merged.mount: Deactivated successfully.
Dec  2 06:27:51 np0005542249 podman[285304]: 2025-12-02 11:27:51.102370445 +0000 UTC m=+1.752655927 container remove 51a2cc71940f67f07d09dfc1afa1537e996e607725a43f7c8bbb13a6a4ee2fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  2 06:27:51 np0005542249 systemd[1]: libpod-conmon-51a2cc71940f67f07d09dfc1afa1537e996e607725a43f7c8bbb13a6a4ee2fe1.scope: Deactivated successfully.
Dec  2 06:27:51 np0005542249 nova_compute[254900]: 2025-12-02 11:27:51.175 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:27:51 np0005542249 nova_compute[254900]: 2025-12-02 11:27:51.177 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4139MB free_disk=59.98810958862305GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:27:51 np0005542249 nova_compute[254900]: 2025-12-02 11:27:51.178 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:27:51 np0005542249 nova_compute[254900]: 2025-12-02 11:27:51.178 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:27:51 np0005542249 podman[285332]: 2025-12-02 11:27:51.259420147 +0000 UTC m=+0.930156516 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:27:51 np0005542249 nova_compute[254900]: 2025-12-02 11:27:51.266 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Instance 66196772-8110-4d36-bdfa-d36400059313 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 06:27:51 np0005542249 nova_compute[254900]: 2025-12-02 11:27:51.266 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:27:51 np0005542249 nova_compute[254900]: 2025-12-02 11:27:51.267 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:27:51 np0005542249 nova_compute[254900]: 2025-12-02 11:27:51.301 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:27:51 np0005542249 ovn_controller[153849]: 2025-12-02T11:27:51Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:74:c6:b6 10.100.0.8
Dec  2 06:27:51 np0005542249 ovn_controller[153849]: 2025-12-02T11:27:51Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:74:c6:b6 10.100.0.8
Dec  2 06:27:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:27:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1325406168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:27:51 np0005542249 nova_compute[254900]: 2025-12-02 11:27:51.768 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:27:51 np0005542249 nova_compute[254900]: 2025-12-02 11:27:51.773 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:27:51 np0005542249 nova_compute[254900]: 2025-12-02 11:27:51.796 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:27:51 np0005542249 nova_compute[254900]: 2025-12-02 11:27:51.822 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 06:27:51 np0005542249 nova_compute[254900]: 2025-12-02 11:27:51.822 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:27:51 np0005542249 podman[285554]: 2025-12-02 11:27:51.897794652 +0000 UTC m=+0.050672732 container create b17e7ba216aab1637b3688567a905244b731f2c4767b522aff462e7286c04344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  2 06:27:51 np0005542249 systemd[1]: Started libpod-conmon-b17e7ba216aab1637b3688567a905244b731f2c4767b522aff462e7286c04344.scope.
Dec  2 06:27:51 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:27:51 np0005542249 podman[285554]: 2025-12-02 11:27:51.880612417 +0000 UTC m=+0.033490527 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:27:51 np0005542249 podman[285554]: 2025-12-02 11:27:51.984722916 +0000 UTC m=+0.137601026 container init b17e7ba216aab1637b3688567a905244b731f2c4767b522aff462e7286c04344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:27:51 np0005542249 podman[285554]: 2025-12-02 11:27:51.990629286 +0000 UTC m=+0.143507376 container start b17e7ba216aab1637b3688567a905244b731f2c4767b522aff462e7286c04344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hawking, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:27:51 np0005542249 quizzical_hawking[285571]: 167 167
Dec  2 06:27:51 np0005542249 systemd[1]: libpod-b17e7ba216aab1637b3688567a905244b731f2c4767b522aff462e7286c04344.scope: Deactivated successfully.
Dec  2 06:27:51 np0005542249 podman[285554]: 2025-12-02 11:27:51.997271636 +0000 UTC m=+0.150149736 container attach b17e7ba216aab1637b3688567a905244b731f2c4767b522aff462e7286c04344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:27:51 np0005542249 podman[285554]: 2025-12-02 11:27:51.998457228 +0000 UTC m=+0.151335308 container died b17e7ba216aab1637b3688567a905244b731f2c4767b522aff462e7286c04344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hawking, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:27:52 np0005542249 systemd[1]: var-lib-containers-storage-overlay-5f1e186192a42cd55d714e767a9029e9a7a0663d0ad7903b114d1ff66c07b5d7-merged.mount: Deactivated successfully.
Dec  2 06:27:52 np0005542249 podman[285554]: 2025-12-02 11:27:52.041909245 +0000 UTC m=+0.194787335 container remove b17e7ba216aab1637b3688567a905244b731f2c4767b522aff462e7286c04344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hawking, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:27:52 np0005542249 systemd[1]: libpod-conmon-b17e7ba216aab1637b3688567a905244b731f2c4767b522aff462e7286c04344.scope: Deactivated successfully.
Dec  2 06:27:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e379 do_prune osdmap full prune enabled
Dec  2 06:27:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e380 e380: 3 total, 3 up, 3 in
Dec  2 06:27:52 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e380: 3 total, 3 up, 3 in
Dec  2 06:27:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:27:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/465094452' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:27:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:27:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/465094452' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:27:52 np0005542249 podman[285594]: 2025-12-02 11:27:52.224174259 +0000 UTC m=+0.054699612 container create ecc16c67e5d2ba8fbfc900f255f9c7d6298c9da117dc86fcb7ddb40dc2b13fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mcclintock, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:27:52 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1523: 321 pgs: 321 active+clean; 141 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 317 KiB/s rd, 1.3 MiB/s wr, 236 op/s
Dec  2 06:27:52 np0005542249 systemd[1]: Started libpod-conmon-ecc16c67e5d2ba8fbfc900f255f9c7d6298c9da117dc86fcb7ddb40dc2b13fb3.scope.
Dec  2 06:27:52 np0005542249 podman[285594]: 2025-12-02 11:27:52.202248415 +0000 UTC m=+0.032773808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:27:52 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:27:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2b98a5d71883231cb4a6a020cf3b87c565932c1dd64358a49574e87035f9486/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:27:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2b98a5d71883231cb4a6a020cf3b87c565932c1dd64358a49574e87035f9486/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:27:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2b98a5d71883231cb4a6a020cf3b87c565932c1dd64358a49574e87035f9486/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:27:52 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2b98a5d71883231cb4a6a020cf3b87c565932c1dd64358a49574e87035f9486/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:27:52 np0005542249 podman[285594]: 2025-12-02 11:27:52.334662011 +0000 UTC m=+0.165187364 container init ecc16c67e5d2ba8fbfc900f255f9c7d6298c9da117dc86fcb7ddb40dc2b13fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  2 06:27:52 np0005542249 podman[285594]: 2025-12-02 11:27:52.341330982 +0000 UTC m=+0.171856325 container start ecc16c67e5d2ba8fbfc900f255f9c7d6298c9da117dc86fcb7ddb40dc2b13fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec  2 06:27:52 np0005542249 podman[285594]: 2025-12-02 11:27:52.345284588 +0000 UTC m=+0.175809931 container attach ecc16c67e5d2ba8fbfc900f255f9c7d6298c9da117dc86fcb7ddb40dc2b13fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  2 06:27:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:27:52 np0005542249 nova_compute[254900]: 2025-12-02 11:27:52.824 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:27:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e380 do_prune osdmap full prune enabled
Dec  2 06:27:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e381 e381: 3 total, 3 up, 3 in
Dec  2 06:27:53 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e381: 3 total, 3 up, 3 in
Dec  2 06:27:53 np0005542249 mystifying_mcclintock[285610]: {
Dec  2 06:27:53 np0005542249 mystifying_mcclintock[285610]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:27:53 np0005542249 mystifying_mcclintock[285610]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:27:53 np0005542249 mystifying_mcclintock[285610]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:27:53 np0005542249 mystifying_mcclintock[285610]:        "osd_id": 0,
Dec  2 06:27:53 np0005542249 mystifying_mcclintock[285610]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:27:53 np0005542249 mystifying_mcclintock[285610]:        "type": "bluestore"
Dec  2 06:27:53 np0005542249 mystifying_mcclintock[285610]:    },
Dec  2 06:27:53 np0005542249 mystifying_mcclintock[285610]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:27:53 np0005542249 mystifying_mcclintock[285610]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:27:53 np0005542249 mystifying_mcclintock[285610]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:27:53 np0005542249 mystifying_mcclintock[285610]:        "osd_id": 2,
Dec  2 06:27:53 np0005542249 mystifying_mcclintock[285610]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:27:53 np0005542249 mystifying_mcclintock[285610]:        "type": "bluestore"
Dec  2 06:27:53 np0005542249 mystifying_mcclintock[285610]:    },
Dec  2 06:27:53 np0005542249 mystifying_mcclintock[285610]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:27:53 np0005542249 mystifying_mcclintock[285610]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:27:53 np0005542249 mystifying_mcclintock[285610]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:27:53 np0005542249 mystifying_mcclintock[285610]:        "osd_id": 1,
Dec  2 06:27:53 np0005542249 mystifying_mcclintock[285610]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:27:53 np0005542249 mystifying_mcclintock[285610]:        "type": "bluestore"
Dec  2 06:27:53 np0005542249 mystifying_mcclintock[285610]:    }
Dec  2 06:27:53 np0005542249 mystifying_mcclintock[285610]: }
Dec  2 06:27:53 np0005542249 systemd[1]: libpod-ecc16c67e5d2ba8fbfc900f255f9c7d6298c9da117dc86fcb7ddb40dc2b13fb3.scope: Deactivated successfully.
Dec  2 06:27:53 np0005542249 podman[285594]: 2025-12-02 11:27:53.41492388 +0000 UTC m=+1.245449293 container died ecc16c67e5d2ba8fbfc900f255f9c7d6298c9da117dc86fcb7ddb40dc2b13fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mcclintock, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  2 06:27:53 np0005542249 systemd[1]: libpod-ecc16c67e5d2ba8fbfc900f255f9c7d6298c9da117dc86fcb7ddb40dc2b13fb3.scope: Consumed 1.068s CPU time.
Dec  2 06:27:53 np0005542249 systemd[1]: var-lib-containers-storage-overlay-c2b98a5d71883231cb4a6a020cf3b87c565932c1dd64358a49574e87035f9486-merged.mount: Deactivated successfully.
Dec  2 06:27:53 np0005542249 podman[285594]: 2025-12-02 11:27:53.47731822 +0000 UTC m=+1.307843563 container remove ecc16c67e5d2ba8fbfc900f255f9c7d6298c9da117dc86fcb7ddb40dc2b13fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:27:53 np0005542249 systemd[1]: libpod-conmon-ecc16c67e5d2ba8fbfc900f255f9c7d6298c9da117dc86fcb7ddb40dc2b13fb3.scope: Deactivated successfully.
Dec  2 06:27:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:27:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:27:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:27:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:27:53 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev a50ceb61-1160-412e-b3eb-73d3cf1aaa2b does not exist
Dec  2 06:27:53 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 652c4f10-2319-488b-b314-089f95c7a488 does not exist
Dec  2 06:27:53 np0005542249 podman[285644]: 2025-12-02 11:27:53.570621847 +0000 UTC m=+0.112016464 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  2 06:27:53 np0005542249 nova_compute[254900]: 2025-12-02 11:27:53.774 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:54 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:27:54 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:27:54 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1525: 321 pgs: 321 active+clean; 157 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.7 MiB/s wr, 199 op/s
Dec  2 06:27:54 np0005542249 nova_compute[254900]: 2025-12-02 11:27:54.679 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e381 do_prune osdmap full prune enabled
Dec  2 06:27:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e382 e382: 3 total, 3 up, 3 in
Dec  2 06:27:55 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e382: 3 total, 3 up, 3 in
Dec  2 06:27:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:27:55 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1616562664' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:27:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:27:55 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1616562664' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:27:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e382 do_prune osdmap full prune enabled
Dec  2 06:27:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e383 e383: 3 total, 3 up, 3 in
Dec  2 06:27:56 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e383: 3 total, 3 up, 3 in
Dec  2 06:27:56 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1528: 321 pgs: 321 active+clean; 167 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 795 KiB/s rd, 4.4 MiB/s wr, 224 op/s
Dec  2 06:27:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:27:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:27:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:27:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:27:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:27:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:27:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:27:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2795349308' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:27:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:27:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2795349308' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:27:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:27:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e383 do_prune osdmap full prune enabled
Dec  2 06:27:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e384 e384: 3 total, 3 up, 3 in
Dec  2 06:27:58 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1529: 321 pgs: 321 active+clean; 167 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 629 KiB/s rd, 3.1 MiB/s wr, 256 op/s
Dec  2 06:27:58 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e384: 3 total, 3 up, 3 in
Dec  2 06:27:58 np0005542249 nova_compute[254900]: 2025-12-02 11:27:58.778 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:27:59 np0005542249 nova_compute[254900]: 2025-12-02 11:27:59.682 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:00 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1531: 321 pgs: 321 active+clean; 167 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 526 KiB/s rd, 1.6 MiB/s wr, 194 op/s
Dec  2 06:28:01 np0005542249 nova_compute[254900]: 2025-12-02 11:28:01.737 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:01 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:28:01.737 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:23:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:25:d0:a1:75:ed'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:28:01 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:28:01.739 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 06:28:02 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1532: 321 pgs: 321 active+clean; 167 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 25 KiB/s wr, 90 op/s
Dec  2 06:28:02 np0005542249 nova_compute[254900]: 2025-12-02 11:28:02.315 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:28:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e384 do_prune osdmap full prune enabled
Dec  2 06:28:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e385 e385: 3 total, 3 up, 3 in
Dec  2 06:28:02 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e385: 3 total, 3 up, 3 in
Dec  2 06:28:03 np0005542249 nova_compute[254900]: 2025-12-02 11:28:03.781 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:04 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1534: 321 pgs: 321 active+clean; 167 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 22 KiB/s wr, 79 op/s
Dec  2 06:28:04 np0005542249 nova_compute[254900]: 2025-12-02 11:28:04.571 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:28:04 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/877583994' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:28:04 np0005542249 nova_compute[254900]: 2025-12-02 11:28:04.684 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e385 do_prune osdmap full prune enabled
Dec  2 06:28:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e386 e386: 3 total, 3 up, 3 in
Dec  2 06:28:05 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e386: 3 total, 3 up, 3 in
Dec  2 06:28:06 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1536: 321 pgs: 321 active+clean; 167 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1.9 KiB/s wr, 4 op/s
Dec  2 06:28:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e386 do_prune osdmap full prune enabled
Dec  2 06:28:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e387 e387: 3 total, 3 up, 3 in
Dec  2 06:28:06 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e387: 3 total, 3 up, 3 in
Dec  2 06:28:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:28:08 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1538: 321 pgs: 321 active+clean; 167 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 3.8 KiB/s wr, 11 op/s
Dec  2 06:28:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e387 do_prune osdmap full prune enabled
Dec  2 06:28:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e388 e388: 3 total, 3 up, 3 in
Dec  2 06:28:08 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e388: 3 total, 3 up, 3 in
Dec  2 06:28:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:28:08.742 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:28:08 np0005542249 nova_compute[254900]: 2025-12-02 11:28:08.783 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e388 do_prune osdmap full prune enabled
Dec  2 06:28:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e389 e389: 3 total, 3 up, 3 in
Dec  2 06:28:09 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e389: 3 total, 3 up, 3 in
Dec  2 06:28:09 np0005542249 nova_compute[254900]: 2025-12-02 11:28:09.687 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:28:09 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/317254827' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:28:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:28:09 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/317254827' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:28:09 np0005542249 nova_compute[254900]: 2025-12-02 11:28:09.859 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:10 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1541: 321 pgs: 321 active+clean; 167 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 5.7 KiB/s wr, 52 op/s
Dec  2 06:28:12 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1542: 321 pgs: 321 active+clean; 167 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 7.3 KiB/s wr, 76 op/s
Dec  2 06:28:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:28:13 np0005542249 podman[285725]: 2025-12-02 11:28:13.004175408 +0000 UTC m=+0.085658170 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  2 06:28:13 np0005542249 nova_compute[254900]: 2025-12-02 11:28:13.616 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:13 np0005542249 nova_compute[254900]: 2025-12-02 11:28:13.785 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:14 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1543: 321 pgs: 321 active+clean; 167 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 6.3 KiB/s wr, 61 op/s
Dec  2 06:28:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e389 do_prune osdmap full prune enabled
Dec  2 06:28:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e390 e390: 3 total, 3 up, 3 in
Dec  2 06:28:14 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e390: 3 total, 3 up, 3 in
Dec  2 06:28:14 np0005542249 nova_compute[254900]: 2025-12-02 11:28:14.713 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e390 do_prune osdmap full prune enabled
Dec  2 06:28:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e391 e391: 3 total, 3 up, 3 in
Dec  2 06:28:15 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e391: 3 total, 3 up, 3 in
Dec  2 06:28:16 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1546: 321 pgs: 321 active+clean; 167 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 3.3 KiB/s wr, 53 op/s
Dec  2 06:28:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e391 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:28:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e391 do_prune osdmap full prune enabled
Dec  2 06:28:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e392 e392: 3 total, 3 up, 3 in
Dec  2 06:28:17 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e392: 3 total, 3 up, 3 in
Dec  2 06:28:18 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1548: 321 pgs: 321 active+clean; 167 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 3.0 KiB/s wr, 54 op/s
Dec  2 06:28:18 np0005542249 nova_compute[254900]: 2025-12-02 11:28:18.787 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:19 np0005542249 nova_compute[254900]: 2025-12-02 11:28:19.715 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:28:19.842 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:28:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:28:19.843 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:28:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:28:19.843 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:28:19 np0005542249 nova_compute[254900]: 2025-12-02 11:28:19.906 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:20 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1549: 321 pgs: 321 active+clean; 167 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.3 KiB/s wr, 50 op/s
Dec  2 06:28:22 np0005542249 podman[285746]: 2025-12-02 11:28:22.064936945 +0000 UTC m=+0.142451059 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec  2 06:28:22 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1550: 321 pgs: 321 active+clean; 167 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 4.2 KiB/s wr, 21 op/s
Dec  2 06:28:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:28:23 np0005542249 nova_compute[254900]: 2025-12-02 11:28:23.200 254904 DEBUG oslo_concurrency.lockutils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "fabff5b9-969a-4502-91d0-6adacfa56156" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:28:23 np0005542249 nova_compute[254900]: 2025-12-02 11:28:23.200 254904 DEBUG oslo_concurrency.lockutils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "fabff5b9-969a-4502-91d0-6adacfa56156" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:28:23 np0005542249 nova_compute[254900]: 2025-12-02 11:28:23.224 254904 DEBUG nova.compute.manager [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 06:28:23 np0005542249 nova_compute[254900]: 2025-12-02 11:28:23.358 254904 DEBUG oslo_concurrency.lockutils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:28:23 np0005542249 nova_compute[254900]: 2025-12-02 11:28:23.359 254904 DEBUG oslo_concurrency.lockutils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:28:23 np0005542249 nova_compute[254900]: 2025-12-02 11:28:23.368 254904 DEBUG nova.virt.hardware [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 06:28:23 np0005542249 nova_compute[254900]: 2025-12-02 11:28:23.369 254904 INFO nova.compute.claims [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 06:28:23 np0005542249 nova_compute[254900]: 2025-12-02 11:28:23.713 254904 DEBUG oslo_concurrency.processutils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:28:23 np0005542249 nova_compute[254900]: 2025-12-02 11:28:23.742 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:23 np0005542249 nova_compute[254900]: 2025-12-02 11:28:23.788 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:23 np0005542249 podman[285793]: 2025-12-02 11:28:23.972395352 +0000 UTC m=+0.050489788 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:28:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:28:24 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2353976523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:28:24 np0005542249 nova_compute[254900]: 2025-12-02 11:28:24.153 254904 DEBUG oslo_concurrency.processutils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:28:24 np0005542249 nova_compute[254900]: 2025-12-02 11:28:24.159 254904 DEBUG nova.compute.provider_tree [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:28:24 np0005542249 nova_compute[254900]: 2025-12-02 11:28:24.175 254904 DEBUG nova.scheduler.client.report [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:28:24 np0005542249 nova_compute[254900]: 2025-12-02 11:28:24.197 254904 DEBUG oslo_concurrency.lockutils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:28:24 np0005542249 nova_compute[254900]: 2025-12-02 11:28:24.198 254904 DEBUG nova.compute.manager [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 06:28:24 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1551: 321 pgs: 321 active+clean; 167 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 4.6 KiB/s wr, 25 op/s
Dec  2 06:28:24 np0005542249 nova_compute[254900]: 2025-12-02 11:28:24.257 254904 INFO nova.virt.libvirt.driver [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 06:28:24 np0005542249 nova_compute[254900]: 2025-12-02 11:28:24.261 254904 DEBUG nova.compute.manager [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 06:28:24 np0005542249 nova_compute[254900]: 2025-12-02 11:28:24.261 254904 DEBUG nova.network.neutron [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 06:28:24 np0005542249 nova_compute[254900]: 2025-12-02 11:28:24.294 254904 DEBUG nova.compute.manager [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 06:28:24 np0005542249 nova_compute[254900]: 2025-12-02 11:28:24.347 254904 INFO nova.virt.block_device [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Booting with volume snapshot ad926b12-7ee3-46d1-9bb0-de1d7d21fb3c at /dev/vda#033[00m
Dec  2 06:28:24 np0005542249 nova_compute[254900]: 2025-12-02 11:28:24.497 254904 DEBUG nova.policy [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6ccb73a613554d938221b4bf46d7ae83', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '625a6939c31646a4a83ea851774cf28c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 06:28:24 np0005542249 nova_compute[254900]: 2025-12-02 11:28:24.761 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:25 np0005542249 nova_compute[254900]: 2025-12-02 11:28:25.465 254904 DEBUG nova.network.neutron [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Successfully created port: a1e6fdfb-25f9-43b9-997f-77ee16acf923 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 06:28:26 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1552: 321 pgs: 321 active+clean; 167 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 6.1 KiB/s wr, 22 op/s
Dec  2 06:28:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:28:26
Dec  2 06:28:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:28:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:28:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'vms', '.mgr', 'backups']
Dec  2 06:28:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:28:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:28:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:28:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:28:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:28:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:28:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:28:26 np0005542249 nova_compute[254900]: 2025-12-02 11:28:26.523 254904 DEBUG nova.network.neutron [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Successfully updated port: a1e6fdfb-25f9-43b9-997f-77ee16acf923 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 06:28:26 np0005542249 nova_compute[254900]: 2025-12-02 11:28:26.542 254904 DEBUG oslo_concurrency.lockutils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "refresh_cache-fabff5b9-969a-4502-91d0-6adacfa56156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:28:26 np0005542249 nova_compute[254900]: 2025-12-02 11:28:26.543 254904 DEBUG oslo_concurrency.lockutils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquired lock "refresh_cache-fabff5b9-969a-4502-91d0-6adacfa56156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:28:26 np0005542249 nova_compute[254900]: 2025-12-02 11:28:26.543 254904 DEBUG nova.network.neutron [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 06:28:26 np0005542249 nova_compute[254900]: 2025-12-02 11:28:26.658 254904 DEBUG nova.compute.manager [req-6d2d7894-15b1-4365-a889-01616717f320 req-a26a3db8-8ef6-4213-8ce4-4c071528ee15 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Received event network-changed-a1e6fdfb-25f9-43b9-997f-77ee16acf923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:28:26 np0005542249 nova_compute[254900]: 2025-12-02 11:28:26.659 254904 DEBUG nova.compute.manager [req-6d2d7894-15b1-4365-a889-01616717f320 req-a26a3db8-8ef6-4213-8ce4-4c071528ee15 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Refreshing instance network info cache due to event network-changed-a1e6fdfb-25f9-43b9-997f-77ee16acf923. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:28:26 np0005542249 nova_compute[254900]: 2025-12-02 11:28:26.659 254904 DEBUG oslo_concurrency.lockutils [req-6d2d7894-15b1-4365-a889-01616717f320 req-a26a3db8-8ef6-4213-8ce4-4c071528ee15 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-fabff5b9-969a-4502-91d0-6adacfa56156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:28:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:28:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:28:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:28:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:28:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:28:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:28:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:28:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:28:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:28:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:28:26 np0005542249 nova_compute[254900]: 2025-12-02 11:28:26.757 254904 DEBUG nova.network.neutron [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 06:28:27 np0005542249 nova_compute[254900]: 2025-12-02 11:28:27.556 254904 DEBUG nova.network.neutron [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Updating instance_info_cache with network_info: [{"id": "a1e6fdfb-25f9-43b9-997f-77ee16acf923", "address": "fa:16:3e:d6:07:e1", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1e6fdfb-25", "ovs_interfaceid": "a1e6fdfb-25f9-43b9-997f-77ee16acf923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:28:27 np0005542249 nova_compute[254900]: 2025-12-02 11:28:27.576 254904 DEBUG oslo_concurrency.lockutils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Releasing lock "refresh_cache-fabff5b9-969a-4502-91d0-6adacfa56156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:28:27 np0005542249 nova_compute[254900]: 2025-12-02 11:28:27.576 254904 DEBUG nova.compute.manager [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Instance network_info: |[{"id": "a1e6fdfb-25f9-43b9-997f-77ee16acf923", "address": "fa:16:3e:d6:07:e1", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1e6fdfb-25", "ovs_interfaceid": "a1e6fdfb-25f9-43b9-997f-77ee16acf923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 06:28:27 np0005542249 nova_compute[254900]: 2025-12-02 11:28:27.577 254904 DEBUG oslo_concurrency.lockutils [req-6d2d7894-15b1-4365-a889-01616717f320 req-a26a3db8-8ef6-4213-8ce4-4c071528ee15 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-fabff5b9-969a-4502-91d0-6adacfa56156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:28:27 np0005542249 nova_compute[254900]: 2025-12-02 11:28:27.577 254904 DEBUG nova.network.neutron [req-6d2d7894-15b1-4365-a889-01616717f320 req-a26a3db8-8ef6-4213-8ce4-4c071528ee15 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Refreshing network info cache for port a1e6fdfb-25f9-43b9-997f-77ee16acf923 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:28:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:28:28 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1553: 321 pgs: 321 active+clean; 167 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 6.2 KiB/s wr, 15 op/s
Dec  2 06:28:28 np0005542249 nova_compute[254900]: 2025-12-02 11:28:28.791 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:29 np0005542249 nova_compute[254900]: 2025-12-02 11:28:29.163 254904 DEBUG nova.network.neutron [req-6d2d7894-15b1-4365-a889-01616717f320 req-a26a3db8-8ef6-4213-8ce4-4c071528ee15 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Updated VIF entry in instance network info cache for port a1e6fdfb-25f9-43b9-997f-77ee16acf923. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:28:29 np0005542249 nova_compute[254900]: 2025-12-02 11:28:29.164 254904 DEBUG nova.network.neutron [req-6d2d7894-15b1-4365-a889-01616717f320 req-a26a3db8-8ef6-4213-8ce4-4c071528ee15 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Updating instance_info_cache with network_info: [{"id": "a1e6fdfb-25f9-43b9-997f-77ee16acf923", "address": "fa:16:3e:d6:07:e1", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1e6fdfb-25", "ovs_interfaceid": "a1e6fdfb-25f9-43b9-997f-77ee16acf923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:28:29 np0005542249 nova_compute[254900]: 2025-12-02 11:28:29.372 254904 DEBUG oslo_concurrency.lockutils [req-6d2d7894-15b1-4365-a889-01616717f320 req-a26a3db8-8ef6-4213-8ce4-4c071528ee15 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-fabff5b9-969a-4502-91d0-6adacfa56156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:28:29 np0005542249 nova_compute[254900]: 2025-12-02 11:28:29.383 254904 DEBUG os_brick.utils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:28:29 np0005542249 nova_compute[254900]: 2025-12-02 11:28:29.385 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:28:29 np0005542249 nova_compute[254900]: 2025-12-02 11:28:29.403 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:28:29 np0005542249 nova_compute[254900]: 2025-12-02 11:28:29.404 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[88419f6a-f870-4939-8caf-ce5cd1baafb8]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:28:29 np0005542249 nova_compute[254900]: 2025-12-02 11:28:29.405 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:28:29 np0005542249 nova_compute[254900]: 2025-12-02 11:28:29.420 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:28:29 np0005542249 nova_compute[254900]: 2025-12-02 11:28:29.420 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[409431c6-5831-4e5f-9600-d0a2dea9dc1e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:28:29 np0005542249 nova_compute[254900]: 2025-12-02 11:28:29.423 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:28:29 np0005542249 nova_compute[254900]: 2025-12-02 11:28:29.439 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:28:29 np0005542249 nova_compute[254900]: 2025-12-02 11:28:29.440 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[29c418b6-65bd-4ab8-969d-bdd3fd7fdc87]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:28:29 np0005542249 nova_compute[254900]: 2025-12-02 11:28:29.441 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[c3e9b719-f8cc-437e-b063-c8f17cf04073]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:28:29 np0005542249 nova_compute[254900]: 2025-12-02 11:28:29.442 254904 DEBUG oslo_concurrency.processutils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:28:29 np0005542249 nova_compute[254900]: 2025-12-02 11:28:29.477 254904 DEBUG oslo_concurrency.processutils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "nvme version" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:28:29 np0005542249 nova_compute[254900]: 2025-12-02 11:28:29.481 254904 DEBUG os_brick.initiator.connectors.lightos [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:28:29 np0005542249 nova_compute[254900]: 2025-12-02 11:28:29.482 254904 DEBUG os_brick.initiator.connectors.lightos [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:28:29 np0005542249 nova_compute[254900]: 2025-12-02 11:28:29.482 254904 DEBUG os_brick.initiator.connectors.lightos [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:28:29 np0005542249 nova_compute[254900]: 2025-12-02 11:28:29.483 254904 DEBUG os_brick.utils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] <== get_connector_properties: return (98ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:28:29 np0005542249 nova_compute[254900]: 2025-12-02 11:28:29.484 254904 DEBUG nova.virt.block_device [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Updating existing volume attachment record: 2d6b6d32-af73-42f0-aa96-cee2b164ae8f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:28:29 np0005542249 nova_compute[254900]: 2025-12-02 11:28:29.805 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:30 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:28:30 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1183280219' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:28:30 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1554: 321 pgs: 321 active+clean; 167 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 5.7 KiB/s wr, 16 op/s
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.049 254904 DEBUG nova.compute.manager [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.052 254904 DEBUG nova.virt.libvirt.driver [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.052 254904 INFO nova.virt.libvirt.driver [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Creating image(s)#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.053 254904 DEBUG nova.virt.libvirt.driver [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.054 254904 DEBUG nova.virt.libvirt.driver [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Ensure instance console log exists: /var/lib/nova/instances/fabff5b9-969a-4502-91d0-6adacfa56156/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.055 254904 DEBUG oslo_concurrency.lockutils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.055 254904 DEBUG oslo_concurrency.lockutils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.056 254904 DEBUG oslo_concurrency.lockutils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.061 254904 DEBUG nova.virt.libvirt.driver [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Start _get_guest_xml network_info=[{"id": "a1e6fdfb-25f9-43b9-997f-77ee16acf923", "address": "fa:16:3e:d6:07:e1", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1e6fdfb-25", "ovs_interfaceid": "a1e6fdfb-25f9-43b9-997f-77ee16acf923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-12-02T11:28:14Z,direct_url=<?>,disk_format='qcow2',id=a9e4f7b6-847e-46ce-bfbd-ed73825e90e7,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-1760612029',owner='625a6939c31646a4a83ea851774cf28c',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-12-02T11:28:15Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-871b1127-6cb5-4828-aa75-235b62061769', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '871b1127-6cb5-4828-aa75-235b62061769', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'fabff5b9-969a-4502-91d0-6adacfa56156', 'attached_at': '', 'detached_at': '', 'volume_id': '871b1127-6cb5-4828-aa75-235b62061769', 'serial': '871b1127-6cb5-4828-aa75-235b62061769'}, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'attachment_id': '2d6b6d32-af73-42f0-aa96-cee2b164ae8f', 'delete_on_termination': True, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.067 254904 WARNING nova.virt.libvirt.driver [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.073 254904 DEBUG nova.virt.libvirt.host [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.074 254904 DEBUG nova.virt.libvirt.host [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.078 254904 DEBUG nova.virt.libvirt.host [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.079 254904 DEBUG nova.virt.libvirt.host [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.079 254904 DEBUG nova.virt.libvirt.driver [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.080 254904 DEBUG nova.virt.hardware [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-12-02T11:28:14Z,direct_url=<?>,disk_format='qcow2',id=a9e4f7b6-847e-46ce-bfbd-ed73825e90e7,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-1760612029',owner='625a6939c31646a4a83ea851774cf28c',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-12-02T11:28:15Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.081 254904 DEBUG nova.virt.hardware [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.081 254904 DEBUG nova.virt.hardware [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.082 254904 DEBUG nova.virt.hardware [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.082 254904 DEBUG nova.virt.hardware [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.082 254904 DEBUG nova.virt.hardware [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.083 254904 DEBUG nova.virt.hardware [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.084 254904 DEBUG nova.virt.hardware [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.084 254904 DEBUG nova.virt.hardware [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.085 254904 DEBUG nova.virt.hardware [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.086 254904 DEBUG nova.virt.hardware [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.127 254904 DEBUG nova.storage.rbd_utils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] rbd image fabff5b9-969a-4502-91d0-6adacfa56156_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.134 254904 DEBUG oslo_concurrency.processutils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:28:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:28:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3985886731' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.618 254904 DEBUG oslo_concurrency.processutils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.650 254904 DEBUG nova.virt.libvirt.vif [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:28:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-2136046997',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-2136046997',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-2136046997',id=19,image_ref='a9e4f7b6-847e-46ce-bfbd-ed73825e90e7',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGq4u+zRVpMlSzv0GDh/hXtBiwrSpv2j/V7PltvrfOY2RYVPhY+lN2UU9H4FENMDN9eQ3hIm2t0KuoZAjx/ZJZ14cN3dZpLgEgF269wySXXtGqVNOYY82t2wxmzUz11CuQ==',key_name='tempest-keypair-722203544',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='625a6939c31646a4a83ea851774cf28c',ramdisk_id='',reservation_id='r-vv379f7b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1396850361',image_owner_user_name='tempest-TestVolumeBootPattern-1396850361-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1396850361',owner_user_name='tempest-TestVolumeBootPattern-1396850361-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:28:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ccb73a613554d938221b4bf46d7ae83',uuid=fabff5b9-969a-4502-91d0-6adacfa56156,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building')
 vif={"id": "a1e6fdfb-25f9-43b9-997f-77ee16acf923", "address": "fa:16:3e:d6:07:e1", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1e6fdfb-25", "ovs_interfaceid": "a1e6fdfb-25f9-43b9-997f-77ee16acf923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.650 254904 DEBUG nova.network.os_vif_util [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converting VIF {"id": "a1e6fdfb-25f9-43b9-997f-77ee16acf923", "address": "fa:16:3e:d6:07:e1", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1e6fdfb-25", "ovs_interfaceid": "a1e6fdfb-25f9-43b9-997f-77ee16acf923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.652 254904 DEBUG nova.network.os_vif_util [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:07:e1,bridge_name='br-int',has_traffic_filtering=True,id=a1e6fdfb-25f9-43b9-997f-77ee16acf923,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1e6fdfb-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.654 254904 DEBUG nova.objects.instance [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lazy-loading 'pci_devices' on Instance uuid fabff5b9-969a-4502-91d0-6adacfa56156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.670 254904 DEBUG nova.virt.libvirt.driver [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:28:31 np0005542249 nova_compute[254900]:  <uuid>fabff5b9-969a-4502-91d0-6adacfa56156</uuid>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:  <name>instance-00000013</name>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <nova:name>tempest-TestVolumeBootPattern-image-snapshot-server-2136046997</nova:name>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:28:31</nova:creationTime>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:28:31 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:        <nova:user uuid="6ccb73a613554d938221b4bf46d7ae83">tempest-TestVolumeBootPattern-1396850361-project-member</nova:user>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:        <nova:project uuid="625a6939c31646a4a83ea851774cf28c">tempest-TestVolumeBootPattern-1396850361</nova:project>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <nova:root type="image" uuid="a9e4f7b6-847e-46ce-bfbd-ed73825e90e7"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:        <nova:port uuid="a1e6fdfb-25f9-43b9-997f-77ee16acf923">
Dec  2 06:28:31 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <entry name="serial">fabff5b9-969a-4502-91d0-6adacfa56156</entry>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <entry name="uuid">fabff5b9-969a-4502-91d0-6adacfa56156</entry>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/fabff5b9-969a-4502-91d0-6adacfa56156_disk.config">
Dec  2 06:28:31 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:28:31 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="volumes/volume-871b1127-6cb5-4828-aa75-235b62061769">
Dec  2 06:28:31 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:28:31 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <serial>871b1127-6cb5-4828-aa75-235b62061769</serial>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:d6:07:e1"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <target dev="tapa1e6fdfb-25"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/fabff5b9-969a-4502-91d0-6adacfa56156/console.log" append="off"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <input type="keyboard" bus="usb"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:28:31 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:28:31 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:28:31 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:28:31 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.672 254904 DEBUG nova.compute.manager [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Preparing to wait for external event network-vif-plugged-a1e6fdfb-25f9-43b9-997f-77ee16acf923 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.672 254904 DEBUG oslo_concurrency.lockutils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "fabff5b9-969a-4502-91d0-6adacfa56156-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.672 254904 DEBUG oslo_concurrency.lockutils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "fabff5b9-969a-4502-91d0-6adacfa56156-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.673 254904 DEBUG oslo_concurrency.lockutils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "fabff5b9-969a-4502-91d0-6adacfa56156-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.673 254904 DEBUG nova.virt.libvirt.vif [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:28:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-2136046997',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-2136046997',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-2136046997',id=19,image_ref='a9e4f7b6-847e-46ce-bfbd-ed73825e90e7',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGq4u+zRVpMlSzv0GDh/hXtBiwrSpv2j/V7PltvrfOY2RYVPhY+lN2UU9H4FENMDN9eQ3hIm2t0KuoZAjx/ZJZ14cN3dZpLgEgF269wySXXtGqVNOYY82t2wxmzUz11CuQ==',key_name='tempest-keypair-722203544',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='625a6939c31646a4a83ea851774cf28c',ramdisk_id='',reservation_id='r-vv379f7b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1396850361',image_owner_user_name='tempest-TestVolumeBootPattern-1396850361-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1396850361',owner_user_name='tempest-TestVolumeBootPattern-1396850361-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:28:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ccb73a613554d938221b4bf46d7ae83',uuid=fabff5b9-969a-4502-91d0-6adacfa56156,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='
building') vif={"id": "a1e6fdfb-25f9-43b9-997f-77ee16acf923", "address": "fa:16:3e:d6:07:e1", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1e6fdfb-25", "ovs_interfaceid": "a1e6fdfb-25f9-43b9-997f-77ee16acf923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.674 254904 DEBUG nova.network.os_vif_util [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converting VIF {"id": "a1e6fdfb-25f9-43b9-997f-77ee16acf923", "address": "fa:16:3e:d6:07:e1", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1e6fdfb-25", "ovs_interfaceid": "a1e6fdfb-25f9-43b9-997f-77ee16acf923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.674 254904 DEBUG nova.network.os_vif_util [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:07:e1,bridge_name='br-int',has_traffic_filtering=True,id=a1e6fdfb-25f9-43b9-997f-77ee16acf923,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1e6fdfb-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.675 254904 DEBUG os_vif [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:07:e1,bridge_name='br-int',has_traffic_filtering=True,id=a1e6fdfb-25f9-43b9-997f-77ee16acf923,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1e6fdfb-25') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.676 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.676 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.677 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.681 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.682 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1e6fdfb-25, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.683 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa1e6fdfb-25, col_values=(('external_ids', {'iface-id': 'a1e6fdfb-25f9-43b9-997f-77ee16acf923', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:07:e1', 'vm-uuid': 'fabff5b9-969a-4502-91d0-6adacfa56156'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.685 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:31 np0005542249 NetworkManager[48987]: <info>  [1764674911.6868] manager: (tapa1e6fdfb-25): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.690 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.696 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.697 254904 INFO os_vif [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:07:e1,bridge_name='br-int',has_traffic_filtering=True,id=a1e6fdfb-25f9-43b9-997f-77ee16acf923,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1e6fdfb-25')#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.751 254904 DEBUG nova.virt.libvirt.driver [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.752 254904 DEBUG nova.virt.libvirt.driver [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.752 254904 DEBUG nova.virt.libvirt.driver [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] No VIF found with MAC fa:16:3e:d6:07:e1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.752 254904 INFO nova.virt.libvirt.driver [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Using config drive#033[00m
Dec  2 06:28:31 np0005542249 nova_compute[254900]: 2025-12-02 11:28:31.777 254904 DEBUG nova.storage.rbd_utils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] rbd image fabff5b9-969a-4502-91d0-6adacfa56156_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:28:32 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1555: 321 pgs: 321 active+clean; 167 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 5.6 KiB/s wr, 17 op/s
Dec  2 06:28:32 np0005542249 nova_compute[254900]: 2025-12-02 11:28:32.520 254904 INFO nova.virt.libvirt.driver [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Creating config drive at /var/lib/nova/instances/fabff5b9-969a-4502-91d0-6adacfa56156/disk.config#033[00m
Dec  2 06:28:32 np0005542249 nova_compute[254900]: 2025-12-02 11:28:32.531 254904 DEBUG oslo_concurrency.processutils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fabff5b9-969a-4502-91d0-6adacfa56156/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm96b9njo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:28:32 np0005542249 nova_compute[254900]: 2025-12-02 11:28:32.678 254904 DEBUG oslo_concurrency.processutils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fabff5b9-969a-4502-91d0-6adacfa56156/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm96b9njo" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:28:32 np0005542249 nova_compute[254900]: 2025-12-02 11:28:32.709 254904 DEBUG nova.storage.rbd_utils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] rbd image fabff5b9-969a-4502-91d0-6adacfa56156_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:28:32 np0005542249 nova_compute[254900]: 2025-12-02 11:28:32.712 254904 DEBUG oslo_concurrency.processutils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fabff5b9-969a-4502-91d0-6adacfa56156/disk.config fabff5b9-969a-4502-91d0-6adacfa56156_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:28:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:28:32 np0005542249 nova_compute[254900]: 2025-12-02 11:28:32.893 254904 DEBUG oslo_concurrency.processutils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fabff5b9-969a-4502-91d0-6adacfa56156/disk.config fabff5b9-969a-4502-91d0-6adacfa56156_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:28:32 np0005542249 nova_compute[254900]: 2025-12-02 11:28:32.894 254904 INFO nova.virt.libvirt.driver [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Deleting local config drive /var/lib/nova/instances/fabff5b9-969a-4502-91d0-6adacfa56156/disk.config because it was imported into RBD.#033[00m
Dec  2 06:28:32 np0005542249 kernel: tapa1e6fdfb-25: entered promiscuous mode
Dec  2 06:28:32 np0005542249 NetworkManager[48987]: <info>  [1764674912.9710] manager: (tapa1e6fdfb-25): new Tun device (/org/freedesktop/NetworkManager/Devices/104)
Dec  2 06:28:32 np0005542249 ovn_controller[153849]: 2025-12-02T11:28:32Z|00183|binding|INFO|Claiming lport a1e6fdfb-25f9-43b9-997f-77ee16acf923 for this chassis.
Dec  2 06:28:32 np0005542249 nova_compute[254900]: 2025-12-02 11:28:32.972 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:32 np0005542249 ovn_controller[153849]: 2025-12-02T11:28:32Z|00184|binding|INFO|a1e6fdfb-25f9-43b9-997f-77ee16acf923: Claiming fa:16:3e:d6:07:e1 10.100.0.11
Dec  2 06:28:32 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:28:32.986 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:07:e1 10.100.0.11'], port_security=['fa:16:3e:d6:07:e1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'fabff5b9-969a-4502-91d0-6adacfa56156', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '625a6939c31646a4a83ea851774cf28c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a2145909-363d-45af-b384-4ef6e931b803', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cc823ef0-3e69-4062-a488-a82f483f10bb, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=a1e6fdfb-25f9-43b9-997f-77ee16acf923) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:28:32 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:28:32.989 163757 INFO neutron.agent.ovn.metadata.agent [-] Port a1e6fdfb-25f9-43b9-997f-77ee16acf923 in datapath acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 bound to our chassis#033[00m
Dec  2 06:28:32 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:28:32.992 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754#033[00m
Dec  2 06:28:33 np0005542249 ovn_controller[153849]: 2025-12-02T11:28:33Z|00185|binding|INFO|Setting lport a1e6fdfb-25f9-43b9-997f-77ee16acf923 ovn-installed in OVS
Dec  2 06:28:33 np0005542249 ovn_controller[153849]: 2025-12-02T11:28:33Z|00186|binding|INFO|Setting lport a1e6fdfb-25f9-43b9-997f-77ee16acf923 up in Southbound
Dec  2 06:28:33 np0005542249 nova_compute[254900]: 2025-12-02 11:28:33.017 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:33 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:28:33.016 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[a8a1471d-9ab5-4306-ab2f-75ce9eb6926d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:28:33 np0005542249 systemd-machined[216222]: New machine qemu-19-instance-00000013.
Dec  2 06:28:33 np0005542249 nova_compute[254900]: 2025-12-02 11:28:33.026 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:33 np0005542249 systemd[1]: Started Virtual Machine qemu-19-instance-00000013.
Dec  2 06:28:33 np0005542249 systemd-udevd[285942]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:28:33 np0005542249 NetworkManager[48987]: <info>  [1764674913.0589] device (tapa1e6fdfb-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:28:33 np0005542249 NetworkManager[48987]: <info>  [1764674913.0602] device (tapa1e6fdfb-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:28:33 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:28:33.060 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[19990439-0995-4156-aa4b-47b1ad4370b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:28:33 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:28:33.063 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[0e73f322-40c0-428b-aba1-81bcd0cda861]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:28:33 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:28:33.092 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[885ac425-d0f7-40f9-99fa-4b866e2a69f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:28:33 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:28:33.120 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[f3a97065-38e6-4967-b66c-8f77a1cc375a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacfaa8ac-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:73:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504911, 'reachable_time': 21222, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285952, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:28:33 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:28:33.141 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[28a45061-3c42-42da-9f08-14a905f9b603]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapacfaa8ac-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504932, 'tstamp': 504932}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285954, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapacfaa8ac-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504936, 'tstamp': 504936}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285954, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:28:33 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:28:33.145 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacfaa8ac-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:28:33 np0005542249 nova_compute[254900]: 2025-12-02 11:28:33.147 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:33 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:28:33.150 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapacfaa8ac-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:28:33 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:28:33.150 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:28:33 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:28:33.152 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapacfaa8ac-00, col_values=(('external_ids', {'iface-id': '1636ad30-406d-4138-823e-abbe7f4d87ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:28:33 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:28:33.152 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:28:33 np0005542249 nova_compute[254900]: 2025-12-02 11:28:33.366 254904 DEBUG nova.compute.manager [req-6f1f2f88-b371-4767-b240-50a0ecf44a73 req-7ce0d399-63a3-44d8-b389-c8d99eb19b81 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Received event network-vif-plugged-a1e6fdfb-25f9-43b9-997f-77ee16acf923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:28:33 np0005542249 nova_compute[254900]: 2025-12-02 11:28:33.366 254904 DEBUG oslo_concurrency.lockutils [req-6f1f2f88-b371-4767-b240-50a0ecf44a73 req-7ce0d399-63a3-44d8-b389-c8d99eb19b81 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "fabff5b9-969a-4502-91d0-6adacfa56156-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:28:33 np0005542249 nova_compute[254900]: 2025-12-02 11:28:33.366 254904 DEBUG oslo_concurrency.lockutils [req-6f1f2f88-b371-4767-b240-50a0ecf44a73 req-7ce0d399-63a3-44d8-b389-c8d99eb19b81 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "fabff5b9-969a-4502-91d0-6adacfa56156-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:28:33 np0005542249 nova_compute[254900]: 2025-12-02 11:28:33.367 254904 DEBUG oslo_concurrency.lockutils [req-6f1f2f88-b371-4767-b240-50a0ecf44a73 req-7ce0d399-63a3-44d8-b389-c8d99eb19b81 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "fabff5b9-969a-4502-91d0-6adacfa56156-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:28:33 np0005542249 nova_compute[254900]: 2025-12-02 11:28:33.367 254904 DEBUG nova.compute.manager [req-6f1f2f88-b371-4767-b240-50a0ecf44a73 req-7ce0d399-63a3-44d8-b389-c8d99eb19b81 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Processing event network-vif-plugged-a1e6fdfb-25f9-43b9-997f-77ee16acf923 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 06:28:33 np0005542249 nova_compute[254900]: 2025-12-02 11:28:33.795 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:34 np0005542249 nova_compute[254900]: 2025-12-02 11:28:34.157 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674914.1567552, fabff5b9-969a-4502-91d0-6adacfa56156 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:28:34 np0005542249 nova_compute[254900]: 2025-12-02 11:28:34.157 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] VM Started (Lifecycle Event)#033[00m
Dec  2 06:28:34 np0005542249 nova_compute[254900]: 2025-12-02 11:28:34.160 254904 DEBUG nova.compute.manager [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 06:28:34 np0005542249 nova_compute[254900]: 2025-12-02 11:28:34.163 254904 DEBUG nova.virt.libvirt.driver [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 06:28:34 np0005542249 nova_compute[254900]: 2025-12-02 11:28:34.166 254904 INFO nova.virt.libvirt.driver [-] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Instance spawned successfully.#033[00m
Dec  2 06:28:34 np0005542249 nova_compute[254900]: 2025-12-02 11:28:34.166 254904 INFO nova.compute.manager [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Took 3.12 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 06:28:34 np0005542249 nova_compute[254900]: 2025-12-02 11:28:34.167 254904 DEBUG nova.compute.manager [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:28:34 np0005542249 nova_compute[254900]: 2025-12-02 11:28:34.178 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:28:34 np0005542249 nova_compute[254900]: 2025-12-02 11:28:34.180 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:28:34 np0005542249 nova_compute[254900]: 2025-12-02 11:28:34.208 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:28:34 np0005542249 nova_compute[254900]: 2025-12-02 11:28:34.208 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674914.15689, fabff5b9-969a-4502-91d0-6adacfa56156 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:28:34 np0005542249 nova_compute[254900]: 2025-12-02 11:28:34.208 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] VM Paused (Lifecycle Event)#033[00m
Dec  2 06:28:34 np0005542249 nova_compute[254900]: 2025-12-02 11:28:34.229 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:28:34 np0005542249 nova_compute[254900]: 2025-12-02 11:28:34.232 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674914.1629505, fabff5b9-969a-4502-91d0-6adacfa56156 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:28:34 np0005542249 nova_compute[254900]: 2025-12-02 11:28:34.232 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] VM Resumed (Lifecycle Event)#033[00m
Dec  2 06:28:34 np0005542249 nova_compute[254900]: 2025-12-02 11:28:34.252 254904 INFO nova.compute.manager [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Took 10.96 seconds to build instance.#033[00m
Dec  2 06:28:34 np0005542249 nova_compute[254900]: 2025-12-02 11:28:34.253 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:28:34 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1556: 321 pgs: 321 active+clean; 167 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 3.9 KiB/s wr, 17 op/s
Dec  2 06:28:34 np0005542249 nova_compute[254900]: 2025-12-02 11:28:34.256 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:28:34 np0005542249 nova_compute[254900]: 2025-12-02 11:28:34.280 254904 DEBUG oslo_concurrency.lockutils [None req-03e08769-bdc4-45b2-856b-544fb1e8e321 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "fabff5b9-969a-4502-91d0-6adacfa56156" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:28:35 np0005542249 nova_compute[254900]: 2025-12-02 11:28:35.513 254904 DEBUG nova.compute.manager [req-1e28415b-075e-4b62-8021-a6c7c45c1878 req-ac2810d9-8b63-45bc-98c0-fa6a356c26f5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Received event network-vif-plugged-a1e6fdfb-25f9-43b9-997f-77ee16acf923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:28:35 np0005542249 nova_compute[254900]: 2025-12-02 11:28:35.514 254904 DEBUG oslo_concurrency.lockutils [req-1e28415b-075e-4b62-8021-a6c7c45c1878 req-ac2810d9-8b63-45bc-98c0-fa6a356c26f5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "fabff5b9-969a-4502-91d0-6adacfa56156-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:28:35 np0005542249 nova_compute[254900]: 2025-12-02 11:28:35.514 254904 DEBUG oslo_concurrency.lockutils [req-1e28415b-075e-4b62-8021-a6c7c45c1878 req-ac2810d9-8b63-45bc-98c0-fa6a356c26f5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "fabff5b9-969a-4502-91d0-6adacfa56156-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:28:35 np0005542249 nova_compute[254900]: 2025-12-02 11:28:35.514 254904 DEBUG oslo_concurrency.lockutils [req-1e28415b-075e-4b62-8021-a6c7c45c1878 req-ac2810d9-8b63-45bc-98c0-fa6a356c26f5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "fabff5b9-969a-4502-91d0-6adacfa56156-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:28:35 np0005542249 nova_compute[254900]: 2025-12-02 11:28:35.515 254904 DEBUG nova.compute.manager [req-1e28415b-075e-4b62-8021-a6c7c45c1878 req-ac2810d9-8b63-45bc-98c0-fa6a356c26f5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] No waiting events found dispatching network-vif-plugged-a1e6fdfb-25f9-43b9-997f-77ee16acf923 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:28:35 np0005542249 nova_compute[254900]: 2025-12-02 11:28:35.515 254904 WARNING nova.compute.manager [req-1e28415b-075e-4b62-8021-a6c7c45c1878 req-ac2810d9-8b63-45bc-98c0-fa6a356c26f5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Received unexpected event network-vif-plugged-a1e6fdfb-25f9-43b9-997f-77ee16acf923 for instance with vm_state active and task_state None.#033[00m
Dec  2 06:28:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:28:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:28:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:28:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:28:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.8615818519242038e-06 of space, bias 1.0, pg target 0.0008584745555772611 quantized to 32 (current 32)
Dec  2 06:28:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:28:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0011052700926287686 of space, bias 1.0, pg target 0.3315810277886306 quantized to 32 (current 32)
Dec  2 06:28:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:28:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  2 06:28:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:28:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006663670272514163 of space, bias 1.0, pg target 0.19991010817542487 quantized to 32 (current 32)
Dec  2 06:28:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:28:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:28:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:28:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:28:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:28:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:28:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:28:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:28:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:28:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:28:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:28:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:28:36 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1557: 321 pgs: 321 active+clean; 167 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 17 KiB/s wr, 20 op/s
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:28:36.321059) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674916321109, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2785, "num_deletes": 538, "total_data_size": 3463495, "memory_usage": 3539128, "flush_reason": "Manual Compaction"}
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674916348405, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3401116, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29026, "largest_seqno": 31810, "table_properties": {"data_size": 3388612, "index_size": 7846, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3589, "raw_key_size": 30044, "raw_average_key_size": 20, "raw_value_size": 3361324, "raw_average_value_size": 2345, "num_data_blocks": 338, "num_entries": 1433, "num_filter_entries": 1433, "num_deletions": 538, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764674767, "oldest_key_time": 1764674767, "file_creation_time": 1764674916, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 27420 microseconds, and 14234 cpu microseconds.
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:28:36.348477) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3401116 bytes OK
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:28:36.348508) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:28:36.352098) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:28:36.352194) EVENT_LOG_v1 {"time_micros": 1764674916352153, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:28:36.352230) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3450314, prev total WAL file size 3450314, number of live WAL files 2.
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:28:36.353435) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3321KB)], [62(8906KB)]
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674916353499, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 12521386, "oldest_snapshot_seqno": -1}
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6105 keys, 10643925 bytes, temperature: kUnknown
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674916433592, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10643925, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10596614, "index_size": 30945, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15301, "raw_key_size": 153844, "raw_average_key_size": 25, "raw_value_size": 10480330, "raw_average_value_size": 1716, "num_data_blocks": 1246, "num_entries": 6105, "num_filter_entries": 6105, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672515, "oldest_key_time": 0, "file_creation_time": 1764674916, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:28:36.433940) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10643925 bytes
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:28:36.435744) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.1 rd, 132.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.7 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(6.8) write-amplify(3.1) OK, records in: 7170, records dropped: 1065 output_compression: NoCompression
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:28:36.435777) EVENT_LOG_v1 {"time_micros": 1764674916435762, "job": 34, "event": "compaction_finished", "compaction_time_micros": 80198, "compaction_time_cpu_micros": 46904, "output_level": 6, "num_output_files": 1, "total_output_size": 10643925, "num_input_records": 7170, "num_output_records": 6105, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674916437101, "job": 34, "event": "table_file_deletion", "file_number": 64}
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764674916439801, "job": 34, "event": "table_file_deletion", "file_number": 62}
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:28:36.353294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:28:36.439960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:28:36.439968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:28:36.439970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:28:36.439972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:28:36 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:28:36.439974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:28:36 np0005542249 nova_compute[254900]: 2025-12-02 11:28:36.686 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:37 np0005542249 nova_compute[254900]: 2025-12-02 11:28:37.687 254904 DEBUG nova.compute.manager [req-f6fa2f9f-066d-4ec7-9f39-181a11c11733 req-3f2ef326-5565-448e-8408-1d51b20747b7 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Received event network-changed-a1e6fdfb-25f9-43b9-997f-77ee16acf923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:28:37 np0005542249 nova_compute[254900]: 2025-12-02 11:28:37.687 254904 DEBUG nova.compute.manager [req-f6fa2f9f-066d-4ec7-9f39-181a11c11733 req-3f2ef326-5565-448e-8408-1d51b20747b7 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Refreshing instance network info cache due to event network-changed-a1e6fdfb-25f9-43b9-997f-77ee16acf923. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:28:37 np0005542249 nova_compute[254900]: 2025-12-02 11:28:37.687 254904 DEBUG oslo_concurrency.lockutils [req-f6fa2f9f-066d-4ec7-9f39-181a11c11733 req-3f2ef326-5565-448e-8408-1d51b20747b7 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-fabff5b9-969a-4502-91d0-6adacfa56156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:28:37 np0005542249 nova_compute[254900]: 2025-12-02 11:28:37.688 254904 DEBUG oslo_concurrency.lockutils [req-f6fa2f9f-066d-4ec7-9f39-181a11c11733 req-3f2ef326-5565-448e-8408-1d51b20747b7 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-fabff5b9-969a-4502-91d0-6adacfa56156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:28:37 np0005542249 nova_compute[254900]: 2025-12-02 11:28:37.688 254904 DEBUG nova.network.neutron [req-f6fa2f9f-066d-4ec7-9f39-181a11c11733 req-3f2ef326-5565-448e-8408-1d51b20747b7 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Refreshing network info cache for port a1e6fdfb-25f9-43b9-997f-77ee16acf923 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:28:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:28:38 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1558: 321 pgs: 321 active+clean; 167 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 16 KiB/s wr, 53 op/s
Dec  2 06:28:38 np0005542249 nova_compute[254900]: 2025-12-02 11:28:38.818 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:39 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  2 06:28:39 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 6907 writes, 31K keys, 6907 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 6907 writes, 6907 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2072 writes, 9891 keys, 2072 commit groups, 1.0 writes per commit group, ingest: 12.14 MB, 0.02 MB/s#012Interval WAL: 2072 writes, 2072 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    119.1      0.30              0.13        17    0.018       0      0       0.0       0.0#012  L6      1/0   10.15 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5    156.6    129.8      0.96              0.45        16    0.060     79K   9428       0.0       0.0#012 Sum      1/0   10.15 MB   0.0      0.1     0.0      0.1       0.2      0.0       0.0   4.5    119.2    127.2      1.27              0.59        33    0.038     79K   9428       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   4.7    120.5    126.5      0.48              0.23        10    0.048     31K   3726       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    156.6    129.8      0.96              0.45        16    0.060     79K   9428       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    120.0      0.30              0.13        16    0.019       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.035, interval 0.013#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.16 GB write, 0.07 MB/s write, 0.15 GB read, 0.06 MB/s read, 1.3 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560e2b4e71f0#2 capacity: 304.00 MB usage: 17.41 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000217 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1173,16.75 MB,5.51006%) FilterBlock(34,231.67 KB,0.0744217%) IndexBlock(34,445.36 KB,0.143066%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  2 06:28:39 np0005542249 nova_compute[254900]: 2025-12-02 11:28:39.597 254904 DEBUG nova.network.neutron [req-f6fa2f9f-066d-4ec7-9f39-181a11c11733 req-3f2ef326-5565-448e-8408-1d51b20747b7 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Updated VIF entry in instance network info cache for port a1e6fdfb-25f9-43b9-997f-77ee16acf923. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:28:39 np0005542249 nova_compute[254900]: 2025-12-02 11:28:39.597 254904 DEBUG nova.network.neutron [req-f6fa2f9f-066d-4ec7-9f39-181a11c11733 req-3f2ef326-5565-448e-8408-1d51b20747b7 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Updating instance_info_cache with network_info: [{"id": "a1e6fdfb-25f9-43b9-997f-77ee16acf923", "address": "fa:16:3e:d6:07:e1", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1e6fdfb-25", "ovs_interfaceid": "a1e6fdfb-25f9-43b9-997f-77ee16acf923", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:28:39 np0005542249 nova_compute[254900]: 2025-12-02 11:28:39.621 254904 DEBUG oslo_concurrency.lockutils [req-f6fa2f9f-066d-4ec7-9f39-181a11c11733 req-3f2ef326-5565-448e-8408-1d51b20747b7 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-fabff5b9-969a-4502-91d0-6adacfa56156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:28:40 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1559: 321 pgs: 321 active+clean; 167 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 24 KiB/s wr, 89 op/s
Dec  2 06:28:41 np0005542249 nova_compute[254900]: 2025-12-02 11:28:41.688 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:42 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1560: 321 pgs: 321 active+clean; 167 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 24 KiB/s wr, 88 op/s
Dec  2 06:28:42 np0005542249 nova_compute[254900]: 2025-12-02 11:28:42.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:28:42 np0005542249 nova_compute[254900]: 2025-12-02 11:28:42.383 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 06:28:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:28:43 np0005542249 nova_compute[254900]: 2025-12-02 11:28:43.383 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:28:43 np0005542249 nova_compute[254900]: 2025-12-02 11:28:43.385 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 06:28:43 np0005542249 nova_compute[254900]: 2025-12-02 11:28:43.385 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 06:28:43 np0005542249 nova_compute[254900]: 2025-12-02 11:28:43.760 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "refresh_cache-66196772-8110-4d36-bdfa-d36400059313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:28:43 np0005542249 nova_compute[254900]: 2025-12-02 11:28:43.761 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquired lock "refresh_cache-66196772-8110-4d36-bdfa-d36400059313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:28:43 np0005542249 nova_compute[254900]: 2025-12-02 11:28:43.761 254904 DEBUG nova.network.neutron [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: 66196772-8110-4d36-bdfa-d36400059313] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 06:28:43 np0005542249 nova_compute[254900]: 2025-12-02 11:28:43.762 254904 DEBUG nova.objects.instance [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 66196772-8110-4d36-bdfa-d36400059313 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:28:43 np0005542249 nova_compute[254900]: 2025-12-02 11:28:43.819 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:44 np0005542249 podman[285997]: 2025-12-02 11:28:44.013918037 +0000 UTC m=+0.073095540 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  2 06:28:44 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1561: 321 pgs: 321 active+clean; 168 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 45 KiB/s wr, 89 op/s
Dec  2 06:28:46 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1562: 321 pgs: 321 active+clean; 168 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 45 KiB/s wr, 89 op/s
Dec  2 06:28:46 np0005542249 nova_compute[254900]: 2025-12-02 11:28:46.690 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:47 np0005542249 nova_compute[254900]: 2025-12-02 11:28:47.596 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:47 np0005542249 nova_compute[254900]: 2025-12-02 11:28:47.703 254904 DEBUG nova.network.neutron [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: 66196772-8110-4d36-bdfa-d36400059313] Updating instance_info_cache with network_info: [{"id": "d395b4fa-3166-499c-b9da-7d7d3574b4e3", "address": "fa:16:3e:74:c6:b6", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd395b4fa-31", "ovs_interfaceid": "d395b4fa-3166-499c-b9da-7d7d3574b4e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:28:47 np0005542249 nova_compute[254900]: 2025-12-02 11:28:47.722 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Releasing lock "refresh_cache-66196772-8110-4d36-bdfa-d36400059313" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:28:47 np0005542249 nova_compute[254900]: 2025-12-02 11:28:47.723 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: 66196772-8110-4d36-bdfa-d36400059313] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 06:28:47 np0005542249 nova_compute[254900]: 2025-12-02 11:28:47.723 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:28:47 np0005542249 nova_compute[254900]: 2025-12-02 11:28:47.723 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:28:47 np0005542249 ovn_controller[153849]: 2025-12-02T11:28:47Z|00032|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.8 does not match offer 10.100.0.11
Dec  2 06:28:47 np0005542249 ovn_controller[153849]: 2025-12-02T11:28:47Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:d6:07:e1 10.100.0.11
Dec  2 06:28:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:28:48 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1563: 321 pgs: 321 active+clean; 178 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 267 KiB/s wr, 103 op/s
Dec  2 06:28:48 np0005542249 nova_compute[254900]: 2025-12-02 11:28:48.820 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:50 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1564: 321 pgs: 321 active+clean; 182 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 526 KiB/s wr, 97 op/s
Dec  2 06:28:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:28:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1179784139' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:28:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:28:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1179784139' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:28:50 np0005542249 nova_compute[254900]: 2025-12-02 11:28:50.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:28:50 np0005542249 nova_compute[254900]: 2025-12-02 11:28:50.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:28:50 np0005542249 nova_compute[254900]: 2025-12-02 11:28:50.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:28:50 np0005542249 nova_compute[254900]: 2025-12-02 11:28:50.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:28:50 np0005542249 nova_compute[254900]: 2025-12-02 11:28:50.424 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:28:50 np0005542249 nova_compute[254900]: 2025-12-02 11:28:50.424 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:28:50 np0005542249 nova_compute[254900]: 2025-12-02 11:28:50.424 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:28:50 np0005542249 nova_compute[254900]: 2025-12-02 11:28:50.425 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:28:50 np0005542249 nova_compute[254900]: 2025-12-02 11:28:50.425 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:28:50 np0005542249 nova_compute[254900]: 2025-12-02 11:28:50.490 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:28:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/677042122' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:28:50 np0005542249 nova_compute[254900]: 2025-12-02 11:28:50.893 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:28:50 np0005542249 nova_compute[254900]: 2025-12-02 11:28:50.974 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:28:50 np0005542249 nova_compute[254900]: 2025-12-02 11:28:50.976 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:28:50 np0005542249 nova_compute[254900]: 2025-12-02 11:28:50.981 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:28:50 np0005542249 nova_compute[254900]: 2025-12-02 11:28:50.981 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:28:51 np0005542249 nova_compute[254900]: 2025-12-02 11:28:51.154 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:28:51 np0005542249 nova_compute[254900]: 2025-12-02 11:28:51.155 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4111MB free_disk=59.98794174194336GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:28:51 np0005542249 nova_compute[254900]: 2025-12-02 11:28:51.155 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:28:51 np0005542249 nova_compute[254900]: 2025-12-02 11:28:51.156 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:28:51 np0005542249 nova_compute[254900]: 2025-12-02 11:28:51.252 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Instance 66196772-8110-4d36-bdfa-d36400059313 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 06:28:51 np0005542249 nova_compute[254900]: 2025-12-02 11:28:51.252 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Instance fabff5b9-969a-4502-91d0-6adacfa56156 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 06:28:51 np0005542249 nova_compute[254900]: 2025-12-02 11:28:51.253 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:28:51 np0005542249 nova_compute[254900]: 2025-12-02 11:28:51.253 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:28:51 np0005542249 nova_compute[254900]: 2025-12-02 11:28:51.322 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:28:51 np0005542249 nova_compute[254900]: 2025-12-02 11:28:51.692 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:28:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3914848194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:28:51 np0005542249 nova_compute[254900]: 2025-12-02 11:28:51.766 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:28:51 np0005542249 nova_compute[254900]: 2025-12-02 11:28:51.772 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:28:51 np0005542249 nova_compute[254900]: 2025-12-02 11:28:51.846 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:28:51 np0005542249 nova_compute[254900]: 2025-12-02 11:28:51.875 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 06:28:51 np0005542249 nova_compute[254900]: 2025-12-02 11:28:51.875 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:28:52 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1565: 321 pgs: 321 active+clean; 182 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 518 KiB/s wr, 62 op/s
Dec  2 06:28:52 np0005542249 ovn_controller[153849]: 2025-12-02T11:28:52Z|00034|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.8 does not match offer 10.100.0.11
Dec  2 06:28:52 np0005542249 ovn_controller[153849]: 2025-12-02T11:28:52Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:d6:07:e1 10.100.0.11
Dec  2 06:28:52 np0005542249 ovn_controller[153849]: 2025-12-02T11:28:52Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d6:07:e1 10.100.0.11
Dec  2 06:28:52 np0005542249 ovn_controller[153849]: 2025-12-02T11:28:52Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:07:e1 10.100.0.11
Dec  2 06:28:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:28:53 np0005542249 podman[286065]: 2025-12-02 11:28:53.071579688 +0000 UTC m=+0.139732895 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:28:53 np0005542249 nova_compute[254900]: 2025-12-02 11:28:53.824 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:54 np0005542249 podman[286165]: 2025-12-02 11:28:54.167306066 +0000 UTC m=+0.100716117 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  2 06:28:54 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1566: 321 pgs: 321 active+clean; 221 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.6 MiB/s wr, 93 op/s
Dec  2 06:28:54 np0005542249 nova_compute[254900]: 2025-12-02 11:28:54.876 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:28:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:28:54 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:28:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:28:54 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:28:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:28:54 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:28:54 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 736e32cd-ee81-45e9-b21f-684351ce6f45 does not exist
Dec  2 06:28:54 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev c140bce6-2204-46c2-8312-49a867f40271 does not exist
Dec  2 06:28:54 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 86d30658-b4e9-4cbe-a532-0db4987b0548 does not exist
Dec  2 06:28:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:28:54 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:28:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:28:54 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:28:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:28:54 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:28:55 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:28:55 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:28:55 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:28:55 np0005542249 podman[286383]: 2025-12-02 11:28:55.63769038 +0000 UTC m=+0.047208580 container create d030494e5d0f3f5e79dafb684d7f43772b116228f7fce36f92fef1fd47973eca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leakey, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:28:55 np0005542249 systemd[1]: Started libpod-conmon-d030494e5d0f3f5e79dafb684d7f43772b116228f7fce36f92fef1fd47973eca.scope.
Dec  2 06:28:55 np0005542249 podman[286383]: 2025-12-02 11:28:55.614740998 +0000 UTC m=+0.024259178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:28:55 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:28:55 np0005542249 podman[286383]: 2025-12-02 11:28:55.776851787 +0000 UTC m=+0.186370007 container init d030494e5d0f3f5e79dafb684d7f43772b116228f7fce36f92fef1fd47973eca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:28:55 np0005542249 podman[286383]: 2025-12-02 11:28:55.787733892 +0000 UTC m=+0.197252052 container start d030494e5d0f3f5e79dafb684d7f43772b116228f7fce36f92fef1fd47973eca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leakey, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:28:55 np0005542249 podman[286383]: 2025-12-02 11:28:55.793407926 +0000 UTC m=+0.202926146 container attach d030494e5d0f3f5e79dafb684d7f43772b116228f7fce36f92fef1fd47973eca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leakey, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:28:55 np0005542249 systemd[1]: libpod-d030494e5d0f3f5e79dafb684d7f43772b116228f7fce36f92fef1fd47973eca.scope: Deactivated successfully.
Dec  2 06:28:55 np0005542249 amazing_leakey[286399]: 167 167
Dec  2 06:28:55 np0005542249 conmon[286399]: conmon d030494e5d0f3f5e79da <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d030494e5d0f3f5e79dafb684d7f43772b116228f7fce36f92fef1fd47973eca.scope/container/memory.events
Dec  2 06:28:55 np0005542249 podman[286383]: 2025-12-02 11:28:55.798533725 +0000 UTC m=+0.208051905 container died d030494e5d0f3f5e79dafb684d7f43772b116228f7fce36f92fef1fd47973eca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 06:28:55 np0005542249 systemd[1]: var-lib-containers-storage-overlay-af348694d96952a5c5d1690eae8ee62e04e835052796654bd1bf5e56c13bd55f-merged.mount: Deactivated successfully.
Dec  2 06:28:55 np0005542249 podman[286383]: 2025-12-02 11:28:55.849931056 +0000 UTC m=+0.259449216 container remove d030494e5d0f3f5e79dafb684d7f43772b116228f7fce36f92fef1fd47973eca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leakey, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:28:55 np0005542249 systemd[1]: libpod-conmon-d030494e5d0f3f5e79dafb684d7f43772b116228f7fce36f92fef1fd47973eca.scope: Deactivated successfully.
Dec  2 06:28:56 np0005542249 podman[286423]: 2025-12-02 11:28:56.06984935 +0000 UTC m=+0.058966487 container create c2113e805fbe65655d51c8802c7cfca90b7e74543a524de2d2b0db725cd33a73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  2 06:28:56 np0005542249 systemd[1]: Started libpod-conmon-c2113e805fbe65655d51c8802c7cfca90b7e74543a524de2d2b0db725cd33a73.scope.
Dec  2 06:28:56 np0005542249 podman[286423]: 2025-12-02 11:28:56.045265435 +0000 UTC m=+0.034382552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:28:56 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:28:56 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4263c3d446b3a7a5457ec786f7359d7b96a7347cf8607bc819510e79141b39b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:28:56 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4263c3d446b3a7a5457ec786f7359d7b96a7347cf8607bc819510e79141b39b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:28:56 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4263c3d446b3a7a5457ec786f7359d7b96a7347cf8607bc819510e79141b39b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:28:56 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4263c3d446b3a7a5457ec786f7359d7b96a7347cf8607bc819510e79141b39b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:28:56 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4263c3d446b3a7a5457ec786f7359d7b96a7347cf8607bc819510e79141b39b2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:28:56 np0005542249 podman[286423]: 2025-12-02 11:28:56.199656675 +0000 UTC m=+0.188773792 container init c2113e805fbe65655d51c8802c7cfca90b7e74543a524de2d2b0db725cd33a73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:28:56 np0005542249 podman[286423]: 2025-12-02 11:28:56.206778248 +0000 UTC m=+0.195895375 container start c2113e805fbe65655d51c8802c7cfca90b7e74543a524de2d2b0db725cd33a73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:28:56 np0005542249 podman[286423]: 2025-12-02 11:28:56.258067927 +0000 UTC m=+0.247185054 container attach c2113e805fbe65655d51c8802c7cfca90b7e74543a524de2d2b0db725cd33a73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  2 06:28:56 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1567: 321 pgs: 321 active+clean; 263 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 7.0 MiB/s wr, 91 op/s
Dec  2 06:28:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:28:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:28:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:28:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:28:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:28:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:28:56 np0005542249 nova_compute[254900]: 2025-12-02 11:28:56.565 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:56 np0005542249 nova_compute[254900]: 2025-12-02 11:28:56.695 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:57 np0005542249 focused_merkle[286440]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:28:57 np0005542249 focused_merkle[286440]: --> relative data size: 1.0
Dec  2 06:28:57 np0005542249 focused_merkle[286440]: --> All data devices are unavailable
Dec  2 06:28:57 np0005542249 systemd[1]: libpod-c2113e805fbe65655d51c8802c7cfca90b7e74543a524de2d2b0db725cd33a73.scope: Deactivated successfully.
Dec  2 06:28:57 np0005542249 systemd[1]: libpod-c2113e805fbe65655d51c8802c7cfca90b7e74543a524de2d2b0db725cd33a73.scope: Consumed 1.250s CPU time.
Dec  2 06:28:57 np0005542249 podman[286423]: 2025-12-02 11:28:57.528081865 +0000 UTC m=+1.517199002 container died c2113e805fbe65655d51c8802c7cfca90b7e74543a524de2d2b0db725cd33a73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:28:57 np0005542249 systemd[1]: var-lib-containers-storage-overlay-4263c3d446b3a7a5457ec786f7359d7b96a7347cf8607bc819510e79141b39b2-merged.mount: Deactivated successfully.
Dec  2 06:28:57 np0005542249 podman[286423]: 2025-12-02 11:28:57.610763354 +0000 UTC m=+1.599880451 container remove c2113e805fbe65655d51c8802c7cfca90b7e74543a524de2d2b0db725cd33a73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_merkle, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:28:57 np0005542249 systemd[1]: libpod-conmon-c2113e805fbe65655d51c8802c7cfca90b7e74543a524de2d2b0db725cd33a73.scope: Deactivated successfully.
Dec  2 06:28:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:28:58 np0005542249 nova_compute[254900]: 2025-12-02 11:28:58.018 254904 DEBUG oslo_concurrency.lockutils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "c2b160a2-030e-4625-b36f-060da406de08" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:28:58 np0005542249 nova_compute[254900]: 2025-12-02 11:28:58.019 254904 DEBUG oslo_concurrency.lockutils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "c2b160a2-030e-4625-b36f-060da406de08" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:28:58 np0005542249 nova_compute[254900]: 2025-12-02 11:28:58.035 254904 DEBUG nova.compute.manager [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 06:28:58 np0005542249 nova_compute[254900]: 2025-12-02 11:28:58.115 254904 DEBUG oslo_concurrency.lockutils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:28:58 np0005542249 nova_compute[254900]: 2025-12-02 11:28:58.116 254904 DEBUG oslo_concurrency.lockutils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:28:58 np0005542249 nova_compute[254900]: 2025-12-02 11:28:58.124 254904 DEBUG nova.virt.hardware [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 06:28:58 np0005542249 nova_compute[254900]: 2025-12-02 11:28:58.124 254904 INFO nova.compute.claims [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 06:28:58 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1568: 321 pgs: 321 active+clean; 299 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 9.9 MiB/s wr, 100 op/s
Dec  2 06:28:58 np0005542249 nova_compute[254900]: 2025-12-02 11:28:58.272 254904 DEBUG oslo_concurrency.processutils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:28:58 np0005542249 podman[286624]: 2025-12-02 11:28:58.425567556 +0000 UTC m=+0.041796363 container create b346a172c7ad52282dcb9ce1dc4524ff05a6f68c44e93dc15a15c95f6e9722eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  2 06:28:58 np0005542249 systemd[1]: Started libpod-conmon-b346a172c7ad52282dcb9ce1dc4524ff05a6f68c44e93dc15a15c95f6e9722eb.scope.
Dec  2 06:28:58 np0005542249 podman[286624]: 2025-12-02 11:28:58.407717862 +0000 UTC m=+0.023946699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:28:58 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:28:58 np0005542249 podman[286624]: 2025-12-02 11:28:58.526362785 +0000 UTC m=+0.142591602 container init b346a172c7ad52282dcb9ce1dc4524ff05a6f68c44e93dc15a15c95f6e9722eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_heisenberg, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  2 06:28:58 np0005542249 podman[286624]: 2025-12-02 11:28:58.537404174 +0000 UTC m=+0.153632981 container start b346a172c7ad52282dcb9ce1dc4524ff05a6f68c44e93dc15a15c95f6e9722eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_heisenberg, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  2 06:28:58 np0005542249 podman[286624]: 2025-12-02 11:28:58.542312826 +0000 UTC m=+0.158541673 container attach b346a172c7ad52282dcb9ce1dc4524ff05a6f68c44e93dc15a15c95f6e9722eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_heisenberg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Dec  2 06:28:58 np0005542249 quirky_heisenberg[286659]: 167 167
Dec  2 06:28:58 np0005542249 systemd[1]: libpod-b346a172c7ad52282dcb9ce1dc4524ff05a6f68c44e93dc15a15c95f6e9722eb.scope: Deactivated successfully.
Dec  2 06:28:58 np0005542249 podman[286624]: 2025-12-02 11:28:58.54504928 +0000 UTC m=+0.161278097 container died b346a172c7ad52282dcb9ce1dc4524ff05a6f68c44e93dc15a15c95f6e9722eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_heisenberg, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:28:58 np0005542249 systemd[1]: var-lib-containers-storage-overlay-8d5e78b6fbb2b03d436f24c30bf63abad5ef1cfab5686321a271f83dc43d664a-merged.mount: Deactivated successfully.
Dec  2 06:28:58 np0005542249 podman[286624]: 2025-12-02 11:28:58.584316854 +0000 UTC m=+0.200545661 container remove b346a172c7ad52282dcb9ce1dc4524ff05a6f68c44e93dc15a15c95f6e9722eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_heisenberg, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  2 06:28:58 np0005542249 systemd[1]: libpod-conmon-b346a172c7ad52282dcb9ce1dc4524ff05a6f68c44e93dc15a15c95f6e9722eb.scope: Deactivated successfully.
Dec  2 06:28:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:28:58 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2500685096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:28:58 np0005542249 nova_compute[254900]: 2025-12-02 11:28:58.727 254904 DEBUG oslo_concurrency.processutils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:28:58 np0005542249 nova_compute[254900]: 2025-12-02 11:28:58.738 254904 DEBUG nova.compute.provider_tree [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:28:58 np0005542249 nova_compute[254900]: 2025-12-02 11:28:58.755 254904 DEBUG nova.scheduler.client.report [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:28:58 np0005542249 nova_compute[254900]: 2025-12-02 11:28:58.786 254904 DEBUG oslo_concurrency.lockutils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:28:58 np0005542249 nova_compute[254900]: 2025-12-02 11:28:58.786 254904 DEBUG nova.compute.manager [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 06:28:58 np0005542249 podman[286684]: 2025-12-02 11:28:58.805912013 +0000 UTC m=+0.072969206 container create 1df17c3d8627fc19ed2ce2cd587590b9bc00cad1e7b8015c0e055932832a0ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_nash, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:28:58 np0005542249 nova_compute[254900]: 2025-12-02 11:28:58.827 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:28:58 np0005542249 nova_compute[254900]: 2025-12-02 11:28:58.848 254904 DEBUG nova.compute.manager [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 06:28:58 np0005542249 nova_compute[254900]: 2025-12-02 11:28:58.848 254904 DEBUG nova.network.neutron [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 06:28:58 np0005542249 systemd[1]: Started libpod-conmon-1df17c3d8627fc19ed2ce2cd587590b9bc00cad1e7b8015c0e055932832a0ea6.scope.
Dec  2 06:28:58 np0005542249 podman[286684]: 2025-12-02 11:28:58.778983495 +0000 UTC m=+0.046040738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:28:58 np0005542249 nova_compute[254900]: 2025-12-02 11:28:58.884 254904 INFO nova.virt.libvirt.driver [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 06:28:58 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:28:58 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce4299588320fb0eead02d6500f7e28684b507c5f84b01f29080bb7affef7b75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:28:58 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce4299588320fb0eead02d6500f7e28684b507c5f84b01f29080bb7affef7b75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:28:58 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce4299588320fb0eead02d6500f7e28684b507c5f84b01f29080bb7affef7b75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:28:58 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce4299588320fb0eead02d6500f7e28684b507c5f84b01f29080bb7affef7b75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:28:58 np0005542249 nova_compute[254900]: 2025-12-02 11:28:58.910 254904 DEBUG nova.compute.manager [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 06:28:58 np0005542249 podman[286684]: 2025-12-02 11:28:58.915868411 +0000 UTC m=+0.182925644 container init 1df17c3d8627fc19ed2ce2cd587590b9bc00cad1e7b8015c0e055932832a0ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  2 06:28:58 np0005542249 podman[286684]: 2025-12-02 11:28:58.927118726 +0000 UTC m=+0.194175919 container start 1df17c3d8627fc19ed2ce2cd587590b9bc00cad1e7b8015c0e055932832a0ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  2 06:28:58 np0005542249 podman[286684]: 2025-12-02 11:28:58.930867498 +0000 UTC m=+0.197924691 container attach 1df17c3d8627fc19ed2ce2cd587590b9bc00cad1e7b8015c0e055932832a0ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:28:58 np0005542249 nova_compute[254900]: 2025-12-02 11:28:58.964 254904 INFO nova.virt.block_device [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Booting with volume 88f19573-e013-4d95-9327-f6a5bc06f0d0 at /dev/vda#033[00m
Dec  2 06:28:59 np0005542249 nova_compute[254900]: 2025-12-02 11:28:59.059 254904 DEBUG nova.policy [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1caa62e7ee8b42be98bc34780a7197f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a893d0c223f746328e706d7491d73b20', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 06:28:59 np0005542249 nova_compute[254900]: 2025-12-02 11:28:59.147 254904 DEBUG os_brick.utils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:28:59 np0005542249 nova_compute[254900]: 2025-12-02 11:28:59.150 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:28:59 np0005542249 nova_compute[254900]: 2025-12-02 11:28:59.177 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:28:59 np0005542249 nova_compute[254900]: 2025-12-02 11:28:59.177 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[6e64cd0b-2d4c-46a7-a640-1422bb22d806]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:28:59 np0005542249 nova_compute[254900]: 2025-12-02 11:28:59.179 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:28:59 np0005542249 nova_compute[254900]: 2025-12-02 11:28:59.192 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:28:59 np0005542249 nova_compute[254900]: 2025-12-02 11:28:59.193 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[f2864af1-dd1f-4514-90ea-ef1a0b9fbd18]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:28:59 np0005542249 nova_compute[254900]: 2025-12-02 11:28:59.194 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:28:59 np0005542249 nova_compute[254900]: 2025-12-02 11:28:59.210 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:28:59 np0005542249 nova_compute[254900]: 2025-12-02 11:28:59.210 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[4c729110-37cc-4a57-9d90-4e280989e792]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:28:59 np0005542249 nova_compute[254900]: 2025-12-02 11:28:59.212 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[5af0a3dc-d10a-4b37-9cb1-e06509f8d155]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:28:59 np0005542249 nova_compute[254900]: 2025-12-02 11:28:59.213 254904 DEBUG oslo_concurrency.processutils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:28:59 np0005542249 nova_compute[254900]: 2025-12-02 11:28:59.238 254904 DEBUG oslo_concurrency.processutils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:28:59 np0005542249 nova_compute[254900]: 2025-12-02 11:28:59.242 254904 DEBUG os_brick.initiator.connectors.lightos [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:28:59 np0005542249 nova_compute[254900]: 2025-12-02 11:28:59.243 254904 DEBUG os_brick.initiator.connectors.lightos [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:28:59 np0005542249 nova_compute[254900]: 2025-12-02 11:28:59.243 254904 DEBUG os_brick.initiator.connectors.lightos [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:28:59 np0005542249 nova_compute[254900]: 2025-12-02 11:28:59.244 254904 DEBUG os_brick.utils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] <== get_connector_properties: return (95ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:28:59 np0005542249 nova_compute[254900]: 2025-12-02 11:28:59.245 254904 DEBUG nova.virt.block_device [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Updating existing volume attachment record: 9d4ca2f0-dd73-4836-a7ae-017797ef7275 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:28:59 np0005542249 eager_nash[286701]: {
Dec  2 06:28:59 np0005542249 eager_nash[286701]:    "0": [
Dec  2 06:28:59 np0005542249 eager_nash[286701]:        {
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "devices": [
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "/dev/loop3"
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            ],
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "lv_name": "ceph_lv0",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "lv_size": "21470642176",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "name": "ceph_lv0",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "tags": {
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.cluster_name": "ceph",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.crush_device_class": "",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.encrypted": "0",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.osd_id": "0",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.type": "block",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.vdo": "0"
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            },
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "type": "block",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "vg_name": "ceph_vg0"
Dec  2 06:28:59 np0005542249 eager_nash[286701]:        }
Dec  2 06:28:59 np0005542249 eager_nash[286701]:    ],
Dec  2 06:28:59 np0005542249 eager_nash[286701]:    "1": [
Dec  2 06:28:59 np0005542249 eager_nash[286701]:        {
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "devices": [
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "/dev/loop4"
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            ],
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "lv_name": "ceph_lv1",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "lv_size": "21470642176",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "name": "ceph_lv1",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "tags": {
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.cluster_name": "ceph",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.crush_device_class": "",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.encrypted": "0",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.osd_id": "1",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.type": "block",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.vdo": "0"
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            },
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "type": "block",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "vg_name": "ceph_vg1"
Dec  2 06:28:59 np0005542249 eager_nash[286701]:        }
Dec  2 06:28:59 np0005542249 eager_nash[286701]:    ],
Dec  2 06:28:59 np0005542249 eager_nash[286701]:    "2": [
Dec  2 06:28:59 np0005542249 eager_nash[286701]:        {
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "devices": [
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "/dev/loop5"
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            ],
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "lv_name": "ceph_lv2",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "lv_size": "21470642176",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "name": "ceph_lv2",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "tags": {
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.cluster_name": "ceph",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.crush_device_class": "",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.encrypted": "0",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.osd_id": "2",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.type": "block",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:                "ceph.vdo": "0"
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            },
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "type": "block",
Dec  2 06:28:59 np0005542249 eager_nash[286701]:            "vg_name": "ceph_vg2"
Dec  2 06:28:59 np0005542249 eager_nash[286701]:        }
Dec  2 06:28:59 np0005542249 eager_nash[286701]:    ]
Dec  2 06:28:59 np0005542249 eager_nash[286701]: }
Dec  2 06:28:59 np0005542249 systemd[1]: libpod-1df17c3d8627fc19ed2ce2cd587590b9bc00cad1e7b8015c0e055932832a0ea6.scope: Deactivated successfully.
Dec  2 06:28:59 np0005542249 podman[286684]: 2025-12-02 11:28:59.847594299 +0000 UTC m=+1.114651532 container died 1df17c3d8627fc19ed2ce2cd587590b9bc00cad1e7b8015c0e055932832a0ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:28:59 np0005542249 systemd[1]: var-lib-containers-storage-overlay-ce4299588320fb0eead02d6500f7e28684b507c5f84b01f29080bb7affef7b75-merged.mount: Deactivated successfully.
Dec  2 06:28:59 np0005542249 podman[286684]: 2025-12-02 11:28:59.917624155 +0000 UTC m=+1.184681338 container remove 1df17c3d8627fc19ed2ce2cd587590b9bc00cad1e7b8015c0e055932832a0ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  2 06:28:59 np0005542249 systemd[1]: libpod-conmon-1df17c3d8627fc19ed2ce2cd587590b9bc00cad1e7b8015c0e055932832a0ea6.scope: Deactivated successfully.
Dec  2 06:28:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:28:59 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4263295314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:29:00 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1569: 321 pgs: 321 active+clean; 299 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 737 KiB/s rd, 9.7 MiB/s wr, 78 op/s
Dec  2 06:29:00 np0005542249 nova_compute[254900]: 2025-12-02 11:29:00.690 254904 DEBUG nova.compute.manager [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  2 06:29:00 np0005542249 nova_compute[254900]: 2025-12-02 11:29:00.692 254904 DEBUG nova.virt.libvirt.driver [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  2 06:29:00 np0005542249 nova_compute[254900]: 2025-12-02 11:29:00.692 254904 INFO nova.virt.libvirt.driver [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Creating image(s)
Dec  2 06:29:00 np0005542249 nova_compute[254900]: 2025-12-02 11:29:00.692 254904 DEBUG nova.virt.libvirt.driver [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec  2 06:29:00 np0005542249 nova_compute[254900]: 2025-12-02 11:29:00.693 254904 DEBUG nova.virt.libvirt.driver [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Ensure instance console log exists: /var/lib/nova/instances/c2b160a2-030e-4625-b36f-060da406de08/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  2 06:29:00 np0005542249 nova_compute[254900]: 2025-12-02 11:29:00.693 254904 DEBUG oslo_concurrency.lockutils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:29:00 np0005542249 nova_compute[254900]: 2025-12-02 11:29:00.693 254904 DEBUG oslo_concurrency.lockutils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:29:00 np0005542249 nova_compute[254900]: 2025-12-02 11:29:00.693 254904 DEBUG oslo_concurrency.lockutils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:29:00 np0005542249 podman[286870]: 2025-12-02 11:29:00.717616666 +0000 UTC m=+0.052829021 container create 7e9abaa2fcd12f4c850573d7eebbd77626fba1a85b5378700c39935d0c6fd4f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  2 06:29:00 np0005542249 systemd[1]: Started libpod-conmon-7e9abaa2fcd12f4c850573d7eebbd77626fba1a85b5378700c39935d0c6fd4f0.scope.
Dec  2 06:29:00 np0005542249 podman[286870]: 2025-12-02 11:29:00.694353406 +0000 UTC m=+0.029565741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:29:00 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:29:00 np0005542249 podman[286870]: 2025-12-02 11:29:00.822319881 +0000 UTC m=+0.157532216 container init 7e9abaa2fcd12f4c850573d7eebbd77626fba1a85b5378700c39935d0c6fd4f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:29:00 np0005542249 podman[286870]: 2025-12-02 11:29:00.832557918 +0000 UTC m=+0.167770243 container start 7e9abaa2fcd12f4c850573d7eebbd77626fba1a85b5378700c39935d0c6fd4f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bose, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:29:00 np0005542249 podman[286870]: 2025-12-02 11:29:00.836587968 +0000 UTC m=+0.171800303 container attach 7e9abaa2fcd12f4c850573d7eebbd77626fba1a85b5378700c39935d0c6fd4f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  2 06:29:00 np0005542249 nifty_bose[286885]: 167 167
Dec  2 06:29:00 np0005542249 systemd[1]: libpod-7e9abaa2fcd12f4c850573d7eebbd77626fba1a85b5378700c39935d0c6fd4f0.scope: Deactivated successfully.
Dec  2 06:29:00 np0005542249 podman[286870]: 2025-12-02 11:29:00.842952649 +0000 UTC m=+0.178164984 container died 7e9abaa2fcd12f4c850573d7eebbd77626fba1a85b5378700c39935d0c6fd4f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  2 06:29:00 np0005542249 systemd[1]: var-lib-containers-storage-overlay-d4620743e4495662ee6a6f3c654c191518f075b06d3d70439a4607859cce9f01-merged.mount: Deactivated successfully.
Dec  2 06:29:00 np0005542249 nova_compute[254900]: 2025-12-02 11:29:00.879 254904 DEBUG nova.network.neutron [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Successfully created port: cf68cb85-2ba9-4bcb-acd6-30526d17ca18 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  2 06:29:00 np0005542249 podman[286870]: 2025-12-02 11:29:00.889515611 +0000 UTC m=+0.224727946 container remove 7e9abaa2fcd12f4c850573d7eebbd77626fba1a85b5378700c39935d0c6fd4f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  2 06:29:00 np0005542249 systemd[1]: libpod-conmon-7e9abaa2fcd12f4c850573d7eebbd77626fba1a85b5378700c39935d0c6fd4f0.scope: Deactivated successfully.
Dec  2 06:29:01 np0005542249 podman[286908]: 2025-12-02 11:29:01.102065915 +0000 UTC m=+0.072974997 container create 663fb7a87881d83deba3191f450e2bfc3d01009fb4b66b381f249a12383c5a14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 06:29:01 np0005542249 systemd[1]: Started libpod-conmon-663fb7a87881d83deba3191f450e2bfc3d01009fb4b66b381f249a12383c5a14.scope.
Dec  2 06:29:01 np0005542249 podman[286908]: 2025-12-02 11:29:01.078312212 +0000 UTC m=+0.049221374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:29:01 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:29:01 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045979756a51734717fee0bd6ea13779552220ebfb484586034be4d657c461fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:29:01 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045979756a51734717fee0bd6ea13779552220ebfb484586034be4d657c461fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:29:01 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045979756a51734717fee0bd6ea13779552220ebfb484586034be4d657c461fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:29:01 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045979756a51734717fee0bd6ea13779552220ebfb484586034be4d657c461fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:29:01 np0005542249 podman[286908]: 2025-12-02 11:29:01.226365571 +0000 UTC m=+0.197274673 container init 663fb7a87881d83deba3191f450e2bfc3d01009fb4b66b381f249a12383c5a14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_rubin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:29:01 np0005542249 podman[286908]: 2025-12-02 11:29:01.239093666 +0000 UTC m=+0.210002758 container start 663fb7a87881d83deba3191f450e2bfc3d01009fb4b66b381f249a12383c5a14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_rubin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:29:01 np0005542249 podman[286908]: 2025-12-02 11:29:01.242840137 +0000 UTC m=+0.213749319 container attach 663fb7a87881d83deba3191f450e2bfc3d01009fb4b66b381f249a12383c5a14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_rubin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  2 06:29:01 np0005542249 nova_compute[254900]: 2025-12-02 11:29:01.681 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:29:01 np0005542249 nova_compute[254900]: 2025-12-02 11:29:01.698 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:29:01 np0005542249 nova_compute[254900]: 2025-12-02 11:29:01.750 254904 DEBUG nova.network.neutron [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Successfully updated port: cf68cb85-2ba9-4bcb-acd6-30526d17ca18 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  2 06:29:01 np0005542249 nova_compute[254900]: 2025-12-02 11:29:01.770 254904 DEBUG oslo_concurrency.lockutils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "refresh_cache-c2b160a2-030e-4625-b36f-060da406de08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  2 06:29:01 np0005542249 nova_compute[254900]: 2025-12-02 11:29:01.771 254904 DEBUG oslo_concurrency.lockutils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquired lock "refresh_cache-c2b160a2-030e-4625-b36f-060da406de08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  2 06:29:01 np0005542249 nova_compute[254900]: 2025-12-02 11:29:01.771 254904 DEBUG nova.network.neutron [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  2 06:29:01 np0005542249 nova_compute[254900]: 2025-12-02 11:29:01.883 254904 DEBUG nova.compute.manager [req-84a87107-a155-4fe9-aba3-9bbef32ce3af req-2080d91c-3f9c-45cc-a94d-51c7e8404319 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Received event network-changed-cf68cb85-2ba9-4bcb-acd6-30526d17ca18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  2 06:29:01 np0005542249 nova_compute[254900]: 2025-12-02 11:29:01.884 254904 DEBUG nova.compute.manager [req-84a87107-a155-4fe9-aba3-9bbef32ce3af req-2080d91c-3f9c-45cc-a94d-51c7e8404319 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Refreshing instance network info cache due to event network-changed-cf68cb85-2ba9-4bcb-acd6-30526d17ca18. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  2 06:29:01 np0005542249 nova_compute[254900]: 2025-12-02 11:29:01.884 254904 DEBUG oslo_concurrency.lockutils [req-84a87107-a155-4fe9-aba3-9bbef32ce3af req-2080d91c-3f9c-45cc-a94d-51c7e8404319 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-c2b160a2-030e-4625-b36f-060da406de08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  2 06:29:01 np0005542249 nova_compute[254900]: 2025-12-02 11:29:01.914 254904 DEBUG nova.network.neutron [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  2 06:29:02 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1570: 321 pgs: 321 active+clean; 299 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 106 KiB/s rd, 9.4 MiB/s wr, 50 op/s
Dec  2 06:29:02 np0005542249 exciting_rubin[286924]: {
Dec  2 06:29:02 np0005542249 exciting_rubin[286924]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:29:02 np0005542249 exciting_rubin[286924]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:29:02 np0005542249 exciting_rubin[286924]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:29:02 np0005542249 exciting_rubin[286924]:        "osd_id": 0,
Dec  2 06:29:02 np0005542249 exciting_rubin[286924]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:29:02 np0005542249 exciting_rubin[286924]:        "type": "bluestore"
Dec  2 06:29:02 np0005542249 exciting_rubin[286924]:    },
Dec  2 06:29:02 np0005542249 exciting_rubin[286924]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:29:02 np0005542249 exciting_rubin[286924]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:29:02 np0005542249 exciting_rubin[286924]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:29:02 np0005542249 exciting_rubin[286924]:        "osd_id": 2,
Dec  2 06:29:02 np0005542249 exciting_rubin[286924]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:29:02 np0005542249 exciting_rubin[286924]:        "type": "bluestore"
Dec  2 06:29:02 np0005542249 exciting_rubin[286924]:    },
Dec  2 06:29:02 np0005542249 exciting_rubin[286924]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:29:02 np0005542249 exciting_rubin[286924]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:29:02 np0005542249 exciting_rubin[286924]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:29:02 np0005542249 exciting_rubin[286924]:        "osd_id": 1,
Dec  2 06:29:02 np0005542249 exciting_rubin[286924]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:29:02 np0005542249 exciting_rubin[286924]:        "type": "bluestore"
Dec  2 06:29:02 np0005542249 exciting_rubin[286924]:    }
Dec  2 06:29:02 np0005542249 exciting_rubin[286924]: }
Dec  2 06:29:02 np0005542249 systemd[1]: libpod-663fb7a87881d83deba3191f450e2bfc3d01009fb4b66b381f249a12383c5a14.scope: Deactivated successfully.
Dec  2 06:29:02 np0005542249 systemd[1]: libpod-663fb7a87881d83deba3191f450e2bfc3d01009fb4b66b381f249a12383c5a14.scope: Consumed 1.088s CPU time.
Dec  2 06:29:02 np0005542249 podman[286957]: 2025-12-02 11:29:02.379749941 +0000 UTC m=+0.039064629 container died 663fb7a87881d83deba3191f450e2bfc3d01009fb4b66b381f249a12383c5a14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_rubin, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  2 06:29:02 np0005542249 systemd[1]: var-lib-containers-storage-overlay-045979756a51734717fee0bd6ea13779552220ebfb484586034be4d657c461fc-merged.mount: Deactivated successfully.
Dec  2 06:29:02 np0005542249 podman[286957]: 2025-12-02 11:29:02.454356681 +0000 UTC m=+0.113671329 container remove 663fb7a87881d83deba3191f450e2bfc3d01009fb4b66b381f249a12383c5a14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:29:02 np0005542249 systemd[1]: libpod-conmon-663fb7a87881d83deba3191f450e2bfc3d01009fb4b66b381f249a12383c5a14.scope: Deactivated successfully.
Dec  2 06:29:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:29:02 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:29:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:29:02 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:29:02 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 1be6ff65-4778-4a57-bb67-5a7f83a1e95e does not exist
Dec  2 06:29:02 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 300a10ff-c524-406f-9354-a857245db384 does not exist
Dec  2 06:29:02 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:29:02 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:29:02 np0005542249 nova_compute[254900]: 2025-12-02 11:29:02.671 254904 DEBUG nova.network.neutron [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Updating instance_info_cache with network_info: [{"id": "cf68cb85-2ba9-4bcb-acd6-30526d17ca18", "address": "fa:16:3e:dd:1b:59", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf68cb85-2b", "ovs_interfaceid": "cf68cb85-2ba9-4bcb-acd6-30526d17ca18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:29:02 np0005542249 nova_compute[254900]: 2025-12-02 11:29:02.690 254904 DEBUG oslo_concurrency.lockutils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Releasing lock "refresh_cache-c2b160a2-030e-4625-b36f-060da406de08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:29:02 np0005542249 nova_compute[254900]: 2025-12-02 11:29:02.690 254904 DEBUG nova.compute.manager [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Instance network_info: |[{"id": "cf68cb85-2ba9-4bcb-acd6-30526d17ca18", "address": "fa:16:3e:dd:1b:59", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf68cb85-2b", "ovs_interfaceid": "cf68cb85-2ba9-4bcb-acd6-30526d17ca18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 06:29:02 np0005542249 nova_compute[254900]: 2025-12-02 11:29:02.690 254904 DEBUG oslo_concurrency.lockutils [req-84a87107-a155-4fe9-aba3-9bbef32ce3af req-2080d91c-3f9c-45cc-a94d-51c7e8404319 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-c2b160a2-030e-4625-b36f-060da406de08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:29:02 np0005542249 nova_compute[254900]: 2025-12-02 11:29:02.691 254904 DEBUG nova.network.neutron [req-84a87107-a155-4fe9-aba3-9bbef32ce3af req-2080d91c-3f9c-45cc-a94d-51c7e8404319 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Refreshing network info cache for port cf68cb85-2ba9-4bcb-acd6-30526d17ca18 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:29:02 np0005542249 nova_compute[254900]: 2025-12-02 11:29:02.695 254904 DEBUG nova.virt.libvirt.driver [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Start _get_guest_xml network_info=[{"id": "cf68cb85-2ba9-4bcb-acd6-30526d17ca18", "address": "fa:16:3e:dd:1b:59", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf68cb85-2b", "ovs_interfaceid": "cf68cb85-2ba9-4bcb-acd6-30526d17ca18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-88f19573-e013-4d95-9327-f6a5bc06f0d0', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '88f19573-e013-4d95-9327-f6a5bc06f0d0', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'c2b160a2-030e-4625-b36f-060da406de08', 'attached_at': '', 'detached_at': '', 'volume_id': '88f19573-e013-4d95-9327-f6a5bc06f0d0', 'serial': '88f19573-e013-4d95-9327-f6a5bc06f0d0'}, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'attachment_id': '9d4ca2f0-dd73-4836-a7ae-017797ef7275', 'delete_on_termination': False, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:29:02 np0005542249 nova_compute[254900]: 2025-12-02 11:29:02.701 254904 WARNING nova.virt.libvirt.driver [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:29:02 np0005542249 nova_compute[254900]: 2025-12-02 11:29:02.703 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:02 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:02.702 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:23:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:25:d0:a1:75:ed'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:29:02 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:02.703 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 06:29:02 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:02.704 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:02 np0005542249 nova_compute[254900]: 2025-12-02 11:29:02.707 254904 DEBUG nova.virt.libvirt.host [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:29:02 np0005542249 nova_compute[254900]: 2025-12-02 11:29:02.708 254904 DEBUG nova.virt.libvirt.host [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:29:02 np0005542249 nova_compute[254900]: 2025-12-02 11:29:02.711 254904 DEBUG nova.virt.libvirt.host [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:29:02 np0005542249 nova_compute[254900]: 2025-12-02 11:29:02.712 254904 DEBUG nova.virt.libvirt.host [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:29:02 np0005542249 nova_compute[254900]: 2025-12-02 11:29:02.712 254904 DEBUG nova.virt.libvirt.driver [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:29:02 np0005542249 nova_compute[254900]: 2025-12-02 11:29:02.713 254904 DEBUG nova.virt.hardware [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:29:02 np0005542249 nova_compute[254900]: 2025-12-02 11:29:02.713 254904 DEBUG nova.virt.hardware [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:29:02 np0005542249 nova_compute[254900]: 2025-12-02 11:29:02.714 254904 DEBUG nova.virt.hardware [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:29:02 np0005542249 nova_compute[254900]: 2025-12-02 11:29:02.714 254904 DEBUG nova.virt.hardware [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:29:02 np0005542249 nova_compute[254900]: 2025-12-02 11:29:02.714 254904 DEBUG nova.virt.hardware [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:29:02 np0005542249 nova_compute[254900]: 2025-12-02 11:29:02.714 254904 DEBUG nova.virt.hardware [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:29:02 np0005542249 nova_compute[254900]: 2025-12-02 11:29:02.715 254904 DEBUG nova.virt.hardware [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:29:02 np0005542249 nova_compute[254900]: 2025-12-02 11:29:02.715 254904 DEBUG nova.virt.hardware [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:29:02 np0005542249 nova_compute[254900]: 2025-12-02 11:29:02.715 254904 DEBUG nova.virt.hardware [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:29:02 np0005542249 nova_compute[254900]: 2025-12-02 11:29:02.715 254904 DEBUG nova.virt.hardware [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:29:02 np0005542249 nova_compute[254900]: 2025-12-02 11:29:02.716 254904 DEBUG nova.virt.hardware [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:29:02 np0005542249 nova_compute[254900]: 2025-12-02 11:29:02.747 254904 DEBUG nova.storage.rbd_utils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] rbd image c2b160a2-030e-4625-b36f-060da406de08_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:29:02 np0005542249 nova_compute[254900]: 2025-12-02 11:29:02.752 254904 DEBUG oslo_concurrency.processutils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:29:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:29:03 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/17396675' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.274 254904 DEBUG oslo_concurrency.processutils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.421 254904 DEBUG os_brick.encryptors [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Using volume encryption metadata '{'encryption_key_id': '8dcda367-e6d0-43b1-b4fe-f315fa93d5e0', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-88f19573-e013-4d95-9327-f6a5bc06f0d0', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '88f19573-e013-4d95-9327-f6a5bc06f0d0', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'c2b160a2-030e-4625-b36f-060da406de08', 'attached_at': '', 'detached_at': '', 'volume_id': '88f19573-e013-4d95-9327-f6a5bc06f0d0', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.423 254904 DEBUG barbicanclient.client [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.437 254904 DEBUG barbicanclient.v1.secrets [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/8dcda367-e6d0-43b1-b4fe-f315fa93d5e0 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.438 254904 INFO barbicanclient.base [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/8dcda367-e6d0-43b1-b4fe-f315fa93d5e0#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.462 254904 DEBUG barbicanclient.client [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.463 254904 INFO barbicanclient.base [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/8dcda367-e6d0-43b1-b4fe-f315fa93d5e0#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.493 254904 DEBUG barbicanclient.client [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.493 254904 INFO barbicanclient.base [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/8dcda367-e6d0-43b1-b4fe-f315fa93d5e0#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.524 254904 DEBUG barbicanclient.client [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.524 254904 INFO barbicanclient.base [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/8dcda367-e6d0-43b1-b4fe-f315fa93d5e0#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.559 254904 DEBUG barbicanclient.client [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.560 254904 INFO barbicanclient.base [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/8dcda367-e6d0-43b1-b4fe-f315fa93d5e0#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.586 254904 DEBUG barbicanclient.client [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.587 254904 INFO barbicanclient.base [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/8dcda367-e6d0-43b1-b4fe-f315fa93d5e0#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.613 254904 DEBUG barbicanclient.client [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.613 254904 INFO barbicanclient.base [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/8dcda367-e6d0-43b1-b4fe-f315fa93d5e0#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.649 254904 DEBUG barbicanclient.client [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.649 254904 INFO barbicanclient.base [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/8dcda367-e6d0-43b1-b4fe-f315fa93d5e0#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.678 254904 DEBUG barbicanclient.client [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.679 254904 INFO barbicanclient.base [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/8dcda367-e6d0-43b1-b4fe-f315fa93d5e0#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.705 254904 DEBUG barbicanclient.client [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.706 254904 INFO barbicanclient.base [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/8dcda367-e6d0-43b1-b4fe-f315fa93d5e0#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.870 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.937 254904 DEBUG barbicanclient.client [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.938 254904 INFO barbicanclient.base [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/8dcda367-e6d0-43b1-b4fe-f315fa93d5e0#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.983 254904 DEBUG barbicanclient.client [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:03 np0005542249 nova_compute[254900]: 2025-12-02 11:29:03.983 254904 INFO barbicanclient.base [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/8dcda367-e6d0-43b1-b4fe-f315fa93d5e0#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.029 254904 DEBUG barbicanclient.client [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.030 254904 INFO barbicanclient.base [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/8dcda367-e6d0-43b1-b4fe-f315fa93d5e0#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.054 254904 DEBUG barbicanclient.client [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.055 254904 INFO barbicanclient.base [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/8dcda367-e6d0-43b1-b4fe-f315fa93d5e0#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.084 254904 DEBUG barbicanclient.client [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.084 254904 INFO barbicanclient.base [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/8dcda367-e6d0-43b1-b4fe-f315fa93d5e0#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.108 254904 DEBUG barbicanclient.client [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.109 254904 DEBUG nova.virt.libvirt.host [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Secret XML: <secret ephemeral="no" private="no">
Dec  2 06:29:04 np0005542249 nova_compute[254900]:  <usage type="volume">
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <volume>88f19573-e013-4d95-9327-f6a5bc06f0d0</volume>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:  </usage>
Dec  2 06:29:04 np0005542249 nova_compute[254900]: </secret>
Dec  2 06:29:04 np0005542249 nova_compute[254900]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.153 254904 DEBUG nova.virt.libvirt.vif [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:28:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1955822252',display_name='tempest-TransferEncryptedVolumeTest-server-1955822252',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1955822252',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBPd/5CNJJJVCM7bF71nyziMenMlWa5ulXBeejobfPYAVvOOigTWuMR262ZOPGYLdqIJzc7AMWApUvqaDK/XzxMpH8d3L2DeOAIkexGDCsfnTgIhEJIcaLGeYmajYRiu/w==',key_name='tempest-TransferEncryptedVolumeTest-1560588090',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a893d0c223f746328e706d7491d73b20',ramdisk_id='',reservation_id='r-wgv2eazc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1499588457',owner_user_name='tempest-TransferEncryptedVolumeTest-1499588457-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:28:58Z,user_data=None,user_id='1caa62e7ee8b42be98bc34780a7197f9',uuid=c2b160a2-030e-4625-b36f-060da406de08,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cf68cb85-2ba9-4bcb-acd6-30526d17ca18", "address": "fa:16:3e:dd:1b:59", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf68cb85-2b", "ovs_interfaceid": "cf68cb85-2ba9-4bcb-acd6-30526d17ca18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.154 254904 DEBUG nova.network.os_vif_util [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Converting VIF {"id": "cf68cb85-2ba9-4bcb-acd6-30526d17ca18", "address": "fa:16:3e:dd:1b:59", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf68cb85-2b", "ovs_interfaceid": "cf68cb85-2ba9-4bcb-acd6-30526d17ca18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.155 254904 DEBUG nova.network.os_vif_util [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:1b:59,bridge_name='br-int',has_traffic_filtering=True,id=cf68cb85-2ba9-4bcb-acd6-30526d17ca18,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf68cb85-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.158 254904 DEBUG nova.objects.instance [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lazy-loading 'pci_devices' on Instance uuid c2b160a2-030e-4625-b36f-060da406de08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.181 254904 DEBUG nova.virt.libvirt.driver [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:29:04 np0005542249 nova_compute[254900]:  <uuid>c2b160a2-030e-4625-b36f-060da406de08</uuid>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:  <name>instance-00000014</name>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-1955822252</nova:name>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:29:02</nova:creationTime>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:29:04 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:        <nova:user uuid="1caa62e7ee8b42be98bc34780a7197f9">tempest-TransferEncryptedVolumeTest-1499588457-project-member</nova:user>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:        <nova:project uuid="a893d0c223f746328e706d7491d73b20">tempest-TransferEncryptedVolumeTest-1499588457</nova:project>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:        <nova:port uuid="cf68cb85-2ba9-4bcb-acd6-30526d17ca18">
Dec  2 06:29:04 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <entry name="serial">c2b160a2-030e-4625-b36f-060da406de08</entry>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <entry name="uuid">c2b160a2-030e-4625-b36f-060da406de08</entry>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/c2b160a2-030e-4625-b36f-060da406de08_disk.config">
Dec  2 06:29:04 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:29:04 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="volumes/volume-88f19573-e013-4d95-9327-f6a5bc06f0d0">
Dec  2 06:29:04 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:29:04 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <serial>88f19573-e013-4d95-9327-f6a5bc06f0d0</serial>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <encryption format="luks">
Dec  2 06:29:04 np0005542249 nova_compute[254900]:        <secret type="passphrase" uuid="ad2d246a-4f69-4bc4-8e9c-438aa42cb09a"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      </encryption>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:dd:1b:59"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <target dev="tapcf68cb85-2b"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/c2b160a2-030e-4625-b36f-060da406de08/console.log" append="off"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:29:04 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:29:04 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:29:04 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:29:04 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.182 254904 DEBUG nova.compute.manager [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Preparing to wait for external event network-vif-plugged-cf68cb85-2ba9-4bcb-acd6-30526d17ca18 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.183 254904 DEBUG oslo_concurrency.lockutils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "c2b160a2-030e-4625-b36f-060da406de08-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.183 254904 DEBUG oslo_concurrency.lockutils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "c2b160a2-030e-4625-b36f-060da406de08-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.184 254904 DEBUG oslo_concurrency.lockutils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "c2b160a2-030e-4625-b36f-060da406de08-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.185 254904 DEBUG nova.virt.libvirt.vif [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:28:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1955822252',display_name='tempest-TransferEncryptedVolumeTest-server-1955822252',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1955822252',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBPd/5CNJJJVCM7bF71nyziMenMlWa5ulXBeejobfPYAVvOOigTWuMR262ZOPGYLdqIJzc7AMWApUvqaDK/XzxMpH8d3L2DeOAIkexGDCsfnTgIhEJIcaLGeYmajYRiu/w==',key_name='tempest-TransferEncryptedVolumeTest-1560588090',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a893d0c223f746328e706d7491d73b20',ramdisk_id='',reservation_id='r-wgv2eazc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1499588457',owner_user_name='tempest-TransferEncryptedVolumeTest-1499588457-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:28:58Z,user_data=None,user_id='1caa62e7ee8b42be98bc34780a7197f9',uuid=c2b160a2-030e-4625-b36f-060da406de08,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cf68cb85-2ba9-4bcb-acd6-30526d17ca18", "address": "fa:16:3e:dd:1b:59", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf68cb85-2b", "ovs_interfaceid": "cf68cb85-2ba9-4bcb-acd6-30526d17ca18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.186 254904 DEBUG nova.network.os_vif_util [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Converting VIF {"id": "cf68cb85-2ba9-4bcb-acd6-30526d17ca18", "address": "fa:16:3e:dd:1b:59", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf68cb85-2b", "ovs_interfaceid": "cf68cb85-2ba9-4bcb-acd6-30526d17ca18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.187 254904 DEBUG nova.network.os_vif_util [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:1b:59,bridge_name='br-int',has_traffic_filtering=True,id=cf68cb85-2ba9-4bcb-acd6-30526d17ca18,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf68cb85-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.189 254904 DEBUG os_vif [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:1b:59,bridge_name='br-int',has_traffic_filtering=True,id=cf68cb85-2ba9-4bcb-acd6-30526d17ca18,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf68cb85-2b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.190 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.191 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.191 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.195 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.196 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf68cb85-2b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.196 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcf68cb85-2b, col_values=(('external_ids', {'iface-id': 'cf68cb85-2ba9-4bcb-acd6-30526d17ca18', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:1b:59', 'vm-uuid': 'c2b160a2-030e-4625-b36f-060da406de08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:04 np0005542249 NetworkManager[48987]: <info>  [1764674944.2016] manager: (tapcf68cb85-2b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.207 254904 DEBUG nova.network.neutron [req-84a87107-a155-4fe9-aba3-9bbef32ce3af req-2080d91c-3f9c-45cc-a94d-51c7e8404319 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Updated VIF entry in instance network info cache for port cf68cb85-2ba9-4bcb-acd6-30526d17ca18. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.208 254904 DEBUG nova.network.neutron [req-84a87107-a155-4fe9-aba3-9bbef32ce3af req-2080d91c-3f9c-45cc-a94d-51c7e8404319 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Updating instance_info_cache with network_info: [{"id": "cf68cb85-2ba9-4bcb-acd6-30526d17ca18", "address": "fa:16:3e:dd:1b:59", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf68cb85-2b", "ovs_interfaceid": "cf68cb85-2ba9-4bcb-acd6-30526d17ca18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.210 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.211 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.213 254904 INFO os_vif [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:1b:59,bridge_name='br-int',has_traffic_filtering=True,id=cf68cb85-2ba9-4bcb-acd6-30526d17ca18,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf68cb85-2b')#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.231 254904 DEBUG oslo_concurrency.lockutils [req-84a87107-a155-4fe9-aba3-9bbef32ce3af req-2080d91c-3f9c-45cc-a94d-51c7e8404319 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-c2b160a2-030e-4625-b36f-060da406de08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:29:04 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1571: 321 pgs: 321 active+clean; 299 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 9.4 MiB/s wr, 43 op/s
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.282 254904 DEBUG nova.virt.libvirt.driver [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.282 254904 DEBUG nova.virt.libvirt.driver [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.283 254904 DEBUG nova.virt.libvirt.driver [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] No VIF found with MAC fa:16:3e:dd:1b:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.283 254904 INFO nova.virt.libvirt.driver [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Using config drive#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.320 254904 DEBUG nova.storage.rbd_utils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] rbd image c2b160a2-030e-4625-b36f-060da406de08_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.649 254904 INFO nova.virt.libvirt.driver [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Creating config drive at /var/lib/nova/instances/c2b160a2-030e-4625-b36f-060da406de08/disk.config#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.658 254904 DEBUG oslo_concurrency.processutils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c2b160a2-030e-4625-b36f-060da406de08/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzsxhxpk2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.813 254904 DEBUG oslo_concurrency.processutils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c2b160a2-030e-4625-b36f-060da406de08/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzsxhxpk2" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.860 254904 DEBUG nova.storage.rbd_utils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] rbd image c2b160a2-030e-4625-b36f-060da406de08_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:29:04 np0005542249 nova_compute[254900]: 2025-12-02 11:29:04.867 254904 DEBUG oslo_concurrency.processutils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c2b160a2-030e-4625-b36f-060da406de08/disk.config c2b160a2-030e-4625-b36f-060da406de08_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:05 np0005542249 nova_compute[254900]: 2025-12-02 11:29:05.067 254904 DEBUG oslo_concurrency.processutils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c2b160a2-030e-4625-b36f-060da406de08/disk.config c2b160a2-030e-4625-b36f-060da406de08_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:05 np0005542249 nova_compute[254900]: 2025-12-02 11:29:05.069 254904 INFO nova.virt.libvirt.driver [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Deleting local config drive /var/lib/nova/instances/c2b160a2-030e-4625-b36f-060da406de08/disk.config because it was imported into RBD.#033[00m
Dec  2 06:29:05 np0005542249 kernel: tapcf68cb85-2b: entered promiscuous mode
Dec  2 06:29:05 np0005542249 NetworkManager[48987]: <info>  [1764674945.1448] manager: (tapcf68cb85-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/106)
Dec  2 06:29:05 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:05Z|00187|binding|INFO|Claiming lport cf68cb85-2ba9-4bcb-acd6-30526d17ca18 for this chassis.
Dec  2 06:29:05 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:05Z|00188|binding|INFO|cf68cb85-2ba9-4bcb-acd6-30526d17ca18: Claiming fa:16:3e:dd:1b:59 10.100.0.10
Dec  2 06:29:05 np0005542249 nova_compute[254900]: 2025-12-02 11:29:05.147 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.155 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:1b:59 10.100.0.10'], port_security=['fa:16:3e:dd:1b:59 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c2b160a2-030e-4625-b36f-060da406de08', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a893d0c223f746328e706d7491d73b20', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5f414683-a8ce-4d9a-ad73-51d7a84d1e7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9a246c4-d9fe-402e-8fa6-6099b55c4866, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=cf68cb85-2ba9-4bcb-acd6-30526d17ca18) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.156 163757 INFO neutron.agent.ovn.metadata.agent [-] Port cf68cb85-2ba9-4bcb-acd6-30526d17ca18 in datapath 4f9f73cb-9730-4829-ae15-1f03b97e60f8 bound to our chassis#033[00m
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.158 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4f9f73cb-9730-4829-ae15-1f03b97e60f8#033[00m
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.177 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[7fa083ab-b819-41bc-ae08-6bdab8e51d6b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.178 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4f9f73cb-91 in ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.180 262398 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4f9f73cb-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.181 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[4a2eae57-a8f8-4509-8a5d-557b24fa48ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.181 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[f7c42862-4cb3-45ae-8fb7-b233c5e5117e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:05 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:05Z|00189|binding|INFO|Setting lport cf68cb85-2ba9-4bcb-acd6-30526d17ca18 up in Southbound
Dec  2 06:29:05 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:05Z|00190|binding|INFO|Setting lport cf68cb85-2ba9-4bcb-acd6-30526d17ca18 ovn-installed in OVS
Dec  2 06:29:05 np0005542249 nova_compute[254900]: 2025-12-02 11:29:05.186 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:05 np0005542249 nova_compute[254900]: 2025-12-02 11:29:05.189 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:05 np0005542249 systemd-machined[216222]: New machine qemu-20-instance-00000014.
Dec  2 06:29:05 np0005542249 nova_compute[254900]: 2025-12-02 11:29:05.198 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.204 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[93de487e-eb67-4250-bcde-84aab297fac6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:05 np0005542249 systemd[1]: Started Virtual Machine qemu-20-instance-00000014.
Dec  2 06:29:05 np0005542249 systemd-udevd[287137]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.228 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[bb97e975-26e8-4fa7-83a4-572057c1f3be]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:05 np0005542249 NetworkManager[48987]: <info>  [1764674945.2483] device (tapcf68cb85-2b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:29:05 np0005542249 NetworkManager[48987]: <info>  [1764674945.2504] device (tapcf68cb85-2b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.270 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[d6fccf09-5210-4262-b373-c949f0f3d288]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:05 np0005542249 systemd-udevd[287141]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.276 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[10854412-d16b-41d0-a17e-d6ffd9f20edb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:05 np0005542249 NetworkManager[48987]: <info>  [1764674945.2801] manager: (tap4f9f73cb-90): new Veth device (/org/freedesktop/NetworkManager/Devices/107)
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.329 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[17ea9070-7d31-415c-98c9-6035756e4cc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.334 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[35b8bfe2-045e-4a66-b7d8-403a4a2d8796]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:05 np0005542249 NetworkManager[48987]: <info>  [1764674945.3686] device (tap4f9f73cb-90): carrier: link connected
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.377 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[93cf3d71-f49a-4a88-b317-9915ce62cf04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.406 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[66858828-ad58-4937-bab0-4bf6676ab3f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f9f73cb-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:ed:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 513819, 'reachable_time': 25099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287167, 'error': None, 'target': 'ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.430 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[de92e1ea-5032-4d28-b4a2-73d2c9b3ad62]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:edbf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 513819, 'tstamp': 513819}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287168, 'error': None, 'target': 'ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.457 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[a3c51e76-7348-48a4-826d-accf921b604f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f9f73cb-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:ed:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 513819, 'reachable_time': 25099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287169, 'error': None, 'target': 'ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.511 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[dce112f2-13fd-4f24-b069-ddb2f424aa99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.600 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[1398f661-09c7-451d-9ccf-f4470096954b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.602 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f9f73cb-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.602 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.603 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f9f73cb-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:05 np0005542249 nova_compute[254900]: 2025-12-02 11:29:05.606 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:05 np0005542249 NetworkManager[48987]: <info>  [1764674945.6071] manager: (tap4f9f73cb-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Dec  2 06:29:05 np0005542249 kernel: tap4f9f73cb-90: entered promiscuous mode
Dec  2 06:29:05 np0005542249 nova_compute[254900]: 2025-12-02 11:29:05.611 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.612 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4f9f73cb-90, col_values=(('external_ids', {'iface-id': '244504fe-2e21-493b-8e56-0db40be1f53e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:05 np0005542249 nova_compute[254900]: 2025-12-02 11:29:05.613 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:05 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:05Z|00191|binding|INFO|Releasing lport 244504fe-2e21-493b-8e56-0db40be1f53e from this chassis (sb_readonly=0)
Dec  2 06:29:05 np0005542249 nova_compute[254900]: 2025-12-02 11:29:05.635 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:05 np0005542249 nova_compute[254900]: 2025-12-02 11:29:05.635 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.636 163757 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4f9f73cb-9730-4829-ae15-1f03b97e60f8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4f9f73cb-9730-4829-ae15-1f03b97e60f8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.637 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[c9b6c3eb-275e-44e9-81ce-7a6a2fd63490]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.639 163757 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: global
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]:    log         /dev/log local0 debug
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]:    log-tag     haproxy-metadata-proxy-4f9f73cb-9730-4829-ae15-1f03b97e60f8
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]:    user        root
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]:    group       root
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]:    maxconn     1024
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]:    pidfile     /var/lib/neutron/external/pids/4f9f73cb-9730-4829-ae15-1f03b97e60f8.pid.haproxy
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]:    daemon
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: defaults
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]:    log global
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]:    mode http
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]:    option httplog
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]:    option dontlognull
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]:    option http-server-close
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]:    option forwardfor
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]:    retries                 3
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]:    timeout http-request    30s
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]:    timeout connect         30s
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]:    timeout client          32s
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]:    timeout server          32s
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]:    timeout http-keep-alive 30s
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: listen listener
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]:    bind 169.254.169.254:80
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]:    http-request add-header X-OVN-Network-ID 4f9f73cb-9730-4829-ae15-1f03b97e60f8
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 06:29:05 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:05.639 163757 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'env', 'PROCESS_TAG=haproxy-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4f9f73cb-9730-4829-ae15-1f03b97e60f8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 06:29:05 np0005542249 nova_compute[254900]: 2025-12-02 11:29:05.730 254904 DEBUG nova.compute.manager [req-0cbac424-a1cf-4530-b023-c4ed5a627e82 req-351daea6-4ebd-4cb0-9bb5-ae7a9bdf700c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Received event network-vif-plugged-cf68cb85-2ba9-4bcb-acd6-30526d17ca18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:29:05 np0005542249 nova_compute[254900]: 2025-12-02 11:29:05.731 254904 DEBUG oslo_concurrency.lockutils [req-0cbac424-a1cf-4530-b023-c4ed5a627e82 req-351daea6-4ebd-4cb0-9bb5-ae7a9bdf700c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "c2b160a2-030e-4625-b36f-060da406de08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:05 np0005542249 nova_compute[254900]: 2025-12-02 11:29:05.731 254904 DEBUG oslo_concurrency.lockutils [req-0cbac424-a1cf-4530-b023-c4ed5a627e82 req-351daea6-4ebd-4cb0-9bb5-ae7a9bdf700c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "c2b160a2-030e-4625-b36f-060da406de08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:05 np0005542249 nova_compute[254900]: 2025-12-02 11:29:05.731 254904 DEBUG oslo_concurrency.lockutils [req-0cbac424-a1cf-4530-b023-c4ed5a627e82 req-351daea6-4ebd-4cb0-9bb5-ae7a9bdf700c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "c2b160a2-030e-4625-b36f-060da406de08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:05 np0005542249 nova_compute[254900]: 2025-12-02 11:29:05.731 254904 DEBUG nova.compute.manager [req-0cbac424-a1cf-4530-b023-c4ed5a627e82 req-351daea6-4ebd-4cb0-9bb5-ae7a9bdf700c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Processing event network-vif-plugged-cf68cb85-2ba9-4bcb-acd6-30526d17ca18 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 06:29:06 np0005542249 podman[287237]: 2025-12-02 11:29:06.108915904 +0000 UTC m=+0.058192656 container create 02ed0b797476bd4aca8672e02608e10c23947b4df0bc309a8d4e72cd4b0a5fe6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 06:29:06 np0005542249 systemd[1]: Started libpod-conmon-02ed0b797476bd4aca8672e02608e10c23947b4df0bc309a8d4e72cd4b0a5fe6.scope.
Dec  2 06:29:06 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:29:06 np0005542249 podman[287237]: 2025-12-02 11:29:06.083304111 +0000 UTC m=+0.032580883 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:29:06 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb2c4010f5a0388e05b5d91ae6c916c90a0db6aa76c34d681df45b5f9e3d54e1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 06:29:06 np0005542249 podman[287237]: 2025-12-02 11:29:06.195355515 +0000 UTC m=+0.144632287 container init 02ed0b797476bd4aca8672e02608e10c23947b4df0bc309a8d4e72cd4b0a5fe6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:29:06 np0005542249 podman[287237]: 2025-12-02 11:29:06.201143951 +0000 UTC m=+0.150420703 container start 02ed0b797476bd4aca8672e02608e10c23947b4df0bc309a8d4e72cd4b0a5fe6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:29:06 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[287252]: [NOTICE]   (287256) : New worker (287258) forked
Dec  2 06:29:06 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[287252]: [NOTICE]   (287256) : Loading success.
Dec  2 06:29:06 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1572: 321 pgs: 321 active+clean; 299 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 6.4 MiB/s wr, 10 op/s
Dec  2 06:29:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:29:07 np0005542249 nova_compute[254900]: 2025-12-02 11:29:07.961 254904 DEBUG nova.compute.manager [req-eb653d09-4034-40fd-89cc-3005c855eb70 req-276c4402-f681-4dae-a687-eb92502a3d48 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Received event network-vif-plugged-cf68cb85-2ba9-4bcb-acd6-30526d17ca18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:29:07 np0005542249 nova_compute[254900]: 2025-12-02 11:29:07.962 254904 DEBUG oslo_concurrency.lockutils [req-eb653d09-4034-40fd-89cc-3005c855eb70 req-276c4402-f681-4dae-a687-eb92502a3d48 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "c2b160a2-030e-4625-b36f-060da406de08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:07 np0005542249 nova_compute[254900]: 2025-12-02 11:29:07.962 254904 DEBUG oslo_concurrency.lockutils [req-eb653d09-4034-40fd-89cc-3005c855eb70 req-276c4402-f681-4dae-a687-eb92502a3d48 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "c2b160a2-030e-4625-b36f-060da406de08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:07 np0005542249 nova_compute[254900]: 2025-12-02 11:29:07.963 254904 DEBUG oslo_concurrency.lockutils [req-eb653d09-4034-40fd-89cc-3005c855eb70 req-276c4402-f681-4dae-a687-eb92502a3d48 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "c2b160a2-030e-4625-b36f-060da406de08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:07 np0005542249 nova_compute[254900]: 2025-12-02 11:29:07.963 254904 DEBUG nova.compute.manager [req-eb653d09-4034-40fd-89cc-3005c855eb70 req-276c4402-f681-4dae-a687-eb92502a3d48 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] No waiting events found dispatching network-vif-plugged-cf68cb85-2ba9-4bcb-acd6-30526d17ca18 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:29:07 np0005542249 nova_compute[254900]: 2025-12-02 11:29:07.963 254904 WARNING nova.compute.manager [req-eb653d09-4034-40fd-89cc-3005c855eb70 req-276c4402-f681-4dae-a687-eb92502a3d48 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Received unexpected event network-vif-plugged-cf68cb85-2ba9-4bcb-acd6-30526d17ca18 for instance with vm_state building and task_state spawning.#033[00m
Dec  2 06:29:08 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1573: 321 pgs: 321 active+clean; 299 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.9 MiB/s wr, 17 op/s
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.320 254904 DEBUG nova.compute.manager [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.321 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674948.321031, c2b160a2-030e-4625-b36f-060da406de08 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.322 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c2b160a2-030e-4625-b36f-060da406de08] VM Started (Lifecycle Event)#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.325 254904 DEBUG nova.virt.libvirt.driver [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.328 254904 INFO nova.virt.libvirt.driver [-] [instance: c2b160a2-030e-4625-b36f-060da406de08] Instance spawned successfully.#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.328 254904 DEBUG nova.virt.libvirt.driver [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.351 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c2b160a2-030e-4625-b36f-060da406de08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.360 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c2b160a2-030e-4625-b36f-060da406de08] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.365 254904 DEBUG nova.virt.libvirt.driver [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.366 254904 DEBUG nova.virt.libvirt.driver [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.367 254904 DEBUG nova.virt.libvirt.driver [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.368 254904 DEBUG nova.virt.libvirt.driver [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.368 254904 DEBUG nova.virt.libvirt.driver [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.369 254904 DEBUG nova.virt.libvirt.driver [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.409 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c2b160a2-030e-4625-b36f-060da406de08] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.410 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674948.3213897, c2b160a2-030e-4625-b36f-060da406de08 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.410 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c2b160a2-030e-4625-b36f-060da406de08] VM Paused (Lifecycle Event)#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.442 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c2b160a2-030e-4625-b36f-060da406de08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.447 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674948.3246336, c2b160a2-030e-4625-b36f-060da406de08 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.448 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c2b160a2-030e-4625-b36f-060da406de08] VM Resumed (Lifecycle Event)#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.456 254904 INFO nova.compute.manager [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Took 7.77 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.457 254904 DEBUG nova.compute.manager [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.470 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c2b160a2-030e-4625-b36f-060da406de08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.474 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c2b160a2-030e-4625-b36f-060da406de08] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.503 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c2b160a2-030e-4625-b36f-060da406de08] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.540 254904 INFO nova.compute.manager [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Took 10.45 seconds to build instance.#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.557 254904 DEBUG oslo_concurrency.lockutils [None req-0f04c7db-f507-442b-a9d5-64eecb12551d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "c2b160a2-030e-4625-b36f-060da406de08" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.538s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:08 np0005542249 nova_compute[254900]: 2025-12-02 11:29:08.871 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:09 np0005542249 nova_compute[254900]: 2025-12-02 11:29:09.198 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:10 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1574: 321 pgs: 321 active+clean; 299 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 255 KiB/s rd, 19 KiB/s wr, 20 op/s
Dec  2 06:29:10 np0005542249 nova_compute[254900]: 2025-12-02 11:29:10.772 254904 DEBUG oslo_concurrency.lockutils [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "fabff5b9-969a-4502-91d0-6adacfa56156" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:10 np0005542249 nova_compute[254900]: 2025-12-02 11:29:10.774 254904 DEBUG oslo_concurrency.lockutils [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "fabff5b9-969a-4502-91d0-6adacfa56156" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:10 np0005542249 nova_compute[254900]: 2025-12-02 11:29:10.775 254904 DEBUG oslo_concurrency.lockutils [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "fabff5b9-969a-4502-91d0-6adacfa56156-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:10 np0005542249 nova_compute[254900]: 2025-12-02 11:29:10.776 254904 DEBUG oslo_concurrency.lockutils [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "fabff5b9-969a-4502-91d0-6adacfa56156-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:10 np0005542249 nova_compute[254900]: 2025-12-02 11:29:10.776 254904 DEBUG oslo_concurrency.lockutils [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "fabff5b9-969a-4502-91d0-6adacfa56156-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:10 np0005542249 nova_compute[254900]: 2025-12-02 11:29:10.779 254904 INFO nova.compute.manager [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Terminating instance#033[00m
Dec  2 06:29:10 np0005542249 nova_compute[254900]: 2025-12-02 11:29:10.782 254904 DEBUG nova.compute.manager [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:29:10 np0005542249 kernel: tapa1e6fdfb-25 (unregistering): left promiscuous mode
Dec  2 06:29:10 np0005542249 NetworkManager[48987]: <info>  [1764674950.8718] device (tapa1e6fdfb-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:29:10 np0005542249 nova_compute[254900]: 2025-12-02 11:29:10.882 254904 DEBUG oslo_concurrency.lockutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Acquiring lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:10 np0005542249 nova_compute[254900]: 2025-12-02 11:29:10.883 254904 DEBUG oslo_concurrency.lockutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:10 np0005542249 nova_compute[254900]: 2025-12-02 11:29:10.899 254904 DEBUG nova.compute.manager [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 06:29:10 np0005542249 nova_compute[254900]: 2025-12-02 11:29:10.915 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:10 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:10Z|00192|binding|INFO|Releasing lport a1e6fdfb-25f9-43b9-997f-77ee16acf923 from this chassis (sb_readonly=0)
Dec  2 06:29:10 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:10Z|00193|binding|INFO|Setting lport a1e6fdfb-25f9-43b9-997f-77ee16acf923 down in Southbound
Dec  2 06:29:10 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:10Z|00194|binding|INFO|Removing iface tapa1e6fdfb-25 ovn-installed in OVS
Dec  2 06:29:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:10.924 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:07:e1 10.100.0.11'], port_security=['fa:16:3e:d6:07:e1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'fabff5b9-969a-4502-91d0-6adacfa56156', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '625a6939c31646a4a83ea851774cf28c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a2145909-363d-45af-b384-4ef6e931b803', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.226'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cc823ef0-3e69-4062-a488-a82f483f10bb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=a1e6fdfb-25f9-43b9-997f-77ee16acf923) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:29:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:10.926 163757 INFO neutron.agent.ovn.metadata.agent [-] Port a1e6fdfb-25f9-43b9-997f-77ee16acf923 in datapath acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 unbound from our chassis#033[00m
Dec  2 06:29:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:10.928 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754#033[00m
Dec  2 06:29:10 np0005542249 nova_compute[254900]: 2025-12-02 11:29:10.922 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:10 np0005542249 nova_compute[254900]: 2025-12-02 11:29:10.947 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:10.968 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[26012320-ee5d-40a0-b7b0-fc080767d21a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:10 np0005542249 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Deactivated successfully.
Dec  2 06:29:10 np0005542249 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Consumed 15.552s CPU time.
Dec  2 06:29:10 np0005542249 systemd-machined[216222]: Machine qemu-19-instance-00000013 terminated.
Dec  2 06:29:10 np0005542249 nova_compute[254900]: 2025-12-02 11:29:10.992 254904 DEBUG oslo_concurrency.lockutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:10 np0005542249 nova_compute[254900]: 2025-12-02 11:29:10.993 254904 DEBUG oslo_concurrency.lockutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.007 254904 DEBUG nova.virt.hardware [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.008 254904 INFO nova.compute.claims [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 06:29:11 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:11.011 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[8934d0bb-3354-4284-bd05-a11b96a8de15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:11 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:11.015 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1fc5c4-5d48-4a9e-b127-66370c62f4cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.030 254904 INFO nova.virt.libvirt.driver [-] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Instance destroyed successfully.#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.031 254904 DEBUG nova.objects.instance [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lazy-loading 'resources' on Instance uuid fabff5b9-969a-4502-91d0-6adacfa56156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.049 254904 DEBUG nova.virt.libvirt.vif [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:28:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-2136046997',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-2136046997',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-2136046997',id=19,image_ref='a9e4f7b6-847e-46ce-bfbd-ed73825e90e7',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGq4u+zRVpMlSzv0GDh/hXtBiwrSpv2j/V7PltvrfOY2RYVPhY+lN2UU9H4FENMDN9eQ3hIm2t0KuoZAjx/ZJZ14cN3dZpLgEgF269wySXXtGqVNOYY82t2wxmzUz11CuQ==',key_name='tempest-keypair-722203544',keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:28:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='625a6939c31646a4a83ea851774cf28c',ramdisk_id='',reservation_id='r-vv379f7b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1396850361',image_owner_user_name='tempest-TestVolumeBootPattern-1396850361-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1396850361',owner_user_name='tempest-TestVolumeBootPattern-1396850361-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:28:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ccb73a613554d938221b4bf46d7ae83',uuid=fabff5b9-969a-4502-91d0-6adacfa56156,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": 
"a1e6fdfb-25f9-43b9-997f-77ee16acf923", "address": "fa:16:3e:d6:07:e1", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1e6fdfb-25", "ovs_interfaceid": "a1e6fdfb-25f9-43b9-997f-77ee16acf923", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.050 254904 DEBUG nova.network.os_vif_util [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converting VIF {"id": "a1e6fdfb-25f9-43b9-997f-77ee16acf923", "address": "fa:16:3e:d6:07:e1", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1e6fdfb-25", "ovs_interfaceid": "a1e6fdfb-25f9-43b9-997f-77ee16acf923", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.051 254904 DEBUG nova.network.os_vif_util [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d6:07:e1,bridge_name='br-int',has_traffic_filtering=True,id=a1e6fdfb-25f9-43b9-997f-77ee16acf923,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1e6fdfb-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:29:11 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:11.050 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[37086ba3-9f73-4a0c-97ae-b38a08778382]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.052 254904 DEBUG os_vif [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:07:e1,bridge_name='br-int',has_traffic_filtering=True,id=a1e6fdfb-25f9-43b9-997f-77ee16acf923,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1e6fdfb-25') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.054 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.054 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1e6fdfb-25, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.056 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.060 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.062 254904 INFO os_vif [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:07:e1,bridge_name='br-int',has_traffic_filtering=True,id=a1e6fdfb-25f9-43b9-997f-77ee16acf923,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1e6fdfb-25')#033[00m
Dec  2 06:29:11 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:11.088 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[0e2d7e86-0ecc-40ed-943c-2caf6d855983]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacfaa8ac-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:73:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504911, 'reachable_time': 33921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287295, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:11 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:11.109 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[af665409-a45e-49d4-9cdc-1482181f3d3e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapacfaa8ac-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504932, 'tstamp': 504932}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287311, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapacfaa8ac-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504936, 'tstamp': 504936}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287311, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:11 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:11.112 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacfaa8ac-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.117 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:11 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:11.118 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapacfaa8ac-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:11 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:11.118 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:29:11 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:11.119 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapacfaa8ac-00, col_values=(('external_ids', {'iface-id': '1636ad30-406d-4138-823e-abbe7f4d87ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:11 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:11.120 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.200 254904 DEBUG oslo_concurrency.processutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.290 254904 INFO nova.virt.libvirt.driver [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Deleting instance files /var/lib/nova/instances/fabff5b9-969a-4502-91d0-6adacfa56156_del#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.292 254904 INFO nova.virt.libvirt.driver [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Deletion of /var/lib/nova/instances/fabff5b9-969a-4502-91d0-6adacfa56156_del complete#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.372 254904 INFO nova.compute.manager [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Took 0.59 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.374 254904 DEBUG oslo.service.loopingcall [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.375 254904 DEBUG nova.compute.manager [-] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.376 254904 DEBUG nova.network.neutron [-] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:29:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:29:11 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1188722932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.694 254904 DEBUG nova.compute.manager [req-f8ff7012-4e7b-4b3a-af31-c2e564650d6c req-72627db7-da1b-4bc6-be83-8e882b69888f 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Received event network-vif-unplugged-a1e6fdfb-25f9-43b9-997f-77ee16acf923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.695 254904 DEBUG oslo_concurrency.lockutils [req-f8ff7012-4e7b-4b3a-af31-c2e564650d6c req-72627db7-da1b-4bc6-be83-8e882b69888f 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "fabff5b9-969a-4502-91d0-6adacfa56156-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.696 254904 DEBUG oslo_concurrency.lockutils [req-f8ff7012-4e7b-4b3a-af31-c2e564650d6c req-72627db7-da1b-4bc6-be83-8e882b69888f 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "fabff5b9-969a-4502-91d0-6adacfa56156-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.696 254904 DEBUG oslo_concurrency.lockutils [req-f8ff7012-4e7b-4b3a-af31-c2e564650d6c req-72627db7-da1b-4bc6-be83-8e882b69888f 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "fabff5b9-969a-4502-91d0-6adacfa56156-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.697 254904 DEBUG nova.compute.manager [req-f8ff7012-4e7b-4b3a-af31-c2e564650d6c req-72627db7-da1b-4bc6-be83-8e882b69888f 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] No waiting events found dispatching network-vif-unplugged-a1e6fdfb-25f9-43b9-997f-77ee16acf923 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.698 254904 DEBUG nova.compute.manager [req-f8ff7012-4e7b-4b3a-af31-c2e564650d6c req-72627db7-da1b-4bc6-be83-8e882b69888f 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Received event network-vif-unplugged-a1e6fdfb-25f9-43b9-997f-77ee16acf923 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.702 254904 DEBUG oslo_concurrency.processutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.711 254904 DEBUG nova.compute.provider_tree [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.731 254904 DEBUG nova.scheduler.client.report [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.756 254904 DEBUG oslo_concurrency.lockutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.757 254904 DEBUG nova.compute.manager [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.815 254904 DEBUG nova.compute.manager [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.817 254904 DEBUG nova.network.neutron [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.839 254904 INFO nova.virt.libvirt.driver [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.869 254904 DEBUG nova.compute.manager [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.963 254904 DEBUG nova.compute.manager [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.965 254904 DEBUG nova.virt.libvirt.driver [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.966 254904 INFO nova.virt.libvirt.driver [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Creating image(s)#033[00m
Dec  2 06:29:11 np0005542249 nova_compute[254900]: 2025-12-02 11:29:11.994 254904 DEBUG nova.storage.rbd_utils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] rbd image a8aab2b3-e5a2-451d-b77a-9d977f1dd00f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:29:12 np0005542249 nova_compute[254900]: 2025-12-02 11:29:12.019 254904 DEBUG nova.storage.rbd_utils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] rbd image a8aab2b3-e5a2-451d-b77a-9d977f1dd00f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:29:12 np0005542249 nova_compute[254900]: 2025-12-02 11:29:12.043 254904 DEBUG nova.storage.rbd_utils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] rbd image a8aab2b3-e5a2-451d-b77a-9d977f1dd00f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:29:12 np0005542249 nova_compute[254900]: 2025-12-02 11:29:12.048 254904 DEBUG oslo_concurrency.processutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:12 np0005542249 nova_compute[254900]: 2025-12-02 11:29:12.072 254904 DEBUG nova.policy [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4326297d589c4e5cafa95e1e95585b57', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bb74d6d8597c490e967d98a6a783175e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 06:29:12 np0005542249 nova_compute[254900]: 2025-12-02 11:29:12.110 254904 DEBUG oslo_concurrency.processutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:12 np0005542249 nova_compute[254900]: 2025-12-02 11:29:12.111 254904 DEBUG oslo_concurrency.lockutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Acquiring lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:12 np0005542249 nova_compute[254900]: 2025-12-02 11:29:12.112 254904 DEBUG oslo_concurrency.lockutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:12 np0005542249 nova_compute[254900]: 2025-12-02 11:29:12.112 254904 DEBUG oslo_concurrency.lockutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:12 np0005542249 nova_compute[254900]: 2025-12-02 11:29:12.136 254904 DEBUG nova.storage.rbd_utils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] rbd image a8aab2b3-e5a2-451d-b77a-9d977f1dd00f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:29:12 np0005542249 nova_compute[254900]: 2025-12-02 11:29:12.142 254904 DEBUG oslo_concurrency.processutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 a8aab2b3-e5a2-451d-b77a-9d977f1dd00f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:12 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1575: 321 pgs: 321 active+clean; 299 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 581 KiB/s rd, 20 KiB/s wr, 41 op/s
Dec  2 06:29:12 np0005542249 nova_compute[254900]: 2025-12-02 11:29:12.425 254904 DEBUG oslo_concurrency.processutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 a8aab2b3-e5a2-451d-b77a-9d977f1dd00f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:12 np0005542249 nova_compute[254900]: 2025-12-02 11:29:12.483 254904 DEBUG nova.storage.rbd_utils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] resizing rbd image a8aab2b3-e5a2-451d-b77a-9d977f1dd00f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  2 06:29:12 np0005542249 nova_compute[254900]: 2025-12-02 11:29:12.576 254904 DEBUG nova.objects.instance [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lazy-loading 'migration_context' on Instance uuid a8aab2b3-e5a2-451d-b77a-9d977f1dd00f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:29:12 np0005542249 nova_compute[254900]: 2025-12-02 11:29:12.670 254904 DEBUG nova.virt.libvirt.driver [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  2 06:29:12 np0005542249 nova_compute[254900]: 2025-12-02 11:29:12.671 254904 DEBUG nova.virt.libvirt.driver [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Ensure instance console log exists: /var/lib/nova/instances/a8aab2b3-e5a2-451d-b77a-9d977f1dd00f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 06:29:12 np0005542249 nova_compute[254900]: 2025-12-02 11:29:12.672 254904 DEBUG oslo_concurrency.lockutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:12 np0005542249 nova_compute[254900]: 2025-12-02 11:29:12.672 254904 DEBUG oslo_concurrency.lockutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:12 np0005542249 nova_compute[254900]: 2025-12-02 11:29:12.673 254904 DEBUG oslo_concurrency.lockutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:29:13 np0005542249 nova_compute[254900]: 2025-12-02 11:29:13.670 254904 DEBUG nova.network.neutron [-] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:29:13 np0005542249 nova_compute[254900]: 2025-12-02 11:29:13.695 254904 INFO nova.compute.manager [-] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Took 2.32 seconds to deallocate network for instance.#033[00m
Dec  2 06:29:13 np0005542249 nova_compute[254900]: 2025-12-02 11:29:13.755 254904 DEBUG nova.compute.manager [req-c97a936c-0b94-46be-83d7-eb36a943fdca req-55c43b37-80f5-4a77-9809-904c9723fc4e 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Received event network-vif-deleted-a1e6fdfb-25f9-43b9-997f-77ee16acf923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:29:13 np0005542249 nova_compute[254900]: 2025-12-02 11:29:13.862 254904 DEBUG nova.compute.manager [req-6de21e6b-0e45-47a0-8ee8-657ac53d01b6 req-43aaaccd-c47f-4756-a55c-441633474285 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Received event network-vif-plugged-a1e6fdfb-25f9-43b9-997f-77ee16acf923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:29:13 np0005542249 nova_compute[254900]: 2025-12-02 11:29:13.863 254904 DEBUG oslo_concurrency.lockutils [req-6de21e6b-0e45-47a0-8ee8-657ac53d01b6 req-43aaaccd-c47f-4756-a55c-441633474285 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "fabff5b9-969a-4502-91d0-6adacfa56156-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:13 np0005542249 nova_compute[254900]: 2025-12-02 11:29:13.864 254904 DEBUG oslo_concurrency.lockutils [req-6de21e6b-0e45-47a0-8ee8-657ac53d01b6 req-43aaaccd-c47f-4756-a55c-441633474285 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "fabff5b9-969a-4502-91d0-6adacfa56156-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:13 np0005542249 nova_compute[254900]: 2025-12-02 11:29:13.865 254904 DEBUG oslo_concurrency.lockutils [req-6de21e6b-0e45-47a0-8ee8-657ac53d01b6 req-43aaaccd-c47f-4756-a55c-441633474285 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "fabff5b9-969a-4502-91d0-6adacfa56156-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:13 np0005542249 nova_compute[254900]: 2025-12-02 11:29:13.865 254904 DEBUG nova.compute.manager [req-6de21e6b-0e45-47a0-8ee8-657ac53d01b6 req-43aaaccd-c47f-4756-a55c-441633474285 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] No waiting events found dispatching network-vif-plugged-a1e6fdfb-25f9-43b9-997f-77ee16acf923 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:29:13 np0005542249 nova_compute[254900]: 2025-12-02 11:29:13.866 254904 WARNING nova.compute.manager [req-6de21e6b-0e45-47a0-8ee8-657ac53d01b6 req-43aaaccd-c47f-4756-a55c-441633474285 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Received unexpected event network-vif-plugged-a1e6fdfb-25f9-43b9-997f-77ee16acf923 for instance with vm_state active and task_state deleting.#033[00m
Dec  2 06:29:13 np0005542249 nova_compute[254900]: 2025-12-02 11:29:13.876 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:13 np0005542249 nova_compute[254900]: 2025-12-02 11:29:13.927 254904 INFO nova.compute.manager [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Took 0.23 seconds to detach 1 volumes for instance.#033[00m
Dec  2 06:29:13 np0005542249 nova_compute[254900]: 2025-12-02 11:29:13.930 254904 DEBUG nova.compute.manager [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Deleting volume: 871b1127-6cb5-4828-aa75-235b62061769 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Dec  2 06:29:14 np0005542249 nova_compute[254900]: 2025-12-02 11:29:14.191 254904 DEBUG nova.network.neutron [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Successfully created port: 1b9be7b0-eed1-4455-a1fa-5418c0d54261 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 06:29:14 np0005542249 nova_compute[254900]: 2025-12-02 11:29:14.198 254904 DEBUG oslo_concurrency.lockutils [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:14 np0005542249 nova_compute[254900]: 2025-12-02 11:29:14.198 254904 DEBUG oslo_concurrency.lockutils [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:14 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1576: 321 pgs: 321 active+clean; 315 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 474 KiB/s wr, 89 op/s
Dec  2 06:29:14 np0005542249 nova_compute[254900]: 2025-12-02 11:29:14.323 254904 DEBUG oslo_concurrency.processutils [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:29:14 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3409342210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:29:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:29:14 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1701887528' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:29:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:29:14 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1701887528' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:29:14 np0005542249 nova_compute[254900]: 2025-12-02 11:29:14.788 254904 DEBUG oslo_concurrency.processutils [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:14 np0005542249 nova_compute[254900]: 2025-12-02 11:29:14.797 254904 DEBUG nova.compute.provider_tree [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:29:14 np0005542249 nova_compute[254900]: 2025-12-02 11:29:14.815 254904 DEBUG nova.scheduler.client.report [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:29:14 np0005542249 nova_compute[254900]: 2025-12-02 11:29:14.843 254904 DEBUG oslo_concurrency.lockutils [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:14 np0005542249 nova_compute[254900]: 2025-12-02 11:29:14.877 254904 INFO nova.scheduler.client.report [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Deleted allocations for instance fabff5b9-969a-4502-91d0-6adacfa56156#033[00m
Dec  2 06:29:14 np0005542249 nova_compute[254900]: 2025-12-02 11:29:14.964 254904 DEBUG oslo_concurrency.lockutils [None req-583ae5c2-66d3-494e-af3d-3c1172da4313 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "fabff5b9-969a-4502-91d0-6adacfa56156" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:15 np0005542249 podman[287526]: 2025-12-02 11:29:15.017025075 +0000 UTC m=+0.100696297 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  2 06:29:15 np0005542249 nova_compute[254900]: 2025-12-02 11:29:15.709 254904 DEBUG nova.network.neutron [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Successfully updated port: 1b9be7b0-eed1-4455-a1fa-5418c0d54261 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 06:29:15 np0005542249 nova_compute[254900]: 2025-12-02 11:29:15.729 254904 DEBUG oslo_concurrency.lockutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Acquiring lock "refresh_cache-a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:29:15 np0005542249 nova_compute[254900]: 2025-12-02 11:29:15.730 254904 DEBUG oslo_concurrency.lockutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Acquired lock "refresh_cache-a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:29:15 np0005542249 nova_compute[254900]: 2025-12-02 11:29:15.730 254904 DEBUG nova.network.neutron [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 06:29:15 np0005542249 nova_compute[254900]: 2025-12-02 11:29:15.836 254904 DEBUG nova.compute.manager [req-a1eff955-33de-4d25-9915-246aeef93217 req-fdd76503-dfe2-4dcd-aa5f-342aee621226 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Received event network-changed-1b9be7b0-eed1-4455-a1fa-5418c0d54261 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:29:15 np0005542249 nova_compute[254900]: 2025-12-02 11:29:15.838 254904 DEBUG nova.compute.manager [req-a1eff955-33de-4d25-9915-246aeef93217 req-fdd76503-dfe2-4dcd-aa5f-342aee621226 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Refreshing instance network info cache due to event network-changed-1b9be7b0-eed1-4455-a1fa-5418c0d54261. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:29:15 np0005542249 nova_compute[254900]: 2025-12-02 11:29:15.838 254904 DEBUG oslo_concurrency.lockutils [req-a1eff955-33de-4d25-9915-246aeef93217 req-fdd76503-dfe2-4dcd-aa5f-342aee621226 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:29:15 np0005542249 nova_compute[254900]: 2025-12-02 11:29:15.864 254904 DEBUG nova.network.neutron [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 06:29:15 np0005542249 nova_compute[254900]: 2025-12-02 11:29:15.970 254904 DEBUG nova.compute.manager [req-59ebc365-73bf-4e99-8f69-78f27b6e2cd8 req-1f92d028-7424-4d61-8a84-fff934e495a9 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Received event network-changed-cf68cb85-2ba9-4bcb-acd6-30526d17ca18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:29:15 np0005542249 nova_compute[254900]: 2025-12-02 11:29:15.971 254904 DEBUG nova.compute.manager [req-59ebc365-73bf-4e99-8f69-78f27b6e2cd8 req-1f92d028-7424-4d61-8a84-fff934e495a9 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Refreshing instance network info cache due to event network-changed-cf68cb85-2ba9-4bcb-acd6-30526d17ca18. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:29:15 np0005542249 nova_compute[254900]: 2025-12-02 11:29:15.972 254904 DEBUG oslo_concurrency.lockutils [req-59ebc365-73bf-4e99-8f69-78f27b6e2cd8 req-1f92d028-7424-4d61-8a84-fff934e495a9 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-c2b160a2-030e-4625-b36f-060da406de08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:29:15 np0005542249 nova_compute[254900]: 2025-12-02 11:29:15.972 254904 DEBUG oslo_concurrency.lockutils [req-59ebc365-73bf-4e99-8f69-78f27b6e2cd8 req-1f92d028-7424-4d61-8a84-fff934e495a9 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-c2b160a2-030e-4625-b36f-060da406de08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:29:15 np0005542249 nova_compute[254900]: 2025-12-02 11:29:15.973 254904 DEBUG nova.network.neutron [req-59ebc365-73bf-4e99-8f69-78f27b6e2cd8 req-1f92d028-7424-4d61-8a84-fff934e495a9 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Refreshing network info cache for port cf68cb85-2ba9-4bcb-acd6-30526d17ca18 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:29:16 np0005542249 nova_compute[254900]: 2025-12-02 11:29:16.058 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:16 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1577: 321 pgs: 321 active+clean; 330 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 115 op/s
Dec  2 06:29:16 np0005542249 nova_compute[254900]: 2025-12-02 11:29:16.963 254904 DEBUG nova.network.neutron [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Updating instance_info_cache with network_info: [{"id": "1b9be7b0-eed1-4455-a1fa-5418c0d54261", "address": "fa:16:3e:c4:74:f9", "network": {"id": "c9e99d38-205a-48b3-a3a5-ce9f2004f29f", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1523779374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb74d6d8597c490e967d98a6a783175e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b9be7b0-ee", "ovs_interfaceid": "1b9be7b0-eed1-4455-a1fa-5418c0d54261", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:29:16 np0005542249 nova_compute[254900]: 2025-12-02 11:29:16.981 254904 DEBUG oslo_concurrency.lockutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Releasing lock "refresh_cache-a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:29:16 np0005542249 nova_compute[254900]: 2025-12-02 11:29:16.982 254904 DEBUG nova.compute.manager [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Instance network_info: |[{"id": "1b9be7b0-eed1-4455-a1fa-5418c0d54261", "address": "fa:16:3e:c4:74:f9", "network": {"id": "c9e99d38-205a-48b3-a3a5-ce9f2004f29f", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1523779374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb74d6d8597c490e967d98a6a783175e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b9be7b0-ee", "ovs_interfaceid": "1b9be7b0-eed1-4455-a1fa-5418c0d54261", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 06:29:16 np0005542249 nova_compute[254900]: 2025-12-02 11:29:16.982 254904 DEBUG oslo_concurrency.lockutils [req-a1eff955-33de-4d25-9915-246aeef93217 req-fdd76503-dfe2-4dcd-aa5f-342aee621226 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:29:16 np0005542249 nova_compute[254900]: 2025-12-02 11:29:16.983 254904 DEBUG nova.network.neutron [req-a1eff955-33de-4d25-9915-246aeef93217 req-fdd76503-dfe2-4dcd-aa5f-342aee621226 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Refreshing network info cache for port 1b9be7b0-eed1-4455-a1fa-5418c0d54261 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:29:16 np0005542249 nova_compute[254900]: 2025-12-02 11:29:16.985 254904 DEBUG nova.virt.libvirt.driver [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Start _get_guest_xml network_info=[{"id": "1b9be7b0-eed1-4455-a1fa-5418c0d54261", "address": "fa:16:3e:c4:74:f9", "network": {"id": "c9e99d38-205a-48b3-a3a5-ce9f2004f29f", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1523779374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb74d6d8597c490e967d98a6a783175e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b9be7b0-ee", "ovs_interfaceid": "1b9be7b0-eed1-4455-a1fa-5418c0d54261", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'image_id': '5a40f66c-ab43-47dd-9880-e59f9fa2c60e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:29:16 np0005542249 nova_compute[254900]: 2025-12-02 11:29:16.989 254904 WARNING nova.virt.libvirt.driver [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:29:16 np0005542249 nova_compute[254900]: 2025-12-02 11:29:16.994 254904 DEBUG nova.virt.libvirt.host [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:29:16 np0005542249 nova_compute[254900]: 2025-12-02 11:29:16.995 254904 DEBUG nova.virt.libvirt.host [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.000 254904 DEBUG nova.virt.libvirt.host [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.000 254904 DEBUG nova.virt.libvirt.host [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.001 254904 DEBUG nova.virt.libvirt.driver [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.001 254904 DEBUG nova.virt.hardware [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.002 254904 DEBUG nova.virt.hardware [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.002 254904 DEBUG nova.virt.hardware [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.002 254904 DEBUG nova.virt.hardware [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.002 254904 DEBUG nova.virt.hardware [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.003 254904 DEBUG nova.virt.hardware [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.003 254904 DEBUG nova.virt.hardware [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.003 254904 DEBUG nova.virt.hardware [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.004 254904 DEBUG nova.virt.hardware [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.004 254904 DEBUG nova.virt.hardware [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.004 254904 DEBUG nova.virt.hardware [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.007 254904 DEBUG oslo_concurrency.processutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:29:17 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1252557180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:29:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e392 do_prune osdmap full prune enabled
Dec  2 06:29:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e393 e393: 3 total, 3 up, 3 in
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.407 254904 DEBUG oslo_concurrency.processutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:17 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e393: 3 total, 3 up, 3 in
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.447 254904 DEBUG nova.storage.rbd_utils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] rbd image a8aab2b3-e5a2-451d-b77a-9d977f1dd00f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.452 254904 DEBUG oslo_concurrency.processutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.696 254904 DEBUG nova.network.neutron [req-59ebc365-73bf-4e99-8f69-78f27b6e2cd8 req-1f92d028-7424-4d61-8a84-fff934e495a9 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Updated VIF entry in instance network info cache for port cf68cb85-2ba9-4bcb-acd6-30526d17ca18. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.697 254904 DEBUG nova.network.neutron [req-59ebc365-73bf-4e99-8f69-78f27b6e2cd8 req-1f92d028-7424-4d61-8a84-fff934e495a9 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Updating instance_info_cache with network_info: [{"id": "cf68cb85-2ba9-4bcb-acd6-30526d17ca18", "address": "fa:16:3e:dd:1b:59", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf68cb85-2b", "ovs_interfaceid": "cf68cb85-2ba9-4bcb-acd6-30526d17ca18", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.725 254904 DEBUG oslo_concurrency.lockutils [req-59ebc365-73bf-4e99-8f69-78f27b6e2cd8 req-1f92d028-7424-4d61-8a84-fff934e495a9 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-c2b160a2-030e-4625-b36f-060da406de08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:29:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:29:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:29:17 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2695048076' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.902 254904 DEBUG oslo_concurrency.processutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.906 254904 DEBUG nova.virt.libvirt.vif [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:29:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-126628184',display_name='tempest-SnapshotDataIntegrityTests-server-126628184',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-126628184',id=21,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA1dSphAQZ2jfKBzlGAu9edcO3UD9dt2xlesom22AZBB6h7sYaoLNyRUI6o7THwlIVSzyFUotzDDcp2T+yKMByLGwVw02W9JR0EYyH59E+9V8vLLDXvI1oTHk5o/kZ3q0g==',key_name='tempest-keypair-927487953',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bb74d6d8597c490e967d98a6a783175e',ramdisk_id='',reservation_id='r-ip7p9r8g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SnapshotDataIntegrityTests-359161340',owner_user_name='tempest-SnapshotDataIntegrityTests-359161340-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:29:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4326297d589c4e5cafa95e1e95585b57',uuid=a8aab2b3-e5a2-451d-b77a-9d977f1dd00f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b9be7b0-eed1-4455-a1fa-5418c0d54261", "address": "fa:16:3e:c4:74:f9", "network": {"id": "c9e99d38-205a-48b3-a3a5-ce9f2004f29f", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1523779374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb74d6d8597c490e967d98a6a783175e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b9be7b0-ee", "ovs_interfaceid": "1b9be7b0-eed1-4455-a1fa-5418c0d54261", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.907 254904 DEBUG nova.network.os_vif_util [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Converting VIF {"id": "1b9be7b0-eed1-4455-a1fa-5418c0d54261", "address": "fa:16:3e:c4:74:f9", "network": {"id": "c9e99d38-205a-48b3-a3a5-ce9f2004f29f", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1523779374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb74d6d8597c490e967d98a6a783175e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b9be7b0-ee", "ovs_interfaceid": "1b9be7b0-eed1-4455-a1fa-5418c0d54261", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.909 254904 DEBUG nova.network.os_vif_util [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:74:f9,bridge_name='br-int',has_traffic_filtering=True,id=1b9be7b0-eed1-4455-a1fa-5418c0d54261,network=Network(c9e99d38-205a-48b3-a3a5-ce9f2004f29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b9be7b0-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.912 254904 DEBUG nova.objects.instance [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lazy-loading 'pci_devices' on Instance uuid a8aab2b3-e5a2-451d-b77a-9d977f1dd00f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.934 254904 DEBUG nova.virt.libvirt.driver [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:29:17 np0005542249 nova_compute[254900]:  <uuid>a8aab2b3-e5a2-451d-b77a-9d977f1dd00f</uuid>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:  <name>instance-00000015</name>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <nova:name>tempest-SnapshotDataIntegrityTests-server-126628184</nova:name>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:29:16</nova:creationTime>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:29:17 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:        <nova:user uuid="4326297d589c4e5cafa95e1e95585b57">tempest-SnapshotDataIntegrityTests-359161340-project-member</nova:user>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:        <nova:project uuid="bb74d6d8597c490e967d98a6a783175e">tempest-SnapshotDataIntegrityTests-359161340</nova:project>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <nova:root type="image" uuid="5a40f66c-ab43-47dd-9880-e59f9fa2c60e"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:        <nova:port uuid="1b9be7b0-eed1-4455-a1fa-5418c0d54261">
Dec  2 06:29:17 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <entry name="serial">a8aab2b3-e5a2-451d-b77a-9d977f1dd00f</entry>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <entry name="uuid">a8aab2b3-e5a2-451d-b77a-9d977f1dd00f</entry>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/a8aab2b3-e5a2-451d-b77a-9d977f1dd00f_disk">
Dec  2 06:29:17 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:29:17 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/a8aab2b3-e5a2-451d-b77a-9d977f1dd00f_disk.config">
Dec  2 06:29:17 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:29:17 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:c4:74:f9"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <target dev="tap1b9be7b0-ee"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/a8aab2b3-e5a2-451d-b77a-9d977f1dd00f/console.log" append="off"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:29:17 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:29:17 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:29:17 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:29:17 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.946 254904 DEBUG nova.compute.manager [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Preparing to wait for external event network-vif-plugged-1b9be7b0-eed1-4455-a1fa-5418c0d54261 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.947 254904 DEBUG oslo_concurrency.lockutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Acquiring lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.947 254904 DEBUG oslo_concurrency.lockutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.947 254904 DEBUG oslo_concurrency.lockutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.948 254904 DEBUG nova.virt.libvirt.vif [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:29:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-126628184',display_name='tempest-SnapshotDataIntegrityTests-server-126628184',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-126628184',id=21,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA1dSphAQZ2jfKBzlGAu9edcO3UD9dt2xlesom22AZBB6h7sYaoLNyRUI6o7THwlIVSzyFUotzDDcp2T+yKMByLGwVw02W9JR0EYyH59E+9V8vLLDXvI1oTHk5o/kZ3q0g==',key_name='tempest-keypair-927487953',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bb74d6d8597c490e967d98a6a783175e',ramdisk_id='',reservation_id='r-ip7p9r8g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SnapshotDataIntegrityTests-359161340',owner_user_name='tempest-SnapshotDataIntegrityTests-359161340-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:29:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4326297d589c4e5cafa95e1e95585b57',uuid=a8aab2b3-e5a2-451d-b77a-9d977f1dd00f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b9be7b0-eed1-4455-a1fa-5418c0d54261", "address": "fa:16:3e:c4:74:f9", "network": {"id": "c9e99d38-205a-48b3-a3a5-ce9f2004f29f", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1523779374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb74d6d8597c490e967d98a6a783175e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b9be7b0-ee", "ovs_interfaceid": "1b9be7b0-eed1-4455-a1fa-5418c0d54261", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.948 254904 DEBUG nova.network.os_vif_util [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Converting VIF {"id": "1b9be7b0-eed1-4455-a1fa-5418c0d54261", "address": "fa:16:3e:c4:74:f9", "network": {"id": "c9e99d38-205a-48b3-a3a5-ce9f2004f29f", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1523779374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb74d6d8597c490e967d98a6a783175e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b9be7b0-ee", "ovs_interfaceid": "1b9be7b0-eed1-4455-a1fa-5418c0d54261", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.949 254904 DEBUG nova.network.os_vif_util [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:74:f9,bridge_name='br-int',has_traffic_filtering=True,id=1b9be7b0-eed1-4455-a1fa-5418c0d54261,network=Network(c9e99d38-205a-48b3-a3a5-ce9f2004f29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b9be7b0-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.949 254904 DEBUG os_vif [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:74:f9,bridge_name='br-int',has_traffic_filtering=True,id=1b9be7b0-eed1-4455-a1fa-5418c0d54261,network=Network(c9e99d38-205a-48b3-a3a5-ce9f2004f29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b9be7b0-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.950 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.950 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.951 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.954 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.955 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b9be7b0-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.955 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1b9be7b0-ee, col_values=(('external_ids', {'iface-id': '1b9be7b0-eed1-4455-a1fa-5418c0d54261', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:74:f9', 'vm-uuid': 'a8aab2b3-e5a2-451d-b77a-9d977f1dd00f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.957 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:17 np0005542249 NetworkManager[48987]: <info>  [1764674957.9586] manager: (tap1b9be7b0-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.960 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.966 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:17 np0005542249 nova_compute[254900]: 2025-12-02 11:29:17.967 254904 INFO os_vif [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:74:f9,bridge_name='br-int',has_traffic_filtering=True,id=1b9be7b0-eed1-4455-a1fa-5418c0d54261,network=Network(c9e99d38-205a-48b3-a3a5-ce9f2004f29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b9be7b0-ee')#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.038 254904 DEBUG nova.virt.libvirt.driver [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.038 254904 DEBUG nova.virt.libvirt.driver [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.039 254904 DEBUG nova.virt.libvirt.driver [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] No VIF found with MAC fa:16:3e:c4:74:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.039 254904 INFO nova.virt.libvirt.driver [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Using config drive#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.067 254904 DEBUG nova.storage.rbd_utils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] rbd image a8aab2b3-e5a2-451d-b77a-9d977f1dd00f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:29:18 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1579: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 332 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 153 op/s
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.569 254904 INFO nova.virt.libvirt.driver [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Creating config drive at /var/lib/nova/instances/a8aab2b3-e5a2-451d-b77a-9d977f1dd00f/disk.config#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.585 254904 DEBUG oslo_concurrency.processutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a8aab2b3-e5a2-451d-b77a-9d977f1dd00f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphzs0d879 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.620 254904 DEBUG nova.network.neutron [req-a1eff955-33de-4d25-9915-246aeef93217 req-fdd76503-dfe2-4dcd-aa5f-342aee621226 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Updated VIF entry in instance network info cache for port 1b9be7b0-eed1-4455-a1fa-5418c0d54261. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.622 254904 DEBUG nova.network.neutron [req-a1eff955-33de-4d25-9915-246aeef93217 req-fdd76503-dfe2-4dcd-aa5f-342aee621226 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Updating instance_info_cache with network_info: [{"id": "1b9be7b0-eed1-4455-a1fa-5418c0d54261", "address": "fa:16:3e:c4:74:f9", "network": {"id": "c9e99d38-205a-48b3-a3a5-ce9f2004f29f", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1523779374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb74d6d8597c490e967d98a6a783175e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b9be7b0-ee", "ovs_interfaceid": "1b9be7b0-eed1-4455-a1fa-5418c0d54261", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.644 254904 DEBUG oslo_concurrency.lockutils [req-a1eff955-33de-4d25-9915-246aeef93217 req-fdd76503-dfe2-4dcd-aa5f-342aee621226 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.692 254904 DEBUG oslo_concurrency.lockutils [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "66196772-8110-4d36-bdfa-d36400059313" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.693 254904 DEBUG oslo_concurrency.lockutils [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "66196772-8110-4d36-bdfa-d36400059313" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.693 254904 DEBUG oslo_concurrency.lockutils [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "66196772-8110-4d36-bdfa-d36400059313-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.693 254904 DEBUG oslo_concurrency.lockutils [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "66196772-8110-4d36-bdfa-d36400059313-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.694 254904 DEBUG oslo_concurrency.lockutils [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "66196772-8110-4d36-bdfa-d36400059313-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.696 254904 INFO nova.compute.manager [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Terminating instance#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.697 254904 DEBUG nova.compute.manager [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.725 254904 DEBUG oslo_concurrency.processutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a8aab2b3-e5a2-451d-b77a-9d977f1dd00f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphzs0d879" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.755 254904 DEBUG nova.storage.rbd_utils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] rbd image a8aab2b3-e5a2-451d-b77a-9d977f1dd00f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:29:18 np0005542249 kernel: tapd395b4fa-31 (unregistering): left promiscuous mode
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.770 254904 DEBUG oslo_concurrency.processutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a8aab2b3-e5a2-451d-b77a-9d977f1dd00f/disk.config a8aab2b3-e5a2-451d-b77a-9d977f1dd00f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:18 np0005542249 NetworkManager[48987]: <info>  [1764674958.7724] device (tapd395b4fa-31): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:29:18 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:18Z|00195|binding|INFO|Releasing lport d395b4fa-3166-499c-b9da-7d7d3574b4e3 from this chassis (sb_readonly=0)
Dec  2 06:29:18 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:18Z|00196|binding|INFO|Setting lport d395b4fa-3166-499c-b9da-7d7d3574b4e3 down in Southbound
Dec  2 06:29:18 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:18Z|00197|binding|INFO|Removing iface tapd395b4fa-31 ovn-installed in OVS
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.809 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:18 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:18.813 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:c6:b6 10.100.0.8'], port_security=['fa:16:3e:74:c6:b6 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '66196772-8110-4d36-bdfa-d36400059313', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '625a6939c31646a4a83ea851774cf28c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e85b987b-ee58-41cc-a0e3-ee2a2605694b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cc823ef0-3e69-4062-a488-a82f483f10bb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=d395b4fa-3166-499c-b9da-7d7d3574b4e3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:29:18 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:18.814 163757 INFO neutron.agent.ovn.metadata.agent [-] Port d395b4fa-3166-499c-b9da-7d7d3574b4e3 in datapath acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 unbound from our chassis#033[00m
Dec  2 06:29:18 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:18.816 163757 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 06:29:18 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:18.817 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[1b9eabe7-d99e-4954-96b3-88056a0be8cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:18 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:18.818 163757 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 namespace which is not needed anymore#033[00m
Dec  2 06:29:18 np0005542249 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Deactivated successfully.
Dec  2 06:29:18 np0005542249 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Consumed 18.917s CPU time.
Dec  2 06:29:18 np0005542249 systemd-machined[216222]: Machine qemu-18-instance-00000012 terminated.
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.877 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.945 254904 INFO nova.virt.libvirt.driver [-] [instance: 66196772-8110-4d36-bdfa-d36400059313] Instance destroyed successfully.#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.947 254904 DEBUG nova.objects.instance [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lazy-loading 'resources' on Instance uuid 66196772-8110-4d36-bdfa-d36400059313 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.972 254904 DEBUG nova.compute.manager [req-6c3f5061-73cd-4624-bb9e-e466ed721776 req-92f656b4-dc3c-4b45-957f-597594d0ebd1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Received event network-vif-unplugged-d395b4fa-3166-499c-b9da-7d7d3574b4e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.973 254904 DEBUG oslo_concurrency.lockutils [req-6c3f5061-73cd-4624-bb9e-e466ed721776 req-92f656b4-dc3c-4b45-957f-597594d0ebd1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "66196772-8110-4d36-bdfa-d36400059313-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.974 254904 DEBUG oslo_concurrency.lockutils [req-6c3f5061-73cd-4624-bb9e-e466ed721776 req-92f656b4-dc3c-4b45-957f-597594d0ebd1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "66196772-8110-4d36-bdfa-d36400059313-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.974 254904 DEBUG oslo_concurrency.lockutils [req-6c3f5061-73cd-4624-bb9e-e466ed721776 req-92f656b4-dc3c-4b45-957f-597594d0ebd1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "66196772-8110-4d36-bdfa-d36400059313-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.974 254904 DEBUG nova.compute.manager [req-6c3f5061-73cd-4624-bb9e-e466ed721776 req-92f656b4-dc3c-4b45-957f-597594d0ebd1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] No waiting events found dispatching network-vif-unplugged-d395b4fa-3166-499c-b9da-7d7d3574b4e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.974 254904 DEBUG nova.compute.manager [req-6c3f5061-73cd-4624-bb9e-e466ed721776 req-92f656b4-dc3c-4b45-957f-597594d0ebd1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Received event network-vif-unplugged-d395b4fa-3166-499c-b9da-7d7d3574b4e3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.978 254904 DEBUG nova.virt.libvirt.vif [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:27:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-932115055',display_name='tempest-TestVolumeBootPattern-volume-backed-server-932115055',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-932115055',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM1+BNrSwiIfoSvboLgHMOgj8+ABI7GuXea7hzH0xzjkB5iJlfyMmtzxiEQGFuepDMbjl1J4yDHepNYnTxnOAAq5FpECzzb0WcjXs5+xWv+i5saSs4Cfct94h1yiQ+rLVQ==',key_name='tempest-keypair-1565797974',keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:27:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='625a6939c31646a4a83ea851774cf28c',ramdisk_id='',reservation_id='r-ipf6jzv7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1396850361',owner_user_name='tempest-TestVolumeBootPattern-1396850361-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:27:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ccb73a613554d938221b4bf46d7ae83',uuid=66196772-8110-4d36-bdfa-d36400059313,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d395b4fa-3166-499c-b9da-7d7d3574b4e3", "address": "fa:16:3e:74:c6:b6", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 
4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd395b4fa-31", "ovs_interfaceid": "d395b4fa-3166-499c-b9da-7d7d3574b4e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.978 254904 DEBUG nova.network.os_vif_util [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converting VIF {"id": "d395b4fa-3166-499c-b9da-7d7d3574b4e3", "address": "fa:16:3e:74:c6:b6", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd395b4fa-31", "ovs_interfaceid": "d395b4fa-3166-499c-b9da-7d7d3574b4e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.979 254904 DEBUG nova.network.os_vif_util [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:74:c6:b6,bridge_name='br-int',has_traffic_filtering=True,id=d395b4fa-3166-499c-b9da-7d7d3574b4e3,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd395b4fa-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.979 254904 DEBUG os_vif [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:74:c6:b6,bridge_name='br-int',has_traffic_filtering=True,id=d395b4fa-3166-499c-b9da-7d7d3574b4e3,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd395b4fa-31') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.980 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.981 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd395b4fa-31, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.982 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.984 254904 DEBUG oslo_concurrency.processutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a8aab2b3-e5a2-451d-b77a-9d977f1dd00f/disk.config a8aab2b3-e5a2-451d-b77a-9d977f1dd00f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.985 254904 INFO nova.virt.libvirt.driver [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Deleting local config drive /var/lib/nova/instances/a8aab2b3-e5a2-451d-b77a-9d977f1dd00f/disk.config because it was imported into RBD.#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.985 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.988 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:18 np0005542249 nova_compute[254900]: 2025-12-02 11:29:18.996 254904 INFO os_vif [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:74:c6:b6,bridge_name='br-int',has_traffic_filtering=True,id=d395b4fa-3166-499c-b9da-7d7d3574b4e3,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd395b4fa-31')#033[00m
Dec  2 06:29:19 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[284679]: [NOTICE]   (284683) : haproxy version is 2.8.14-c23fe91
Dec  2 06:29:19 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[284679]: [NOTICE]   (284683) : path to executable is /usr/sbin/haproxy
Dec  2 06:29:19 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[284679]: [WARNING]  (284683) : Exiting Master process...
Dec  2 06:29:19 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[284679]: [ALERT]    (284683) : Current worker (284685) exited with code 143 (Terminated)
Dec  2 06:29:19 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[284679]: [WARNING]  (284683) : All workers exited. Exiting... (0)
Dec  2 06:29:19 np0005542249 systemd[1]: libpod-c17f785326132e02a3ad4f05020157d0e467d5b4a41cbdbbcfe7dc2eeeee1b07.scope: Deactivated successfully.
Dec  2 06:29:19 np0005542249 podman[287704]: 2025-12-02 11:29:19.025694166 +0000 UTC m=+0.060755377 container died c17f785326132e02a3ad4f05020157d0e467d5b4a41cbdbbcfe7dc2eeeee1b07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  2 06:29:19 np0005542249 kernel: tap1b9be7b0-ee: entered promiscuous mode
Dec  2 06:29:19 np0005542249 NetworkManager[48987]: <info>  [1764674959.0488] manager: (tap1b9be7b0-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/110)
Dec  2 06:29:19 np0005542249 systemd-udevd[287653]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:29:19 np0005542249 nova_compute[254900]: 2025-12-02 11:29:19.052 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:19 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:19Z|00198|binding|INFO|Claiming lport 1b9be7b0-eed1-4455-a1fa-5418c0d54261 for this chassis.
Dec  2 06:29:19 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:19Z|00199|binding|INFO|1b9be7b0-eed1-4455-a1fa-5418c0d54261: Claiming fa:16:3e:c4:74:f9 10.100.0.8
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.060 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:74:f9 10.100.0.8'], port_security=['fa:16:3e:c4:74:f9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a8aab2b3-e5a2-451d-b77a-9d977f1dd00f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9e99d38-205a-48b3-a3a5-ce9f2004f29f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bb74d6d8597c490e967d98a6a783175e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e1f14e34-c0e9-4193-8b35-e0db4664ab56 fe3876a3-1fb3-4ce2-9e8c-cda4930e5403', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c775ae36-b6c4-4114-a80c-240c856ae41b, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=1b9be7b0-eed1-4455-a1fa-5418c0d54261) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:29:19 np0005542249 NetworkManager[48987]: <info>  [1764674959.0672] device (tap1b9be7b0-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:29:19 np0005542249 NetworkManager[48987]: <info>  [1764674959.0681] device (tap1b9be7b0-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:29:19 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c17f785326132e02a3ad4f05020157d0e467d5b4a41cbdbbcfe7dc2eeeee1b07-userdata-shm.mount: Deactivated successfully.
Dec  2 06:29:19 np0005542249 systemd[1]: var-lib-containers-storage-overlay-17bfce662a46e281654b2d0304fe797236706ae2b860f845b8bd27566cc67495-merged.mount: Deactivated successfully.
Dec  2 06:29:19 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:19Z|00200|binding|INFO|Setting lport 1b9be7b0-eed1-4455-a1fa-5418c0d54261 up in Southbound
Dec  2 06:29:19 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:19Z|00201|binding|INFO|Setting lport 1b9be7b0-eed1-4455-a1fa-5418c0d54261 ovn-installed in OVS
Dec  2 06:29:19 np0005542249 nova_compute[254900]: 2025-12-02 11:29:19.079 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:19 np0005542249 nova_compute[254900]: 2025-12-02 11:29:19.082 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:19 np0005542249 podman[287704]: 2025-12-02 11:29:19.088667841 +0000 UTC m=+0.123729042 container cleanup c17f785326132e02a3ad4f05020157d0e467d5b4a41cbdbbcfe7dc2eeeee1b07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:29:19 np0005542249 systemd-machined[216222]: New machine qemu-21-instance-00000015.
Dec  2 06:29:19 np0005542249 systemd[1]: Started Virtual Machine qemu-21-instance-00000015.
Dec  2 06:29:19 np0005542249 systemd[1]: libpod-conmon-c17f785326132e02a3ad4f05020157d0e467d5b4a41cbdbbcfe7dc2eeeee1b07.scope: Deactivated successfully.
Dec  2 06:29:19 np0005542249 podman[287765]: 2025-12-02 11:29:19.163313242 +0000 UTC m=+0.049161182 container remove c17f785326132e02a3ad4f05020157d0e467d5b4a41cbdbbcfe7dc2eeeee1b07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.173 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3f1dbc-cc99-4c4b-94ea-500ad284ff5a]: (4, ('Tue Dec  2 11:29:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 (c17f785326132e02a3ad4f05020157d0e467d5b4a41cbdbbcfe7dc2eeeee1b07)\nc17f785326132e02a3ad4f05020157d0e467d5b4a41cbdbbcfe7dc2eeeee1b07\nTue Dec  2 11:29:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 (c17f785326132e02a3ad4f05020157d0e467d5b4a41cbdbbcfe7dc2eeeee1b07)\nc17f785326132e02a3ad4f05020157d0e467d5b4a41cbdbbcfe7dc2eeeee1b07\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.177 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[15eaa29f-cfb4-4bb2-b2a9-588d46eeec59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.179 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacfaa8ac-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:19 np0005542249 nova_compute[254900]: 2025-12-02 11:29:19.181 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:19 np0005542249 kernel: tapacfaa8ac-00: left promiscuous mode
Dec  2 06:29:19 np0005542249 nova_compute[254900]: 2025-12-02 11:29:19.203 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.209 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[1611fc8e-876f-4a6d-a2b8-29a1cfe3bce7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.224 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[0cfddfcf-018b-4ab9-9517-022ba657c859]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.225 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[5e02d6ba-2011-4422-9faf-12dbbbc27b67]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:19 np0005542249 nova_compute[254900]: 2025-12-02 11:29:19.251 254904 INFO nova.virt.libvirt.driver [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Deleting instance files /var/lib/nova/instances/66196772-8110-4d36-bdfa-d36400059313_del#033[00m
Dec  2 06:29:19 np0005542249 nova_compute[254900]: 2025-12-02 11:29:19.252 254904 INFO nova.virt.libvirt.driver [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Deletion of /var/lib/nova/instances/66196772-8110-4d36-bdfa-d36400059313_del complete#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.254 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[a0bd2915-a22c-4787-ac6f-6a09ad750d07]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504902, 'reachable_time': 27543, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287785, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:19 np0005542249 systemd[1]: run-netns-ovnmeta\x2dacfaa8ac\x2d0b3c\x2d4cdd\x2da6b8\x2da70a713ae754.mount: Deactivated successfully.
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.261 164036 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.261 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[96c2bc5a-42e3-4cba-9748-a84367f453e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.262 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 1b9be7b0-eed1-4455-a1fa-5418c0d54261 in datapath c9e99d38-205a-48b3-a3a5-ce9f2004f29f unbound from our chassis#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.264 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c9e99d38-205a-48b3-a3a5-ce9f2004f29f#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.278 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[6641efa6-a3d0-4cbc-8c74-533feb008ca4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.279 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc9e99d38-21 in ovnmeta-c9e99d38-205a-48b3-a3a5-ce9f2004f29f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.281 262398 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc9e99d38-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.281 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[22669363-4724-42f6-806c-0b3d87b9c910]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.282 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[1e87e9d5-acda-4a2c-a067-ce8403704dfc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.295 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[ab3bffc8-e0a1-4292-b74a-5f3472ccde2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:19 np0005542249 nova_compute[254900]: 2025-12-02 11:29:19.303 254904 INFO nova.compute.manager [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Took 0.61 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:29:19 np0005542249 nova_compute[254900]: 2025-12-02 11:29:19.304 254904 DEBUG oslo.service.loopingcall [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:29:19 np0005542249 nova_compute[254900]: 2025-12-02 11:29:19.304 254904 DEBUG nova.compute.manager [-] [instance: 66196772-8110-4d36-bdfa-d36400059313] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:29:19 np0005542249 nova_compute[254900]: 2025-12-02 11:29:19.304 254904 DEBUG nova.network.neutron [-] [instance: 66196772-8110-4d36-bdfa-d36400059313] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.317 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[d3fa022f-989a-48fb-9ae5-555aa030ac37]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.358 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[b5049693-8459-4c24-8b94-46b120cea5c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.371 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[bf29c985-0208-447f-8ef8-089ead392bf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:19 np0005542249 NetworkManager[48987]: <info>  [1764674959.3744] manager: (tapc9e99d38-20): new Veth device (/org/freedesktop/NetworkManager/Devices/111)
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.446 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[b1f69782-4722-43a2-b287-80931af0920d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.451 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[295df424-52c4-46f9-b4f2-a00433b2c683]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:19 np0005542249 NetworkManager[48987]: <info>  [1764674959.4911] device (tapc9e99d38-20): carrier: link connected
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.499 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[17bcf26f-22c8-467a-b1c0-44b6c9c1be23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.533 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[6ef22afa-45b7-4b2c-be66-3371dd74ec68]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc9e99d38-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:ce:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 515231, 'reachable_time': 17003, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287813, 'error': None, 'target': 'ovnmeta-c9e99d38-205a-48b3-a3a5-ce9f2004f29f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.564 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[758a9a51-e1e8-434f-956c-b949b674179d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:ce20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 515231, 'tstamp': 515231}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287814, 'error': None, 'target': 'ovnmeta-c9e99d38-205a-48b3-a3a5-ce9f2004f29f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.592 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[ebe48584-e170-4120-b499-56200d6bb71c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc9e99d38-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:ce:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 515231, 'reachable_time': 17003, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287828, 'error': None, 'target': 'ovnmeta-c9e99d38-205a-48b3-a3a5-ce9f2004f29f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.654 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[e888ec64-ee1c-43f3-ac7a-a1e34f58a5cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.730 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[3bae8034-dc20-4043-b58c-83aafa1efa39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.731 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9e99d38-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.732 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.732 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc9e99d38-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:19 np0005542249 nova_compute[254900]: 2025-12-02 11:29:19.734 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:19 np0005542249 kernel: tapc9e99d38-20: entered promiscuous mode
Dec  2 06:29:19 np0005542249 NetworkManager[48987]: <info>  [1764674959.7359] manager: (tapc9e99d38-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Dec  2 06:29:19 np0005542249 nova_compute[254900]: 2025-12-02 11:29:19.736 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.737 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc9e99d38-20, col_values=(('external_ids', {'iface-id': '1174a946-6ea5-4cb8-a0d5-527223216d63'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:19 np0005542249 nova_compute[254900]: 2025-12-02 11:29:19.737 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:19 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:19Z|00202|binding|INFO|Releasing lport 1174a946-6ea5-4cb8-a0d5-527223216d63 from this chassis (sb_readonly=0)
Dec  2 06:29:19 np0005542249 nova_compute[254900]: 2025-12-02 11:29:19.754 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.755 163757 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c9e99d38-205a-48b3-a3a5-ce9f2004f29f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c9e99d38-205a-48b3-a3a5-ce9f2004f29f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.756 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[72b0e080-3c49-4114-8933-c9ef22715b2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.757 163757 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: global
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]:    log         /dev/log local0 debug
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]:    log-tag     haproxy-metadata-proxy-c9e99d38-205a-48b3-a3a5-ce9f2004f29f
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]:    user        root
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]:    group       root
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]:    maxconn     1024
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]:    pidfile     /var/lib/neutron/external/pids/c9e99d38-205a-48b3-a3a5-ce9f2004f29f.pid.haproxy
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]:    daemon
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: defaults
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]:    log global
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]:    mode http
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]:    option httplog
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]:    option dontlognull
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]:    option http-server-close
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]:    option forwardfor
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]:    retries                 3
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]:    timeout http-request    30s
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]:    timeout connect         30s
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]:    timeout client          32s
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]:    timeout server          32s
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]:    timeout http-keep-alive 30s
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: listen listener
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]:    bind 169.254.169.254:80
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]:    http-request add-header X-OVN-Network-ID c9e99d38-205a-48b3-a3a5-ce9f2004f29f
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.758 163757 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c9e99d38-205a-48b3-a3a5-ce9f2004f29f', 'env', 'PROCESS_TAG=haproxy-c9e99d38-205a-48b3-a3a5-ce9f2004f29f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c9e99d38-205a-48b3-a3a5-ce9f2004f29f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 06:29:19 np0005542249 nova_compute[254900]: 2025-12-02 11:29:19.764 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674959.7644706, a8aab2b3-e5a2-451d-b77a-9d977f1dd00f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:29:19 np0005542249 nova_compute[254900]: 2025-12-02 11:29:19.765 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] VM Started (Lifecycle Event)#033[00m
Dec  2 06:29:19 np0005542249 nova_compute[254900]: 2025-12-02 11:29:19.786 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:29:19 np0005542249 nova_compute[254900]: 2025-12-02 11:29:19.792 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674959.764641, a8aab2b3-e5a2-451d-b77a-9d977f1dd00f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:29:19 np0005542249 nova_compute[254900]: 2025-12-02 11:29:19.792 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] VM Paused (Lifecycle Event)#033[00m
Dec  2 06:29:19 np0005542249 nova_compute[254900]: 2025-12-02 11:29:19.815 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:29:19 np0005542249 nova_compute[254900]: 2025-12-02 11:29:19.818 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:29:19 np0005542249 nova_compute[254900]: 2025-12-02 11:29:19.839 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.843 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.845 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:19.847 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:20 np0005542249 podman[287889]: 2025-12-02 11:29:20.210115726 +0000 UTC m=+0.074869869 container create d4533fd04311b1019f7c04ce5f4145d036e21ac76e9d61a862a9b666bbb9ee69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9e99d38-205a-48b3-a3a5-ce9f2004f29f, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  2 06:29:20 np0005542249 systemd[1]: Started libpod-conmon-d4533fd04311b1019f7c04ce5f4145d036e21ac76e9d61a862a9b666bbb9ee69.scope.
Dec  2 06:29:20 np0005542249 podman[287889]: 2025-12-02 11:29:20.174351347 +0000 UTC m=+0.039105530 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:29:20 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1580: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 327 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 152 op/s
Dec  2 06:29:20 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:29:20 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3f2c1643f39088df85f8111a24d5894a2a780fc5ada4b8fd60c3997cb126675/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 06:29:20 np0005542249 podman[287889]: 2025-12-02 11:29:20.330263638 +0000 UTC m=+0.195017871 container init d4533fd04311b1019f7c04ce5f4145d036e21ac76e9d61a862a9b666bbb9ee69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9e99d38-205a-48b3-a3a5-ce9f2004f29f, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec  2 06:29:20 np0005542249 podman[287889]: 2025-12-02 11:29:20.336136068 +0000 UTC m=+0.200890241 container start d4533fd04311b1019f7c04ce5f4145d036e21ac76e9d61a862a9b666bbb9ee69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9e99d38-205a-48b3-a3a5-ce9f2004f29f, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:29:20 np0005542249 neutron-haproxy-ovnmeta-c9e99d38-205a-48b3-a3a5-ce9f2004f29f[287904]: [NOTICE]   (287908) : New worker (287910) forked
Dec  2 06:29:20 np0005542249 neutron-haproxy-ovnmeta-c9e99d38-205a-48b3-a3a5-ce9f2004f29f[287904]: [NOTICE]   (287908) : Loading success.
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.098 254904 DEBUG nova.compute.manager [req-db65eef1-1a48-44e3-9447-fbfa9376c5ab req-ff94370c-de07-48ba-a318-b76dde9f4061 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Received event network-vif-plugged-d395b4fa-3166-499c-b9da-7d7d3574b4e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.099 254904 DEBUG oslo_concurrency.lockutils [req-db65eef1-1a48-44e3-9447-fbfa9376c5ab req-ff94370c-de07-48ba-a318-b76dde9f4061 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "66196772-8110-4d36-bdfa-d36400059313-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.100 254904 DEBUG oslo_concurrency.lockutils [req-db65eef1-1a48-44e3-9447-fbfa9376c5ab req-ff94370c-de07-48ba-a318-b76dde9f4061 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "66196772-8110-4d36-bdfa-d36400059313-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.100 254904 DEBUG oslo_concurrency.lockutils [req-db65eef1-1a48-44e3-9447-fbfa9376c5ab req-ff94370c-de07-48ba-a318-b76dde9f4061 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "66196772-8110-4d36-bdfa-d36400059313-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.100 254904 DEBUG nova.compute.manager [req-db65eef1-1a48-44e3-9447-fbfa9376c5ab req-ff94370c-de07-48ba-a318-b76dde9f4061 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] No waiting events found dispatching network-vif-plugged-d395b4fa-3166-499c-b9da-7d7d3574b4e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.101 254904 WARNING nova.compute.manager [req-db65eef1-1a48-44e3-9447-fbfa9376c5ab req-ff94370c-de07-48ba-a318-b76dde9f4061 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Received unexpected event network-vif-plugged-d395b4fa-3166-499c-b9da-7d7d3574b4e3 for instance with vm_state active and task_state deleting.#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.101 254904 DEBUG nova.compute.manager [req-db65eef1-1a48-44e3-9447-fbfa9376c5ab req-ff94370c-de07-48ba-a318-b76dde9f4061 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Received event network-vif-plugged-1b9be7b0-eed1-4455-a1fa-5418c0d54261 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.102 254904 DEBUG oslo_concurrency.lockutils [req-db65eef1-1a48-44e3-9447-fbfa9376c5ab req-ff94370c-de07-48ba-a318-b76dde9f4061 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.102 254904 DEBUG oslo_concurrency.lockutils [req-db65eef1-1a48-44e3-9447-fbfa9376c5ab req-ff94370c-de07-48ba-a318-b76dde9f4061 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.102 254904 DEBUG oslo_concurrency.lockutils [req-db65eef1-1a48-44e3-9447-fbfa9376c5ab req-ff94370c-de07-48ba-a318-b76dde9f4061 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.103 254904 DEBUG nova.compute.manager [req-db65eef1-1a48-44e3-9447-fbfa9376c5ab req-ff94370c-de07-48ba-a318-b76dde9f4061 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Processing event network-vif-plugged-1b9be7b0-eed1-4455-a1fa-5418c0d54261 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.103 254904 DEBUG nova.compute.manager [req-db65eef1-1a48-44e3-9447-fbfa9376c5ab req-ff94370c-de07-48ba-a318-b76dde9f4061 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Received event network-vif-plugged-1b9be7b0-eed1-4455-a1fa-5418c0d54261 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.104 254904 DEBUG oslo_concurrency.lockutils [req-db65eef1-1a48-44e3-9447-fbfa9376c5ab req-ff94370c-de07-48ba-a318-b76dde9f4061 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.104 254904 DEBUG oslo_concurrency.lockutils [req-db65eef1-1a48-44e3-9447-fbfa9376c5ab req-ff94370c-de07-48ba-a318-b76dde9f4061 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.104 254904 DEBUG oslo_concurrency.lockutils [req-db65eef1-1a48-44e3-9447-fbfa9376c5ab req-ff94370c-de07-48ba-a318-b76dde9f4061 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.105 254904 DEBUG nova.compute.manager [req-db65eef1-1a48-44e3-9447-fbfa9376c5ab req-ff94370c-de07-48ba-a318-b76dde9f4061 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] No waiting events found dispatching network-vif-plugged-1b9be7b0-eed1-4455-a1fa-5418c0d54261 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.105 254904 WARNING nova.compute.manager [req-db65eef1-1a48-44e3-9447-fbfa9376c5ab req-ff94370c-de07-48ba-a318-b76dde9f4061 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Received unexpected event network-vif-plugged-1b9be7b0-eed1-4455-a1fa-5418c0d54261 for instance with vm_state building and task_state spawning.#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.107 254904 DEBUG nova.compute.manager [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.114 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674961.1125352, a8aab2b3-e5a2-451d-b77a-9d977f1dd00f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.114 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] VM Resumed (Lifecycle Event)#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.118 254904 DEBUG nova.virt.libvirt.driver [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.124 254904 INFO nova.virt.libvirt.driver [-] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Instance spawned successfully.#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.125 254904 DEBUG nova.virt.libvirt.driver [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.145 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.157 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.162 254904 DEBUG nova.virt.libvirt.driver [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.162 254904 DEBUG nova.virt.libvirt.driver [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.163 254904 DEBUG nova.virt.libvirt.driver [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.164 254904 DEBUG nova.virt.libvirt.driver [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.165 254904 DEBUG nova.virt.libvirt.driver [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.165 254904 DEBUG nova.virt.libvirt.driver [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.176 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.200 254904 DEBUG nova.network.neutron [-] [instance: 66196772-8110-4d36-bdfa-d36400059313] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.224 254904 INFO nova.compute.manager [-] [instance: 66196772-8110-4d36-bdfa-d36400059313] Took 1.92 seconds to deallocate network for instance.#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.232 254904 INFO nova.compute.manager [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Took 9.27 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.233 254904 DEBUG nova.compute.manager [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.256 254904 DEBUG nova.compute.manager [req-a1e2d771-dcb6-4b1f-939a-d3a72f9e27eb req-0c837869-5dc7-448c-8de7-fbe332e1319a 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Received event network-vif-deleted-d395b4fa-3166-499c-b9da-7d7d3574b4e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.296 254904 INFO nova.compute.manager [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Took 10.34 seconds to build instance.#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.324 254904 DEBUG oslo_concurrency.lockutils [None req-c8aea00a-2e75-4e23-9934-9bcf4554bec4 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.441s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.439 254904 INFO nova.compute.manager [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Took 0.21 seconds to detach 1 volumes for instance.#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.441 254904 DEBUG nova.compute.manager [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 66196772-8110-4d36-bdfa-d36400059313] Deleting volume: 0224a34f-a66a-4461-a7df-9ffc5df8f71a _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.653 254904 DEBUG oslo_concurrency.lockutils [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.654 254904 DEBUG oslo_concurrency.lockutils [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:21 np0005542249 nova_compute[254900]: 2025-12-02 11:29:21.762 254904 DEBUG oslo_concurrency.processutils [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:22 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1581: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 327 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 142 op/s
Dec  2 06:29:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:29:22 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1889361079' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:29:22 np0005542249 nova_compute[254900]: 2025-12-02 11:29:22.314 254904 DEBUG oslo_concurrency.processutils [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:22 np0005542249 nova_compute[254900]: 2025-12-02 11:29:22.325 254904 DEBUG nova.compute.provider_tree [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:29:22 np0005542249 nova_compute[254900]: 2025-12-02 11:29:22.366 254904 DEBUG nova.scheduler.client.report [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:29:22 np0005542249 nova_compute[254900]: 2025-12-02 11:29:22.386 254904 DEBUG oslo_concurrency.lockutils [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:29:22 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2573649383' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:29:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:29:22 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2573649383' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:29:22 np0005542249 nova_compute[254900]: 2025-12-02 11:29:22.415 254904 INFO nova.scheduler.client.report [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Deleted allocations for instance 66196772-8110-4d36-bdfa-d36400059313#033[00m
Dec  2 06:29:22 np0005542249 nova_compute[254900]: 2025-12-02 11:29:22.484 254904 DEBUG oslo_concurrency.lockutils [None req-7ee4b4b2-3a91-4e99-865e-621763e4f7e2 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "66196772-8110-4d36-bdfa-d36400059313" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:22 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:22Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dd:1b:59 10.100.0.10
Dec  2 06:29:22 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:22Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dd:1b:59 10.100.0.10
Dec  2 06:29:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:29:23 np0005542249 nova_compute[254900]: 2025-12-02 11:29:23.879 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:23 np0005542249 nova_compute[254900]: 2025-12-02 11:29:23.949 254904 DEBUG nova.compute.manager [req-8d057fd5-e05e-4c99-8b4a-727f30dfdc17 req-5095c6f8-f044-45ae-a793-23d3c4b994da 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Received event network-changed-1b9be7b0-eed1-4455-a1fa-5418c0d54261 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:29:23 np0005542249 nova_compute[254900]: 2025-12-02 11:29:23.950 254904 DEBUG nova.compute.manager [req-8d057fd5-e05e-4c99-8b4a-727f30dfdc17 req-5095c6f8-f044-45ae-a793-23d3c4b994da 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Refreshing instance network info cache due to event network-changed-1b9be7b0-eed1-4455-a1fa-5418c0d54261. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:29:23 np0005542249 nova_compute[254900]: 2025-12-02 11:29:23.950 254904 DEBUG oslo_concurrency.lockutils [req-8d057fd5-e05e-4c99-8b4a-727f30dfdc17 req-5095c6f8-f044-45ae-a793-23d3c4b994da 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:29:23 np0005542249 nova_compute[254900]: 2025-12-02 11:29:23.951 254904 DEBUG oslo_concurrency.lockutils [req-8d057fd5-e05e-4c99-8b4a-727f30dfdc17 req-5095c6f8-f044-45ae-a793-23d3c4b994da 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:29:23 np0005542249 nova_compute[254900]: 2025-12-02 11:29:23.951 254904 DEBUG nova.network.neutron [req-8d057fd5-e05e-4c99-8b4a-727f30dfdc17 req-5095c6f8-f044-45ae-a793-23d3c4b994da 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Refreshing network info cache for port 1b9be7b0-eed1-4455-a1fa-5418c0d54261 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:29:23 np0005542249 nova_compute[254900]: 2025-12-02 11:29:23.983 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:24 np0005542249 podman[287942]: 2025-12-02 11:29:24.060898411 +0000 UTC m=+0.125416487 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:29:24 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1582: 321 pgs: 321 active+clean; 305 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 142 op/s
Dec  2 06:29:24 np0005542249 podman[287968]: 2025-12-02 11:29:24.974333375 +0000 UTC m=+0.061825095 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  2 06:29:25 np0005542249 nova_compute[254900]: 2025-12-02 11:29:25.085 254904 DEBUG nova.network.neutron [req-8d057fd5-e05e-4c99-8b4a-727f30dfdc17 req-5095c6f8-f044-45ae-a793-23d3c4b994da 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Updated VIF entry in instance network info cache for port 1b9be7b0-eed1-4455-a1fa-5418c0d54261. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:29:25 np0005542249 nova_compute[254900]: 2025-12-02 11:29:25.086 254904 DEBUG nova.network.neutron [req-8d057fd5-e05e-4c99-8b4a-727f30dfdc17 req-5095c6f8-f044-45ae-a793-23d3c4b994da 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Updating instance_info_cache with network_info: [{"id": "1b9be7b0-eed1-4455-a1fa-5418c0d54261", "address": "fa:16:3e:c4:74:f9", "network": {"id": "c9e99d38-205a-48b3-a3a5-ce9f2004f29f", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1523779374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb74d6d8597c490e967d98a6a783175e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b9be7b0-ee", "ovs_interfaceid": "1b9be7b0-eed1-4455-a1fa-5418c0d54261", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:29:25 np0005542249 nova_compute[254900]: 2025-12-02 11:29:25.114 254904 DEBUG oslo_concurrency.lockutils [req-8d057fd5-e05e-4c99-8b4a-727f30dfdc17 req-5095c6f8-f044-45ae-a793-23d3c4b994da 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:29:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e393 do_prune osdmap full prune enabled
Dec  2 06:29:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e394 e394: 3 total, 3 up, 3 in
Dec  2 06:29:25 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e394: 3 total, 3 up, 3 in
Dec  2 06:29:26 np0005542249 nova_compute[254900]: 2025-12-02 11:29:26.022 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764674951.0206447, fabff5b9-969a-4502-91d0-6adacfa56156 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:29:26 np0005542249 nova_compute[254900]: 2025-12-02 11:29:26.022 254904 INFO nova.compute.manager [-] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] VM Stopped (Lifecycle Event)#033[00m
Dec  2 06:29:26 np0005542249 nova_compute[254900]: 2025-12-02 11:29:26.049 254904 DEBUG nova.compute.manager [None req-41f111c3-a06e-404b-a871-b0affb2aa3af - - - - - -] [instance: fabff5b9-969a-4502-91d0-6adacfa56156] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:29:26 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1584: 321 pgs: 321 active+clean; 277 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 180 op/s
Dec  2 06:29:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:29:26
Dec  2 06:29:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:29:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:29:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['default.rgw.control', 'volumes', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'vms']
Dec  2 06:29:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:29:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:29:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:29:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:29:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:29:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:29:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:29:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:29:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:29:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:29:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:29:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:29:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:29:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:29:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:29:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:29:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:29:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:29:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e394 do_prune osdmap full prune enabled
Dec  2 06:29:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e395 e395: 3 total, 3 up, 3 in
Dec  2 06:29:27 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e395: 3 total, 3 up, 3 in
Dec  2 06:29:28 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1586: 321 pgs: 321 active+clean; 302 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 7.3 MiB/s wr, 290 op/s
Dec  2 06:29:28 np0005542249 nova_compute[254900]: 2025-12-02 11:29:28.881 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:28 np0005542249 nova_compute[254900]: 2025-12-02 11:29:28.985 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:30 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1587: 321 pgs: 321 active+clean; 317 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 8.7 MiB/s wr, 278 op/s
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.637 254904 DEBUG oslo_concurrency.lockutils [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "c2b160a2-030e-4625-b36f-060da406de08" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.638 254904 DEBUG oslo_concurrency.lockutils [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "c2b160a2-030e-4625-b36f-060da406de08" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.638 254904 DEBUG oslo_concurrency.lockutils [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "c2b160a2-030e-4625-b36f-060da406de08-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.639 254904 DEBUG oslo_concurrency.lockutils [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "c2b160a2-030e-4625-b36f-060da406de08-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.639 254904 DEBUG oslo_concurrency.lockutils [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "c2b160a2-030e-4625-b36f-060da406de08-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.640 254904 INFO nova.compute.manager [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Terminating instance#033[00m
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.641 254904 DEBUG nova.compute.manager [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:29:30 np0005542249 kernel: tapcf68cb85-2b (unregistering): left promiscuous mode
Dec  2 06:29:30 np0005542249 NetworkManager[48987]: <info>  [1764674970.6932] device (tapcf68cb85-2b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.705 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:30 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:30Z|00203|binding|INFO|Releasing lport cf68cb85-2ba9-4bcb-acd6-30526d17ca18 from this chassis (sb_readonly=0)
Dec  2 06:29:30 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:30Z|00204|binding|INFO|Setting lport cf68cb85-2ba9-4bcb-acd6-30526d17ca18 down in Southbound
Dec  2 06:29:30 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:30Z|00205|binding|INFO|Removing iface tapcf68cb85-2b ovn-installed in OVS
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.708 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:30.719 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:1b:59 10.100.0.10'], port_security=['fa:16:3e:dd:1b:59 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c2b160a2-030e-4625-b36f-060da406de08', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a893d0c223f746328e706d7491d73b20', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5f414683-a8ce-4d9a-ad73-51d7a84d1e7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.176'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9a246c4-d9fe-402e-8fa6-6099b55c4866, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=cf68cb85-2ba9-4bcb-acd6-30526d17ca18) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:29:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:30.721 163757 INFO neutron.agent.ovn.metadata.agent [-] Port cf68cb85-2ba9-4bcb-acd6-30526d17ca18 in datapath 4f9f73cb-9730-4829-ae15-1f03b97e60f8 unbound from our chassis#033[00m
Dec  2 06:29:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:30.722 163757 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4f9f73cb-9730-4829-ae15-1f03b97e60f8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 06:29:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:30.726 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc5dcc1-2785-4ef0-bc62-12f2addf63db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:30.727 163757 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8 namespace which is not needed anymore#033[00m
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.732 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:30 np0005542249 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000014.scope: Deactivated successfully.
Dec  2 06:29:30 np0005542249 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000014.scope: Consumed 19.368s CPU time.
Dec  2 06:29:30 np0005542249 systemd-machined[216222]: Machine qemu-20-instance-00000014 terminated.
Dec  2 06:29:30 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[287252]: [NOTICE]   (287256) : haproxy version is 2.8.14-c23fe91
Dec  2 06:29:30 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[287252]: [NOTICE]   (287256) : path to executable is /usr/sbin/haproxy
Dec  2 06:29:30 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[287252]: [WARNING]  (287256) : Exiting Master process...
Dec  2 06:29:30 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[287252]: [WARNING]  (287256) : Exiting Master process...
Dec  2 06:29:30 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[287252]: [ALERT]    (287256) : Current worker (287258) exited with code 143 (Terminated)
Dec  2 06:29:30 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[287252]: [WARNING]  (287256) : All workers exited. Exiting... (0)
Dec  2 06:29:30 np0005542249 systemd[1]: libpod-02ed0b797476bd4aca8672e02608e10c23947b4df0bc309a8d4e72cd4b0a5fe6.scope: Deactivated successfully.
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.880 254904 INFO nova.virt.libvirt.driver [-] [instance: c2b160a2-030e-4625-b36f-060da406de08] Instance destroyed successfully.#033[00m
Dec  2 06:29:30 np0005542249 podman[288011]: 2025-12-02 11:29:30.881337916 +0000 UTC m=+0.052142582 container died 02ed0b797476bd4aca8672e02608e10c23947b4df0bc309a8d4e72cd4b0a5fe6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.883 254904 DEBUG nova.objects.instance [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lazy-loading 'resources' on Instance uuid c2b160a2-030e-4625-b36f-060da406de08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.900 254904 DEBUG nova.virt.libvirt.vif [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:28:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1955822252',display_name='tempest-TransferEncryptedVolumeTest-server-1955822252',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1955822252',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBPd/5CNJJJVCM7bF71nyziMenMlWa5ulXBeejobfPYAVvOOigTWuMR262ZOPGYLdqIJzc7AMWApUvqaDK/XzxMpH8d3L2DeOAIkexGDCsfnTgIhEJIcaLGeYmajYRiu/w==',key_name='tempest-TransferEncryptedVolumeTest-1560588090',keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:29:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a893d0c223f746328e706d7491d73b20',ramdisk_id='',reservation_id='r-wgv2eazc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1499588457',owner_user_name='tempest-TransferEncryptedVolumeTest-1499588457-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:29:08Z,user_data=None,user_id='1caa62e7ee8b42be98bc34780a7197f9',uuid=c2b160a2-030e-4625-b36f-060da406de08,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cf68cb85-2ba9-4bcb-acd6-30526d17ca18", "address": "fa:16:3e:dd:1b:59", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf68cb85-2b", "ovs_interfaceid": "cf68cb85-2ba9-4bcb-acd6-30526d17ca18", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.901 254904 DEBUG nova.network.os_vif_util [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Converting VIF {"id": "cf68cb85-2ba9-4bcb-acd6-30526d17ca18", "address": "fa:16:3e:dd:1b:59", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf68cb85-2b", "ovs_interfaceid": "cf68cb85-2ba9-4bcb-acd6-30526d17ca18", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.903 254904 DEBUG nova.network.os_vif_util [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dd:1b:59,bridge_name='br-int',has_traffic_filtering=True,id=cf68cb85-2ba9-4bcb-acd6-30526d17ca18,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf68cb85-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.904 254904 DEBUG os_vif [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:1b:59,bridge_name='br-int',has_traffic_filtering=True,id=cf68cb85-2ba9-4bcb-acd6-30526d17ca18,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf68cb85-2b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.907 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.908 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf68cb85-2b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.909 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.911 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.914 254904 INFO os_vif [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:1b:59,bridge_name='br-int',has_traffic_filtering=True,id=cf68cb85-2ba9-4bcb-acd6-30526d17ca18,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf68cb85-2b')#033[00m
Dec  2 06:29:30 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-02ed0b797476bd4aca8672e02608e10c23947b4df0bc309a8d4e72cd4b0a5fe6-userdata-shm.mount: Deactivated successfully.
Dec  2 06:29:30 np0005542249 systemd[1]: var-lib-containers-storage-overlay-bb2c4010f5a0388e05b5d91ae6c916c90a0db6aa76c34d681df45b5f9e3d54e1-merged.mount: Deactivated successfully.
Dec  2 06:29:30 np0005542249 podman[288011]: 2025-12-02 11:29:30.929229643 +0000 UTC m=+0.100034289 container cleanup 02ed0b797476bd4aca8672e02608e10c23947b4df0bc309a8d4e72cd4b0a5fe6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:29:30 np0005542249 systemd[1]: libpod-conmon-02ed0b797476bd4aca8672e02608e10c23947b4df0bc309a8d4e72cd4b0a5fe6.scope: Deactivated successfully.
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.946 254904 DEBUG nova.compute.manager [req-d3bd1cbe-4a98-458e-a1af-49089f888560 req-15b51c77-7be3-421f-a2e2-2702161592e2 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Received event network-vif-unplugged-cf68cb85-2ba9-4bcb-acd6-30526d17ca18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.947 254904 DEBUG oslo_concurrency.lockutils [req-d3bd1cbe-4a98-458e-a1af-49089f888560 req-15b51c77-7be3-421f-a2e2-2702161592e2 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "c2b160a2-030e-4625-b36f-060da406de08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.947 254904 DEBUG oslo_concurrency.lockutils [req-d3bd1cbe-4a98-458e-a1af-49089f888560 req-15b51c77-7be3-421f-a2e2-2702161592e2 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "c2b160a2-030e-4625-b36f-060da406de08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.948 254904 DEBUG oslo_concurrency.lockutils [req-d3bd1cbe-4a98-458e-a1af-49089f888560 req-15b51c77-7be3-421f-a2e2-2702161592e2 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "c2b160a2-030e-4625-b36f-060da406de08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.948 254904 DEBUG nova.compute.manager [req-d3bd1cbe-4a98-458e-a1af-49089f888560 req-15b51c77-7be3-421f-a2e2-2702161592e2 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] No waiting events found dispatching network-vif-unplugged-cf68cb85-2ba9-4bcb-acd6-30526d17ca18 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:29:30 np0005542249 nova_compute[254900]: 2025-12-02 11:29:30.948 254904 DEBUG nova.compute.manager [req-d3bd1cbe-4a98-458e-a1af-49089f888560 req-15b51c77-7be3-421f-a2e2-2702161592e2 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Received event network-vif-unplugged-cf68cb85-2ba9-4bcb-acd6-30526d17ca18 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 06:29:31 np0005542249 podman[288063]: 2025-12-02 11:29:31.032971122 +0000 UTC m=+0.071357463 container remove 02ed0b797476bd4aca8672e02608e10c23947b4df0bc309a8d4e72cd4b0a5fe6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:29:31 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:31.038 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[c095f12b-582d-438d-9f49-5405d1eff637]: (4, ('Tue Dec  2 11:29:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8 (02ed0b797476bd4aca8672e02608e10c23947b4df0bc309a8d4e72cd4b0a5fe6)\n02ed0b797476bd4aca8672e02608e10c23947b4df0bc309a8d4e72cd4b0a5fe6\nTue Dec  2 11:29:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8 (02ed0b797476bd4aca8672e02608e10c23947b4df0bc309a8d4e72cd4b0a5fe6)\n02ed0b797476bd4aca8672e02608e10c23947b4df0bc309a8d4e72cd4b0a5fe6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:31 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:31.040 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[98412ded-45cf-4381-826c-f26ae1155f03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:31 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:31.041 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f9f73cb-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:31 np0005542249 nova_compute[254900]: 2025-12-02 11:29:31.043 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:31 np0005542249 kernel: tap4f9f73cb-90: left promiscuous mode
Dec  2 06:29:31 np0005542249 nova_compute[254900]: 2025-12-02 11:29:31.070 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:31 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:31.074 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[64e9597e-32a9-40a6-bacb-4a1278506ac3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:31 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:31.092 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[80ccc221-acf7-4b10-bd57-58e0644c0aee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:31 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:31.094 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[c7ee4938-d126-485d-a7ec-c4f5091447bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:31 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:31.110 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[ba18906b-623d-4b03-aa72-9709c1e2f13e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 513808, 'reachable_time': 28186, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288081, 'error': None, 'target': 'ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:31 np0005542249 systemd[1]: run-netns-ovnmeta\x2d4f9f73cb\x2d9730\x2d4829\x2dae15\x2d1f03b97e60f8.mount: Deactivated successfully.
Dec  2 06:29:31 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:31.116 164036 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 06:29:31 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:31.116 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[804e3fca-349e-458f-9e55-40cb87953234]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:31 np0005542249 nova_compute[254900]: 2025-12-02 11:29:31.179 254904 INFO nova.virt.libvirt.driver [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Deleting instance files /var/lib/nova/instances/c2b160a2-030e-4625-b36f-060da406de08_del#033[00m
Dec  2 06:29:31 np0005542249 nova_compute[254900]: 2025-12-02 11:29:31.180 254904 INFO nova.virt.libvirt.driver [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Deletion of /var/lib/nova/instances/c2b160a2-030e-4625-b36f-060da406de08_del complete#033[00m
Dec  2 06:29:31 np0005542249 nova_compute[254900]: 2025-12-02 11:29:31.229 254904 INFO nova.compute.manager [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Took 0.59 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:29:31 np0005542249 nova_compute[254900]: 2025-12-02 11:29:31.229 254904 DEBUG oslo.service.loopingcall [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:29:31 np0005542249 nova_compute[254900]: 2025-12-02 11:29:31.230 254904 DEBUG nova.compute.manager [-] [instance: c2b160a2-030e-4625-b36f-060da406de08] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:29:31 np0005542249 nova_compute[254900]: 2025-12-02 11:29:31.230 254904 DEBUG nova.network.neutron [-] [instance: c2b160a2-030e-4625-b36f-060da406de08] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:29:31 np0005542249 nova_compute[254900]: 2025-12-02 11:29:31.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:29:31 np0005542249 nova_compute[254900]: 2025-12-02 11:29:31.383 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  2 06:29:32 np0005542249 nova_compute[254900]: 2025-12-02 11:29:32.113 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  2 06:29:32 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1588: 321 pgs: 321 active+clean; 317 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 8.0 MiB/s wr, 210 op/s
Dec  2 06:29:32 np0005542249 nova_compute[254900]: 2025-12-02 11:29:32.737 254904 DEBUG nova.network.neutron [-] [instance: c2b160a2-030e-4625-b36f-060da406de08] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:29:32 np0005542249 nova_compute[254900]: 2025-12-02 11:29:32.754 254904 INFO nova.compute.manager [-] [instance: c2b160a2-030e-4625-b36f-060da406de08] Took 1.52 seconds to deallocate network for instance.#033[00m
Dec  2 06:29:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:29:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e395 do_prune osdmap full prune enabled
Dec  2 06:29:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e396 e396: 3 total, 3 up, 3 in
Dec  2 06:29:32 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e396: 3 total, 3 up, 3 in
Dec  2 06:29:32 np0005542249 nova_compute[254900]: 2025-12-02 11:29:32.945 254904 INFO nova.compute.manager [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Took 0.19 seconds to detach 1 volumes for instance.#033[00m
Dec  2 06:29:33 np0005542249 nova_compute[254900]: 2025-12-02 11:29:33.002 254904 DEBUG oslo_concurrency.lockutils [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:33 np0005542249 nova_compute[254900]: 2025-12-02 11:29:33.003 254904 DEBUG oslo_concurrency.lockutils [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:33 np0005542249 nova_compute[254900]: 2025-12-02 11:29:33.029 254904 DEBUG nova.compute.manager [req-0b26772e-ed44-4f78-8ee0-e06d749640be req-7a4873a4-5dd4-4261-ba53-51dd53d3ddda 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Received event network-vif-plugged-cf68cb85-2ba9-4bcb-acd6-30526d17ca18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:29:33 np0005542249 nova_compute[254900]: 2025-12-02 11:29:33.031 254904 DEBUG oslo_concurrency.lockutils [req-0b26772e-ed44-4f78-8ee0-e06d749640be req-7a4873a4-5dd4-4261-ba53-51dd53d3ddda 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "c2b160a2-030e-4625-b36f-060da406de08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:33 np0005542249 nova_compute[254900]: 2025-12-02 11:29:33.031 254904 DEBUG oslo_concurrency.lockutils [req-0b26772e-ed44-4f78-8ee0-e06d749640be req-7a4873a4-5dd4-4261-ba53-51dd53d3ddda 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "c2b160a2-030e-4625-b36f-060da406de08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:33 np0005542249 nova_compute[254900]: 2025-12-02 11:29:33.032 254904 DEBUG oslo_concurrency.lockutils [req-0b26772e-ed44-4f78-8ee0-e06d749640be req-7a4873a4-5dd4-4261-ba53-51dd53d3ddda 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "c2b160a2-030e-4625-b36f-060da406de08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:33 np0005542249 nova_compute[254900]: 2025-12-02 11:29:33.032 254904 DEBUG nova.compute.manager [req-0b26772e-ed44-4f78-8ee0-e06d749640be req-7a4873a4-5dd4-4261-ba53-51dd53d3ddda 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] No waiting events found dispatching network-vif-plugged-cf68cb85-2ba9-4bcb-acd6-30526d17ca18 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:29:33 np0005542249 nova_compute[254900]: 2025-12-02 11:29:33.033 254904 WARNING nova.compute.manager [req-0b26772e-ed44-4f78-8ee0-e06d749640be req-7a4873a4-5dd4-4261-ba53-51dd53d3ddda 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Received unexpected event network-vif-plugged-cf68cb85-2ba9-4bcb-acd6-30526d17ca18 for instance with vm_state deleted and task_state None.#033[00m
Dec  2 06:29:33 np0005542249 nova_compute[254900]: 2025-12-02 11:29:33.034 254904 DEBUG nova.compute.manager [req-0b26772e-ed44-4f78-8ee0-e06d749640be req-7a4873a4-5dd4-4261-ba53-51dd53d3ddda 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c2b160a2-030e-4625-b36f-060da406de08] Received event network-vif-deleted-cf68cb85-2ba9-4bcb-acd6-30526d17ca18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:29:33 np0005542249 nova_compute[254900]: 2025-12-02 11:29:33.079 254904 DEBUG oslo_concurrency.processutils [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:33 np0005542249 ceph-osd[88961]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Dec  2 06:29:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:29:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1963552741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:29:33 np0005542249 nova_compute[254900]: 2025-12-02 11:29:33.776 254904 DEBUG oslo_concurrency.processutils [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.697s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:33 np0005542249 nova_compute[254900]: 2025-12-02 11:29:33.786 254904 DEBUG nova.compute.provider_tree [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:29:33 np0005542249 nova_compute[254900]: 2025-12-02 11:29:33.813 254904 DEBUG nova.scheduler.client.report [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:29:33 np0005542249 nova_compute[254900]: 2025-12-02 11:29:33.848 254904 DEBUG oslo_concurrency.lockutils [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:33 np0005542249 nova_compute[254900]: 2025-12-02 11:29:33.884 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:33 np0005542249 nova_compute[254900]: 2025-12-02 11:29:33.894 254904 INFO nova.scheduler.client.report [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Deleted allocations for instance c2b160a2-030e-4625-b36f-060da406de08#033[00m
Dec  2 06:29:33 np0005542249 nova_compute[254900]: 2025-12-02 11:29:33.934 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764674958.93357, 66196772-8110-4d36-bdfa-d36400059313 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:29:33 np0005542249 nova_compute[254900]: 2025-12-02 11:29:33.936 254904 INFO nova.compute.manager [-] [instance: 66196772-8110-4d36-bdfa-d36400059313] VM Stopped (Lifecycle Event)#033[00m
Dec  2 06:29:33 np0005542249 nova_compute[254900]: 2025-12-02 11:29:33.965 254904 DEBUG nova.compute.manager [None req-f5c9a0c1-a421-41c2-83d6-f78f104cbd75 - - - - - -] [instance: 66196772-8110-4d36-bdfa-d36400059313] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:29:33 np0005542249 nova_compute[254900]: 2025-12-02 11:29:33.987 254904 DEBUG oslo_concurrency.lockutils [None req-f4df56eb-1d81-4006-8dc3-31ac2df82bc2 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "c2b160a2-030e-4625-b36f-060da406de08" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:34 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1590: 321 pgs: 321 active+clean; 323 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 7.8 MiB/s wr, 192 op/s
Dec  2 06:29:35 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:35Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c4:74:f9 10.100.0.8
Dec  2 06:29:35 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:35Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c4:74:f9 10.100.0.8
Dec  2 06:29:35 np0005542249 nova_compute[254900]: 2025-12-02 11:29:35.912 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:29:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:29:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:29:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:29:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00047979189050595817 of space, bias 1.0, pg target 0.14393756715178746 quantized to 32 (current 32)
Dec  2 06:29:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:29:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002890197670443446 of space, bias 1.0, pg target 0.8670593011330339 quantized to 32 (current 32)
Dec  2 06:29:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:29:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  2 06:29:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:29:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Dec  2 06:29:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:29:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:29:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:29:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:29:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:29:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:29:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:29:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:29:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:29:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:29:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:29:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:29:36 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1591: 321 pgs: 321 active+clean; 323 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.4 MiB/s wr, 61 op/s
Dec  2 06:29:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:29:38 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1592: 321 pgs: 321 active+clean; 365 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.6 MiB/s wr, 130 op/s
Dec  2 06:29:38 np0005542249 nova_compute[254900]: 2025-12-02 11:29:38.888 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:40 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1593: 321 pgs: 321 active+clean; 384 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.0 MiB/s wr, 142 op/s
Dec  2 06:29:40 np0005542249 nova_compute[254900]: 2025-12-02 11:29:40.965 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:42 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1594: 321 pgs: 321 active+clean; 396 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.7 MiB/s wr, 143 op/s
Dec  2 06:29:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:29:43 np0005542249 nova_compute[254900]: 2025-12-02 11:29:43.115 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:29:43 np0005542249 nova_compute[254900]: 2025-12-02 11:29:43.115 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 06:29:43 np0005542249 nova_compute[254900]: 2025-12-02 11:29:43.529 254904 DEBUG oslo_concurrency.lockutils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "bdac108b-bf09-468a-9c93-c72b5128519b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:43 np0005542249 nova_compute[254900]: 2025-12-02 11:29:43.530 254904 DEBUG oslo_concurrency.lockutils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "bdac108b-bf09-468a-9c93-c72b5128519b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:43 np0005542249 nova_compute[254900]: 2025-12-02 11:29:43.549 254904 DEBUG nova.compute.manager [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 06:29:43 np0005542249 nova_compute[254900]: 2025-12-02 11:29:43.759 254904 DEBUG oslo_concurrency.lockutils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:43 np0005542249 nova_compute[254900]: 2025-12-02 11:29:43.760 254904 DEBUG oslo_concurrency.lockutils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:43 np0005542249 nova_compute[254900]: 2025-12-02 11:29:43.770 254904 DEBUG nova.virt.hardware [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 06:29:43 np0005542249 nova_compute[254900]: 2025-12-02 11:29:43.771 254904 INFO nova.compute.claims [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 06:29:43 np0005542249 nova_compute[254900]: 2025-12-02 11:29:43.890 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:43 np0005542249 nova_compute[254900]: 2025-12-02 11:29:43.988 254904 DEBUG oslo_concurrency.processutils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:44 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1595: 321 pgs: 321 active+clean; 396 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.1 MiB/s wr, 115 op/s
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.383 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.412 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.414 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:29:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:29:44 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3930838279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.465 254904 DEBUG oslo_concurrency.processutils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.472 254904 DEBUG nova.compute.provider_tree [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.494 254904 DEBUG nova.scheduler.client.report [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.528 254904 DEBUG oslo_concurrency.lockutils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.529 254904 DEBUG nova.compute.manager [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.591 254904 DEBUG nova.compute.manager [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.592 254904 DEBUG nova.network.neutron [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.630 254904 INFO nova.virt.libvirt.driver [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.663 254904 DEBUG nova.compute.manager [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.715 254904 INFO nova.virt.block_device [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Booting with volume 8385edec-40f0-49d0-85a2-65e771001e39 at /dev/vda#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.827 254904 DEBUG nova.policy [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6ccb73a613554d938221b4bf46d7ae83', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '625a6939c31646a4a83ea851774cf28c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.880 254904 DEBUG os_brick.utils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.881 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.898 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.899 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[ed1ce0f5-aa72-4f7f-bae9-1d06cc95e841]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.901 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.914 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.914 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[cccb403f-4a2b-49da-8f49-b597e96c55ff]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.917 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.933 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.933 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[bf0bbe26-1181-438e-90c7-103e0a9fbf50]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.935 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[9d4868b0-b455-4796-8c0d-9b40cd8d0228]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.936 254904 DEBUG oslo_concurrency.processutils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.978 254904 DEBUG oslo_concurrency.processutils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "nvme version" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.984 254904 DEBUG os_brick.initiator.connectors.lightos [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.984 254904 DEBUG os_brick.initiator.connectors.lightos [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.985 254904 DEBUG os_brick.initiator.connectors.lightos [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.985 254904 DEBUG os_brick.utils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] <== get_connector_properties: return (105ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:29:44 np0005542249 nova_compute[254900]: 2025-12-02 11:29:44.986 254904 DEBUG nova.virt.block_device [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Updating existing volume attachment record: d53cae65-cde4-4d4c-9302-98f3bbd31a0f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:29:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:29:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3759552648' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:29:45 np0005542249 nova_compute[254900]: 2025-12-02 11:29:45.875 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764674970.8750718, c2b160a2-030e-4625-b36f-060da406de08 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:29:45 np0005542249 nova_compute[254900]: 2025-12-02 11:29:45.876 254904 INFO nova.compute.manager [-] [instance: c2b160a2-030e-4625-b36f-060da406de08] VM Stopped (Lifecycle Event)#033[00m
Dec  2 06:29:45 np0005542249 nova_compute[254900]: 2025-12-02 11:29:45.916 254904 DEBUG nova.compute.manager [None req-10c1e991-1e0e-4f39-8160-59a30029933f - - - - - -] [instance: c2b160a2-030e-4625-b36f-060da406de08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:29:45 np0005542249 nova_compute[254900]: 2025-12-02 11:29:45.983 254904 DEBUG nova.compute.manager [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 06:29:45 np0005542249 nova_compute[254900]: 2025-12-02 11:29:45.985 254904 DEBUG nova.virt.libvirt.driver [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 06:29:45 np0005542249 nova_compute[254900]: 2025-12-02 11:29:45.986 254904 INFO nova.virt.libvirt.driver [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Creating image(s)#033[00m
Dec  2 06:29:45 np0005542249 nova_compute[254900]: 2025-12-02 11:29:45.986 254904 DEBUG nova.virt.libvirt.driver [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  2 06:29:45 np0005542249 nova_compute[254900]: 2025-12-02 11:29:45.987 254904 DEBUG nova.virt.libvirt.driver [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Ensure instance console log exists: /var/lib/nova/instances/bdac108b-bf09-468a-9c93-c72b5128519b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 06:29:45 np0005542249 nova_compute[254900]: 2025-12-02 11:29:45.987 254904 DEBUG oslo_concurrency.lockutils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:45 np0005542249 nova_compute[254900]: 2025-12-02 11:29:45.988 254904 DEBUG oslo_concurrency.lockutils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:45 np0005542249 nova_compute[254900]: 2025-12-02 11:29:45.988 254904 DEBUG oslo_concurrency.lockutils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:46 np0005542249 nova_compute[254900]: 2025-12-02 11:29:46.005 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:46 np0005542249 podman[288134]: 2025-12-02 11:29:46.039986602 +0000 UTC m=+0.107260666 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  2 06:29:46 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1596: 321 pgs: 321 active+clean; 396 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 182 KiB/s rd, 3.2 MiB/s wr, 83 op/s
Dec  2 06:29:46 np0005542249 nova_compute[254900]: 2025-12-02 11:29:46.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:29:47 np0005542249 nova_compute[254900]: 2025-12-02 11:29:47.377 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:29:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:29:48 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1597: 321 pgs: 321 active+clean; 396 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 182 KiB/s rd, 3.2 MiB/s wr, 83 op/s
Dec  2 06:29:48 np0005542249 nova_compute[254900]: 2025-12-02 11:29:48.612 254904 DEBUG nova.network.neutron [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Successfully created port: 88d722e2-fd4e-4803-b606-992ec0618074 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 06:29:48 np0005542249 nova_compute[254900]: 2025-12-02 11:29:48.897 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:49 np0005542249 nova_compute[254900]: 2025-12-02 11:29:49.829 254904 DEBUG nova.network.neutron [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Successfully updated port: 88d722e2-fd4e-4803-b606-992ec0618074 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 06:29:49 np0005542249 nova_compute[254900]: 2025-12-02 11:29:49.848 254904 DEBUG oslo_concurrency.lockutils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "refresh_cache-bdac108b-bf09-468a-9c93-c72b5128519b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:29:49 np0005542249 nova_compute[254900]: 2025-12-02 11:29:49.849 254904 DEBUG oslo_concurrency.lockutils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquired lock "refresh_cache-bdac108b-bf09-468a-9c93-c72b5128519b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:29:49 np0005542249 nova_compute[254900]: 2025-12-02 11:29:49.849 254904 DEBUG nova.network.neutron [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 06:29:49 np0005542249 nova_compute[254900]: 2025-12-02 11:29:49.897 254904 DEBUG oslo_concurrency.lockutils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:49 np0005542249 nova_compute[254900]: 2025-12-02 11:29:49.897 254904 DEBUG oslo_concurrency.lockutils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:49 np0005542249 nova_compute[254900]: 2025-12-02 11:29:49.917 254904 DEBUG nova.compute.manager [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 06:29:49 np0005542249 nova_compute[254900]: 2025-12-02 11:29:49.944 254904 DEBUG nova.compute.manager [req-0755cf0a-8b45-4813-beac-3b38463c8ee1 req-0e39d8d2-74d7-4882-a1c0-1015ba2cefe1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Received event network-changed-88d722e2-fd4e-4803-b606-992ec0618074 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:29:49 np0005542249 nova_compute[254900]: 2025-12-02 11:29:49.944 254904 DEBUG nova.compute.manager [req-0755cf0a-8b45-4813-beac-3b38463c8ee1 req-0e39d8d2-74d7-4882-a1c0-1015ba2cefe1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Refreshing instance network info cache due to event network-changed-88d722e2-fd4e-4803-b606-992ec0618074. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:29:49 np0005542249 nova_compute[254900]: 2025-12-02 11:29:49.945 254904 DEBUG oslo_concurrency.lockutils [req-0755cf0a-8b45-4813-beac-3b38463c8ee1 req-0e39d8d2-74d7-4882-a1c0-1015ba2cefe1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-bdac108b-bf09-468a-9c93-c72b5128519b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:29:49 np0005542249 nova_compute[254900]: 2025-12-02 11:29:49.983 254904 DEBUG oslo_concurrency.lockutils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:49 np0005542249 nova_compute[254900]: 2025-12-02 11:29:49.983 254904 DEBUG oslo_concurrency.lockutils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:49 np0005542249 nova_compute[254900]: 2025-12-02 11:29:49.991 254904 DEBUG nova.virt.hardware [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 06:29:49 np0005542249 nova_compute[254900]: 2025-12-02 11:29:49.991 254904 INFO nova.compute.claims [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.022 254904 DEBUG nova.network.neutron [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.156 254904 DEBUG oslo_concurrency.processutils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:50 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1598: 321 pgs: 321 active+clean; 396 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.1 MiB/s wr, 18 op/s
Dec  2 06:29:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:29:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3468874041' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:29:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:29:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3468874041' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.407 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:29:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/537591643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.577 254904 DEBUG oslo_concurrency.processutils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.585 254904 DEBUG nova.compute.provider_tree [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.610 254904 DEBUG nova.scheduler.client.report [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.633 254904 DEBUG oslo_concurrency.lockutils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.634 254904 DEBUG nova.compute.manager [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.638 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.639 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.639 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.640 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.714 254904 DEBUG nova.compute.manager [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.715 254904 DEBUG nova.network.neutron [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.736 254904 INFO nova.virt.libvirt.driver [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.764 254904 DEBUG nova.compute.manager [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.809 254904 INFO nova.virt.block_device [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Booting with volume 88f19573-e013-4d95-9327-f6a5bc06f0d0 at /dev/vda#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.904 254904 DEBUG nova.policy [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1caa62e7ee8b42be98bc34780a7197f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a893d0c223f746328e706d7491d73b20', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.950 254904 DEBUG os_brick.utils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.952 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.967 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.967 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[bc284629-280e-496c-b2e4-183adcb9e54f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.969 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.980 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.980 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[1b377c11-79c1-4316-9fe7-2b09d2b03c91]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.982 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.993 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.994 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[9a2a3e68-7e60-4bf8-92a4-5b2cb6ec7121]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.995 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[b4d8e29a-b3bf-41d4-bff1-32b9d162de47]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:50 np0005542249 nova_compute[254900]: 2025-12-02 11:29:50.995 254904 DEBUG oslo_concurrency.processutils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.027 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.031 254904 DEBUG oslo_concurrency.processutils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.033 254904 DEBUG os_brick.initiator.connectors.lightos [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.034 254904 DEBUG os_brick.initiator.connectors.lightos [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.035 254904 DEBUG os_brick.initiator.connectors.lightos [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.036 254904 DEBUG os_brick.utils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] <== get_connector_properties: return (84ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.036 254904 DEBUG nova.virt.block_device [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Updating existing volume attachment record: c2c10c59-3956-4052-9129-9eeedf9461d4 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:29:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:29:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1753994263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.070 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.096 254904 DEBUG nova.network.neutron [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Updating instance_info_cache with network_info: [{"id": "88d722e2-fd4e-4803-b606-992ec0618074", "address": "fa:16:3e:cb:2c:a8", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d722e2-fd", "ovs_interfaceid": "88d722e2-fd4e-4803-b606-992ec0618074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.138 254904 DEBUG oslo_concurrency.lockutils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Releasing lock "refresh_cache-bdac108b-bf09-468a-9c93-c72b5128519b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.138 254904 DEBUG nova.compute.manager [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Instance network_info: |[{"id": "88d722e2-fd4e-4803-b606-992ec0618074", "address": "fa:16:3e:cb:2c:a8", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d722e2-fd", "ovs_interfaceid": "88d722e2-fd4e-4803-b606-992ec0618074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.139 254904 DEBUG oslo_concurrency.lockutils [req-0755cf0a-8b45-4813-beac-3b38463c8ee1 req-0e39d8d2-74d7-4882-a1c0-1015ba2cefe1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-bdac108b-bf09-468a-9c93-c72b5128519b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.139 254904 DEBUG nova.network.neutron [req-0755cf0a-8b45-4813-beac-3b38463c8ee1 req-0e39d8d2-74d7-4882-a1c0-1015ba2cefe1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Refreshing network info cache for port 88d722e2-fd4e-4803-b606-992ec0618074 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.141 254904 DEBUG nova.virt.libvirt.driver [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Start _get_guest_xml network_info=[{"id": "88d722e2-fd4e-4803-b606-992ec0618074", "address": "fa:16:3e:cb:2c:a8", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d722e2-fd", "ovs_interfaceid": "88d722e2-fd4e-4803-b606-992ec0618074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-8385edec-40f0-49d0-85a2-65e771001e39', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '8385edec-40f0-49d0-85a2-65e771001e39', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'bdac108b-bf09-468a-9c93-c72b5128519b', 'attached_at': '', 'detached_at': '', 'volume_id': '8385edec-40f0-49d0-85a2-65e771001e39', 'serial': '8385edec-40f0-49d0-85a2-65e771001e39'}, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'attachment_id': 'd53cae65-cde4-4d4c-9302-98f3bbd31a0f', 'delete_on_termination': False, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.147 254904 WARNING nova.virt.libvirt.driver [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.152 254904 DEBUG nova.virt.libvirt.host [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.152 254904 DEBUG nova.virt.libvirt.host [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.159 254904 DEBUG nova.virt.libvirt.host [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.159 254904 DEBUG nova.virt.libvirt.host [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.160 254904 DEBUG nova.virt.libvirt.driver [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.160 254904 DEBUG nova.virt.hardware [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.160 254904 DEBUG nova.virt.hardware [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.160 254904 DEBUG nova.virt.hardware [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.160 254904 DEBUG nova.virt.hardware [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.161 254904 DEBUG nova.virt.hardware [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.161 254904 DEBUG nova.virt.hardware [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.161 254904 DEBUG nova.virt.hardware [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.161 254904 DEBUG nova.virt.hardware [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.161 254904 DEBUG nova.virt.hardware [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.162 254904 DEBUG nova.virt.hardware [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.162 254904 DEBUG nova.virt.hardware [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.186 254904 DEBUG nova.storage.rbd_utils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] rbd image bdac108b-bf09-468a-9c93-c72b5128519b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.191 254904 DEBUG oslo_concurrency.processutils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.217 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.218 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.410 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.411 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4179MB free_disk=59.942718505859375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.411 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.411 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.475 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Instance a8aab2b3-e5a2-451d-b77a-9d977f1dd00f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.475 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Instance bdac108b-bf09-468a-9c93-c72b5128519b actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.476 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Instance 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.476 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.476 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.564 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:29:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/816800523' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:29:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:29:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2039114183' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.629 254904 DEBUG oslo_concurrency.processutils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.656 254904 DEBUG nova.virt.libvirt.vif [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:29:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1542057907',display_name='tempest-TestVolumeBootPattern-server-1542057907',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1542057907',id=22,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDl9Chitcp+6ZZ9O/so1iQpbrg+ZOVWOrATMsWTbgaWcZg2lFiQK4KEUyaqp5+G/z2wPorJssN622GdMYPRLScxIeivbRrFeE5q310MfETTcDT4f8HB9OmcWcicW5ZF4QA==',key_name='tempest-TestVolumeBootPattern-765108891',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='625a6939c31646a4a83ea851774cf28c',ramdisk_id='',reservation_id='r-chlfp0yu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1396850361',owner_user_name='tempest-TestVolumeBootPattern-1396850361-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:29:44Z,user_data=None,user_id='6ccb73a613554d938221b4bf46d7ae83',uuid=bdac108b-bf09-468a-9c93-c72b5128519b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88d722e2-fd4e-4803-b606-992ec0618074", "address": "fa:16:3e:cb:2c:a8", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d722e2-fd", "ovs_interfaceid": "88d722e2-fd4e-4803-b606-992ec0618074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.656 254904 DEBUG nova.network.os_vif_util [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converting VIF {"id": "88d722e2-fd4e-4803-b606-992ec0618074", "address": "fa:16:3e:cb:2c:a8", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d722e2-fd", "ovs_interfaceid": "88d722e2-fd4e-4803-b606-992ec0618074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.658 254904 DEBUG nova.network.os_vif_util [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:2c:a8,bridge_name='br-int',has_traffic_filtering=True,id=88d722e2-fd4e-4803-b606-992ec0618074,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d722e2-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.659 254904 DEBUG nova.objects.instance [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lazy-loading 'pci_devices' on Instance uuid bdac108b-bf09-468a-9c93-c72b5128519b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.675 254904 DEBUG nova.virt.libvirt.driver [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:29:51 np0005542249 nova_compute[254900]:  <uuid>bdac108b-bf09-468a-9c93-c72b5128519b</uuid>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:  <name>instance-00000016</name>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <nova:name>tempest-TestVolumeBootPattern-server-1542057907</nova:name>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:29:51</nova:creationTime>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:29:51 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:        <nova:user uuid="6ccb73a613554d938221b4bf46d7ae83">tempest-TestVolumeBootPattern-1396850361-project-member</nova:user>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:        <nova:project uuid="625a6939c31646a4a83ea851774cf28c">tempest-TestVolumeBootPattern-1396850361</nova:project>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:        <nova:port uuid="88d722e2-fd4e-4803-b606-992ec0618074">
Dec  2 06:29:51 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <entry name="serial">bdac108b-bf09-468a-9c93-c72b5128519b</entry>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <entry name="uuid">bdac108b-bf09-468a-9c93-c72b5128519b</entry>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/bdac108b-bf09-468a-9c93-c72b5128519b_disk.config">
Dec  2 06:29:51 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:29:51 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="volumes/volume-8385edec-40f0-49d0-85a2-65e771001e39">
Dec  2 06:29:51 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:29:51 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <serial>8385edec-40f0-49d0-85a2-65e771001e39</serial>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:cb:2c:a8"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <target dev="tap88d722e2-fd"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/bdac108b-bf09-468a-9c93-c72b5128519b/console.log" append="off"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:29:51 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:29:51 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:29:51 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:29:51 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.677 254904 DEBUG nova.compute.manager [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Preparing to wait for external event network-vif-plugged-88d722e2-fd4e-4803-b606-992ec0618074 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.677 254904 DEBUG oslo_concurrency.lockutils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "bdac108b-bf09-468a-9c93-c72b5128519b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.677 254904 DEBUG oslo_concurrency.lockutils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "bdac108b-bf09-468a-9c93-c72b5128519b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.677 254904 DEBUG oslo_concurrency.lockutils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "bdac108b-bf09-468a-9c93-c72b5128519b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.678 254904 DEBUG nova.virt.libvirt.vif [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:29:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1542057907',display_name='tempest-TestVolumeBootPattern-server-1542057907',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1542057907',id=22,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDl9Chitcp+6ZZ9O/so1iQpbrg+ZOVWOrATMsWTbgaWcZg2lFiQK4KEUyaqp5+G/z2wPorJssN622GdMYPRLScxIeivbRrFeE5q310MfETTcDT4f8HB9OmcWcicW5ZF4QA==',key_name='tempest-TestVolumeBootPattern-765108891',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='625a6939c31646a4a83ea851774cf28c',ramdisk_id='',reservation_id='r-chlfp0yu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1396850361',owner_user_name='tempest-TestVolumeBootPattern-1396850361-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:29:44Z,user_data=None,user_id='6ccb73a613554d938221b4bf46d7ae83',uuid=bdac108b-bf09-468a-9c93-c72b5128519b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88d722e2-fd4e-4803-b606-992ec0618074", "address": "fa:16:3e:cb:2c:a8", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d722e2-fd", "ovs_interfaceid": "88d722e2-fd4e-4803-b606-992ec0618074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.678 254904 DEBUG nova.network.os_vif_util [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converting VIF {"id": "88d722e2-fd4e-4803-b606-992ec0618074", "address": "fa:16:3e:cb:2c:a8", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d722e2-fd", "ovs_interfaceid": "88d722e2-fd4e-4803-b606-992ec0618074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.679 254904 DEBUG nova.network.os_vif_util [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:2c:a8,bridge_name='br-int',has_traffic_filtering=True,id=88d722e2-fd4e-4803-b606-992ec0618074,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d722e2-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.679 254904 DEBUG os_vif [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:2c:a8,bridge_name='br-int',has_traffic_filtering=True,id=88d722e2-fd4e-4803-b606-992ec0618074,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d722e2-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.683 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.683 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.683 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.686 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.686 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88d722e2-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.687 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap88d722e2-fd, col_values=(('external_ids', {'iface-id': '88d722e2-fd4e-4803-b606-992ec0618074', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:2c:a8', 'vm-uuid': 'bdac108b-bf09-468a-9c93-c72b5128519b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.688 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:51 np0005542249 NetworkManager[48987]: <info>  [1764674991.6902] manager: (tap88d722e2-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/113)
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.691 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.695 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.696 254904 INFO os_vif [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:2c:a8,bridge_name='br-int',has_traffic_filtering=True,id=88d722e2-fd4e-4803-b606-992ec0618074,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d722e2-fd')#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.761 254904 DEBUG nova.virt.libvirt.driver [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.762 254904 DEBUG nova.virt.libvirt.driver [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.762 254904 DEBUG nova.virt.libvirt.driver [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] No VIF found with MAC fa:16:3e:cb:2c:a8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.762 254904 INFO nova.virt.libvirt.driver [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Using config drive#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.781 254904 DEBUG nova.storage.rbd_utils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] rbd image bdac108b-bf09-468a-9c93-c72b5128519b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.854 254904 DEBUG nova.network.neutron [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Successfully created port: 0d534c62-22f0-40f5-b1f1-48ae2645c541 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.956 254904 DEBUG nova.compute.manager [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.957 254904 DEBUG nova.virt.libvirt.driver [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.958 254904 INFO nova.virt.libvirt.driver [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Creating image(s)#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.958 254904 DEBUG nova.virt.libvirt.driver [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.959 254904 DEBUG nova.virt.libvirt.driver [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Ensure instance console log exists: /var/lib/nova/instances/2b4b4b76-ca4f-438b-a3e5-c5d4b3583290/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.959 254904 DEBUG oslo_concurrency.lockutils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.960 254904 DEBUG oslo_concurrency.lockutils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.960 254904 DEBUG oslo_concurrency.lockutils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:29:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3957506361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.985 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:51 np0005542249 nova_compute[254900]: 2025-12-02 11:29:51.991 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:29:52 np0005542249 nova_compute[254900]: 2025-12-02 11:29:52.007 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:29:52 np0005542249 nova_compute[254900]: 2025-12-02 11:29:52.033 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 06:29:52 np0005542249 nova_compute[254900]: 2025-12-02 11:29:52.033 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:52 np0005542249 nova_compute[254900]: 2025-12-02 11:29:52.034 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:29:52 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1599: 321 pgs: 321 active+clean; 396 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 632 KiB/s wr, 2 op/s
Dec  2 06:29:52 np0005542249 nova_compute[254900]: 2025-12-02 11:29:52.340 254904 INFO nova.virt.libvirt.driver [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Creating config drive at /var/lib/nova/instances/bdac108b-bf09-468a-9c93-c72b5128519b/disk.config#033[00m
Dec  2 06:29:52 np0005542249 nova_compute[254900]: 2025-12-02 11:29:52.344 254904 DEBUG oslo_concurrency.processutils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bdac108b-bf09-468a-9c93-c72b5128519b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyk_tda1x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:52 np0005542249 nova_compute[254900]: 2025-12-02 11:29:52.471 254904 DEBUG oslo_concurrency.processutils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bdac108b-bf09-468a-9c93-c72b5128519b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyk_tda1x" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:52 np0005542249 nova_compute[254900]: 2025-12-02 11:29:52.510 254904 DEBUG nova.storage.rbd_utils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] rbd image bdac108b-bf09-468a-9c93-c72b5128519b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:29:52 np0005542249 nova_compute[254900]: 2025-12-02 11:29:52.516 254904 DEBUG oslo_concurrency.processutils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bdac108b-bf09-468a-9c93-c72b5128519b/disk.config bdac108b-bf09-468a-9c93-c72b5128519b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:52 np0005542249 nova_compute[254900]: 2025-12-02 11:29:52.544 254904 DEBUG nova.network.neutron [req-0755cf0a-8b45-4813-beac-3b38463c8ee1 req-0e39d8d2-74d7-4882-a1c0-1015ba2cefe1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Updated VIF entry in instance network info cache for port 88d722e2-fd4e-4803-b606-992ec0618074. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:29:52 np0005542249 nova_compute[254900]: 2025-12-02 11:29:52.546 254904 DEBUG nova.network.neutron [req-0755cf0a-8b45-4813-beac-3b38463c8ee1 req-0e39d8d2-74d7-4882-a1c0-1015ba2cefe1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Updating instance_info_cache with network_info: [{"id": "88d722e2-fd4e-4803-b606-992ec0618074", "address": "fa:16:3e:cb:2c:a8", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d722e2-fd", "ovs_interfaceid": "88d722e2-fd4e-4803-b606-992ec0618074", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:29:52 np0005542249 nova_compute[254900]: 2025-12-02 11:29:52.557 254904 DEBUG nova.network.neutron [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Successfully updated port: 0d534c62-22f0-40f5-b1f1-48ae2645c541 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 06:29:52 np0005542249 nova_compute[254900]: 2025-12-02 11:29:52.573 254904 DEBUG oslo_concurrency.lockutils [req-0755cf0a-8b45-4813-beac-3b38463c8ee1 req-0e39d8d2-74d7-4882-a1c0-1015ba2cefe1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-bdac108b-bf09-468a-9c93-c72b5128519b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:29:52 np0005542249 nova_compute[254900]: 2025-12-02 11:29:52.576 254904 DEBUG oslo_concurrency.lockutils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "refresh_cache-2b4b4b76-ca4f-438b-a3e5-c5d4b3583290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:29:52 np0005542249 nova_compute[254900]: 2025-12-02 11:29:52.577 254904 DEBUG oslo_concurrency.lockutils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquired lock "refresh_cache-2b4b4b76-ca4f-438b-a3e5-c5d4b3583290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:29:52 np0005542249 nova_compute[254900]: 2025-12-02 11:29:52.577 254904 DEBUG nova.network.neutron [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 06:29:52 np0005542249 nova_compute[254900]: 2025-12-02 11:29:52.674 254904 DEBUG oslo_concurrency.processutils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bdac108b-bf09-468a-9c93-c72b5128519b/disk.config bdac108b-bf09-468a-9c93-c72b5128519b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:52 np0005542249 nova_compute[254900]: 2025-12-02 11:29:52.675 254904 INFO nova.virt.libvirt.driver [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Deleting local config drive /var/lib/nova/instances/bdac108b-bf09-468a-9c93-c72b5128519b/disk.config because it was imported into RBD.#033[00m
Dec  2 06:29:52 np0005542249 nova_compute[254900]: 2025-12-02 11:29:52.698 254904 DEBUG nova.compute.manager [req-58167ff2-7236-4a1b-b23a-f2fd6bb86c93 req-102a2301-8d74-4e60-bb6a-127c37cafb36 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Received event network-changed-0d534c62-22f0-40f5-b1f1-48ae2645c541 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:29:52 np0005542249 nova_compute[254900]: 2025-12-02 11:29:52.698 254904 DEBUG nova.compute.manager [req-58167ff2-7236-4a1b-b23a-f2fd6bb86c93 req-102a2301-8d74-4e60-bb6a-127c37cafb36 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Refreshing instance network info cache due to event network-changed-0d534c62-22f0-40f5-b1f1-48ae2645c541. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:29:52 np0005542249 nova_compute[254900]: 2025-12-02 11:29:52.699 254904 DEBUG oslo_concurrency.lockutils [req-58167ff2-7236-4a1b-b23a-f2fd6bb86c93 req-102a2301-8d74-4e60-bb6a-127c37cafb36 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-2b4b4b76-ca4f-438b-a3e5-c5d4b3583290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:29:52 np0005542249 kernel: tap88d722e2-fd: entered promiscuous mode
Dec  2 06:29:52 np0005542249 NetworkManager[48987]: <info>  [1764674992.7364] manager: (tap88d722e2-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/114)
Dec  2 06:29:52 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:52Z|00206|binding|INFO|Claiming lport 88d722e2-fd4e-4803-b606-992ec0618074 for this chassis.
Dec  2 06:29:52 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:52Z|00207|binding|INFO|88d722e2-fd4e-4803-b606-992ec0618074: Claiming fa:16:3e:cb:2c:a8 10.100.0.3
Dec  2 06:29:52 np0005542249 nova_compute[254900]: 2025-12-02 11:29:52.736 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:52 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:52.745 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:2c:a8 10.100.0.3'], port_security=['fa:16:3e:cb:2c:a8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'bdac108b-bf09-468a-9c93-c72b5128519b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '625a6939c31646a4a83ea851774cf28c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bf93629e-6336-4a9c-a41d-6ce19e6b6662', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cc823ef0-3e69-4062-a488-a82f483f10bb, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=88d722e2-fd4e-4803-b606-992ec0618074) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:29:52 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:52.746 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 88d722e2-fd4e-4803-b606-992ec0618074 in datapath acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 bound to our chassis#033[00m
Dec  2 06:29:52 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:52.748 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754#033[00m
Dec  2 06:29:52 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:52Z|00208|binding|INFO|Setting lport 88d722e2-fd4e-4803-b606-992ec0618074 ovn-installed in OVS
Dec  2 06:29:52 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:52Z|00209|binding|INFO|Setting lport 88d722e2-fd4e-4803-b606-992ec0618074 up in Southbound
Dec  2 06:29:52 np0005542249 nova_compute[254900]: 2025-12-02 11:29:52.764 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:52 np0005542249 nova_compute[254900]: 2025-12-02 11:29:52.767 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:52 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:52.771 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[59943248-2075-47b8-9d93-ec4ea714785d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:52 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:52.772 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapacfaa8ac-01 in ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 06:29:52 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:52.773 262398 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapacfaa8ac-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 06:29:52 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:52.773 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[0884b2e7-bf6f-41e2-a863-6a011c8082e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:52 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:52.775 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee42b51-f872-4bfd-a8dc-f2171715ffd3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:52 np0005542249 systemd-udevd[288343]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:29:52 np0005542249 systemd-machined[216222]: New machine qemu-22-instance-00000016.
Dec  2 06:29:52 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:52.794 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[1e8efaae-edc5-4c76-9767-085f00991efa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:52 np0005542249 NetworkManager[48987]: <info>  [1764674992.7987] device (tap88d722e2-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:29:52 np0005542249 systemd[1]: Started Virtual Machine qemu-22-instance-00000016.
Dec  2 06:29:52 np0005542249 NetworkManager[48987]: <info>  [1764674992.7998] device (tap88d722e2-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:29:52 np0005542249 nova_compute[254900]: 2025-12-02 11:29:52.811 254904 DEBUG nova.network.neutron [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 06:29:52 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:52.821 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[5aa5db6c-200c-42de-8594-46e30c5f69ee]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:29:52 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:52.855 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[066a18f0-81df-41c8-a0dd-5734f064611a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:52 np0005542249 NetworkManager[48987]: <info>  [1764674992.8616] manager: (tapacfaa8ac-00): new Veth device (/org/freedesktop/NetworkManager/Devices/115)
Dec  2 06:29:52 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:52.861 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[d30212ad-9411-406d-a14b-2cbd726e4ff0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:52 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:52.889 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[9ecde976-d3eb-4aba-b308-a8dcc841a208]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:52 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:52.892 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[505f3dd9-b5f6-46eb-b319-0127d73b1d0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:52 np0005542249 NetworkManager[48987]: <info>  [1764674992.9121] device (tapacfaa8ac-00): carrier: link connected
Dec  2 06:29:52 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:52.919 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[9f822fd5-4226-420e-a584-96d425c8ee49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:52 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:52.936 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[dea533a3-28b3-442f-9bd0-07e8a97bf6b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacfaa8ac-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:73:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518573, 'reachable_time': 17403, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288375, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:52 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:52.962 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[33b8dbc5-b721-49be-a733-b884daaf832b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fece:73a3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 518573, 'tstamp': 518573}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288376, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:52 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:52.983 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[7a54eeaf-d2c0-4031-9084-6f6d5083698c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacfaa8ac-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:73:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518573, 'reachable_time': 17403, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288377, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:53.028 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[05ef66d1-a996-4c0e-b8fa-cc2c647e4fd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.042 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.042 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:53.111 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[8671aad5-fd56-4437-a83b-538b3c6b0220]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:53.113 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacfaa8ac-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:53.113 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:53.114 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapacfaa8ac-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.116 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:53 np0005542249 kernel: tapacfaa8ac-00: entered promiscuous mode
Dec  2 06:29:53 np0005542249 NetworkManager[48987]: <info>  [1764674993.1170] manager: (tapacfaa8ac-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:53.119 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapacfaa8ac-00, col_values=(('external_ids', {'iface-id': '1636ad30-406d-4138-823e-abbe7f4d87ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:53 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:53Z|00210|binding|INFO|Releasing lport 1636ad30-406d-4138-823e-abbe7f4d87ac from this chassis (sb_readonly=0)
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:53.144 163757 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.143 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:53.146 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[6abd7aa0-b7de-4d25-b918-8f064da041c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:53.147 163757 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]: global
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]:    log         /dev/log local0 debug
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]:    log-tag     haproxy-metadata-proxy-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]:    user        root
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]:    group       root
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]:    maxconn     1024
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]:    pidfile     /var/lib/neutron/external/pids/acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754.pid.haproxy
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]:    daemon
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]: defaults
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]:    log global
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]:    mode http
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]:    option httplog
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]:    option dontlognull
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]:    option http-server-close
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]:    option forwardfor
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]:    retries                 3
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]:    timeout http-request    30s
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]:    timeout connect         30s
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]:    timeout client          32s
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]:    timeout server          32s
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]:    timeout http-keep-alive 30s
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]: listen listener
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]:    bind 169.254.169.254:80
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]:    http-request add-header X-OVN-Network-ID acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 06:29:53 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:53.148 163757 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'env', 'PROCESS_TAG=haproxy-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.191 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674993.1912842, bdac108b-bf09-468a-9c93-c72b5128519b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.192 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] VM Started (Lifecycle Event)#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.213 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.217 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674993.1935313, bdac108b-bf09-468a-9c93-c72b5128519b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.217 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] VM Paused (Lifecycle Event)#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.235 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.239 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.260 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:29:53 np0005542249 podman[288451]: 2025-12-02 11:29:53.572758515 +0000 UTC m=+0.071407465 container create 94996d27b6399a1f056d438a20a154ae45dbc34084e49f4a4166e4d8e67f4572 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  2 06:29:53 np0005542249 systemd[1]: Started libpod-conmon-94996d27b6399a1f056d438a20a154ae45dbc34084e49f4a4166e4d8e67f4572.scope.
Dec  2 06:29:53 np0005542249 podman[288451]: 2025-12-02 11:29:53.535962688 +0000 UTC m=+0.034611638 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:29:53 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:29:53 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/471a968f155a9bb1a9de02c5d89bdb5057a837f5d745c535a690b463bdf887a1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 06:29:53 np0005542249 podman[288451]: 2025-12-02 11:29:53.683458602 +0000 UTC m=+0.182107592 container init 94996d27b6399a1f056d438a20a154ae45dbc34084e49f4a4166e4d8e67f4572 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:29:53 np0005542249 podman[288451]: 2025-12-02 11:29:53.694058598 +0000 UTC m=+0.192707538 container start 94996d27b6399a1f056d438a20a154ae45dbc34084e49f4a4166e4d8e67f4572 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec  2 06:29:53 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[288467]: [NOTICE]   (288471) : New worker (288473) forked
Dec  2 06:29:53 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[288467]: [NOTICE]   (288471) : Loading success.
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.862 254904 DEBUG nova.network.neutron [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Updating instance_info_cache with network_info: [{"id": "0d534c62-22f0-40f5-b1f1-48ae2645c541", "address": "fa:16:3e:9b:43:9e", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d534c62-22", "ovs_interfaceid": "0d534c62-22f0-40f5-b1f1-48ae2645c541", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.877 254904 DEBUG oslo_concurrency.lockutils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Releasing lock "refresh_cache-2b4b4b76-ca4f-438b-a3e5-c5d4b3583290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.878 254904 DEBUG nova.compute.manager [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Instance network_info: |[{"id": "0d534c62-22f0-40f5-b1f1-48ae2645c541", "address": "fa:16:3e:9b:43:9e", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d534c62-22", "ovs_interfaceid": "0d534c62-22f0-40f5-b1f1-48ae2645c541", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.878 254904 DEBUG oslo_concurrency.lockutils [req-58167ff2-7236-4a1b-b23a-f2fd6bb86c93 req-102a2301-8d74-4e60-bb6a-127c37cafb36 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-2b4b4b76-ca4f-438b-a3e5-c5d4b3583290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.879 254904 DEBUG nova.network.neutron [req-58167ff2-7236-4a1b-b23a-f2fd6bb86c93 req-102a2301-8d74-4e60-bb6a-127c37cafb36 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Refreshing network info cache for port 0d534c62-22f0-40f5-b1f1-48ae2645c541 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.884 254904 DEBUG nova.virt.libvirt.driver [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Start _get_guest_xml network_info=[{"id": "0d534c62-22f0-40f5-b1f1-48ae2645c541", "address": "fa:16:3e:9b:43:9e", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d534c62-22", "ovs_interfaceid": "0d534c62-22f0-40f5-b1f1-48ae2645c541", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-88f19573-e013-4d95-9327-f6a5bc06f0d0', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '88f19573-e013-4d95-9327-f6a5bc06f0d0', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '2b4b4b76-ca4f-438b-a3e5-c5d4b3583290', 'attached_at': '', 'detached_at': '', 'volume_id': '88f19573-e013-4d95-9327-f6a5bc06f0d0', 'serial': '88f19573-e013-4d95-9327-f6a5bc06f0d0'}, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'attachment_id': 'c2c10c59-3956-4052-9129-9eeedf9461d4', 'delete_on_termination': False, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.891 254904 WARNING nova.virt.libvirt.driver [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.896 254904 DEBUG nova.virt.libvirt.host [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.897 254904 DEBUG nova.virt.libvirt.host [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.905 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.912 254904 DEBUG nova.virt.libvirt.host [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.913 254904 DEBUG nova.virt.libvirt.host [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.913 254904 DEBUG nova.virt.libvirt.driver [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.914 254904 DEBUG nova.virt.hardware [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.915 254904 DEBUG nova.virt.hardware [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.915 254904 DEBUG nova.virt.hardware [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.915 254904 DEBUG nova.virt.hardware [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.916 254904 DEBUG nova.virt.hardware [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.916 254904 DEBUG nova.virt.hardware [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.917 254904 DEBUG nova.virt.hardware [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.917 254904 DEBUG nova.virt.hardware [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.918 254904 DEBUG nova.virt.hardware [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.918 254904 DEBUG nova.virt.hardware [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.918 254904 DEBUG nova.virt.hardware [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.953 254904 DEBUG nova.storage.rbd_utils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] rbd image 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:29:53 np0005542249 nova_compute[254900]: 2025-12-02 11:29:53.958 254904 DEBUG oslo_concurrency.processutils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:54 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1600: 321 pgs: 321 active+clean; 396 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 26 KiB/s wr, 4 op/s
Dec  2 06:29:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:29:54 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3896776203' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.508 254904 DEBUG oslo_concurrency.processutils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.684 254904 DEBUG os_brick.encryptors [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Using volume encryption metadata '{'encryption_key_id': 'fef5d5c6-61a9-4941-8511-3b3d70f33d7b', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-88f19573-e013-4d95-9327-f6a5bc06f0d0', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '88f19573-e013-4d95-9327-f6a5bc06f0d0', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '2b4b4b76-ca4f-438b-a3e5-c5d4b3583290', 'attached_at': '', 'detached_at': '', 'volume_id': '88f19573-e013-4d95-9327-f6a5bc06f0d0', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.688 254904 DEBUG barbicanclient.client [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.709 254904 DEBUG barbicanclient.v1.secrets [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/fef5d5c6-61a9-4941-8511-3b3d70f33d7b get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.710 254904 INFO barbicanclient.base [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/fef5d5c6-61a9-4941-8511-3b3d70f33d7b#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.735 254904 DEBUG barbicanclient.client [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.736 254904 INFO barbicanclient.base [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/fef5d5c6-61a9-4941-8511-3b3d70f33d7b#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.763 254904 DEBUG barbicanclient.client [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.763 254904 INFO barbicanclient.base [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/fef5d5c6-61a9-4941-8511-3b3d70f33d7b#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.795 254904 DEBUG barbicanclient.client [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.796 254904 INFO barbicanclient.base [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/fef5d5c6-61a9-4941-8511-3b3d70f33d7b#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.805 254904 DEBUG nova.compute.manager [req-4844cb78-b95a-42eb-be32-41984db14e44 req-39958901-888f-4322-9301-5878978f7264 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Received event network-vif-plugged-88d722e2-fd4e-4803-b606-992ec0618074 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.806 254904 DEBUG oslo_concurrency.lockutils [req-4844cb78-b95a-42eb-be32-41984db14e44 req-39958901-888f-4322-9301-5878978f7264 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "bdac108b-bf09-468a-9c93-c72b5128519b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.806 254904 DEBUG oslo_concurrency.lockutils [req-4844cb78-b95a-42eb-be32-41984db14e44 req-39958901-888f-4322-9301-5878978f7264 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "bdac108b-bf09-468a-9c93-c72b5128519b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.807 254904 DEBUG oslo_concurrency.lockutils [req-4844cb78-b95a-42eb-be32-41984db14e44 req-39958901-888f-4322-9301-5878978f7264 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "bdac108b-bf09-468a-9c93-c72b5128519b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.807 254904 DEBUG nova.compute.manager [req-4844cb78-b95a-42eb-be32-41984db14e44 req-39958901-888f-4322-9301-5878978f7264 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Processing event network-vif-plugged-88d722e2-fd4e-4803-b606-992ec0618074 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.807 254904 DEBUG nova.compute.manager [req-4844cb78-b95a-42eb-be32-41984db14e44 req-39958901-888f-4322-9301-5878978f7264 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Received event network-vif-plugged-88d722e2-fd4e-4803-b606-992ec0618074 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.808 254904 DEBUG oslo_concurrency.lockutils [req-4844cb78-b95a-42eb-be32-41984db14e44 req-39958901-888f-4322-9301-5878978f7264 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "bdac108b-bf09-468a-9c93-c72b5128519b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.808 254904 DEBUG oslo_concurrency.lockutils [req-4844cb78-b95a-42eb-be32-41984db14e44 req-39958901-888f-4322-9301-5878978f7264 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "bdac108b-bf09-468a-9c93-c72b5128519b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.808 254904 DEBUG oslo_concurrency.lockutils [req-4844cb78-b95a-42eb-be32-41984db14e44 req-39958901-888f-4322-9301-5878978f7264 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "bdac108b-bf09-468a-9c93-c72b5128519b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.809 254904 DEBUG nova.compute.manager [req-4844cb78-b95a-42eb-be32-41984db14e44 req-39958901-888f-4322-9301-5878978f7264 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] No waiting events found dispatching network-vif-plugged-88d722e2-fd4e-4803-b606-992ec0618074 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.809 254904 WARNING nova.compute.manager [req-4844cb78-b95a-42eb-be32-41984db14e44 req-39958901-888f-4322-9301-5878978f7264 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Received unexpected event network-vif-plugged-88d722e2-fd4e-4803-b606-992ec0618074 for instance with vm_state building and task_state spawning.#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.810 254904 DEBUG nova.compute.manager [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.819 254904 DEBUG barbicanclient.client [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.820 254904 INFO barbicanclient.base [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/fef5d5c6-61a9-4941-8511-3b3d70f33d7b#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.821 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764674994.8180876, bdac108b-bf09-468a-9c93-c72b5128519b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.821 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] VM Resumed (Lifecycle Event)#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.823 254904 DEBUG nova.virt.libvirt.driver [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.826 254904 INFO nova.virt.libvirt.driver [-] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Instance spawned successfully.#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.826 254904 DEBUG nova.virt.libvirt.driver [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.842 254904 DEBUG barbicanclient.client [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.843 254904 INFO barbicanclient.base [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/fef5d5c6-61a9-4941-8511-3b3d70f33d7b#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.848 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.856 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.861 254904 DEBUG nova.virt.libvirt.driver [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.862 254904 DEBUG nova.virt.libvirt.driver [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.863 254904 DEBUG nova.virt.libvirt.driver [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.863 254904 DEBUG nova.virt.libvirt.driver [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.864 254904 DEBUG nova.virt.libvirt.driver [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.865 254904 DEBUG nova.virt.libvirt.driver [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.880 254904 DEBUG barbicanclient.client [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.880 254904 INFO barbicanclient.base [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/fef5d5c6-61a9-4941-8511-3b3d70f33d7b#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.891 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.906 254904 DEBUG barbicanclient.client [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.906 254904 INFO barbicanclient.base [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/fef5d5c6-61a9-4941-8511-3b3d70f33d7b#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.917 254904 INFO nova.compute.manager [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Took 8.93 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 06:29:54 np0005542249 nova_compute[254900]: 2025-12-02 11:29:54.917 254904 DEBUG nova.compute.manager [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.050 254904 DEBUG barbicanclient.client [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.050 254904 INFO barbicanclient.base [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/fef5d5c6-61a9-4941-8511-3b3d70f33d7b#033[00m
Dec  2 06:29:55 np0005542249 podman[288522]: 2025-12-02 11:29:55.069948893 +0000 UTC m=+0.136505937 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.075 254904 DEBUG barbicanclient.client [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.075 254904 INFO barbicanclient.base [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/fef5d5c6-61a9-4941-8511-3b3d70f33d7b#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.098 254904 DEBUG barbicanclient.client [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.098 254904 INFO barbicanclient.base [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/fef5d5c6-61a9-4941-8511-3b3d70f33d7b#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.115 254904 INFO nova.compute.manager [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Took 11.51 seconds to build instance.#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.119 254904 DEBUG barbicanclient.client [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.120 254904 INFO barbicanclient.base [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/fef5d5c6-61a9-4941-8511-3b3d70f33d7b#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.134 254904 DEBUG oslo_concurrency.lockutils [None req-5b30d3a8-9ee4-44bb-b525-4906346c9f6c 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "bdac108b-bf09-468a-9c93-c72b5128519b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.142 254904 DEBUG barbicanclient.client [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.142 254904 INFO barbicanclient.base [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/fef5d5c6-61a9-4941-8511-3b3d70f33d7b#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.161 254904 DEBUG barbicanclient.client [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.161 254904 INFO barbicanclient.base [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/fef5d5c6-61a9-4941-8511-3b3d70f33d7b#033[00m
Dec  2 06:29:55 np0005542249 podman[288547]: 2025-12-02 11:29:55.172263853 +0000 UTC m=+0.074466947 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.195 254904 DEBUG barbicanclient.client [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.196 254904 INFO barbicanclient.base [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/fef5d5c6-61a9-4941-8511-3b3d70f33d7b#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.198 254904 DEBUG nova.network.neutron [req-58167ff2-7236-4a1b-b23a-f2fd6bb86c93 req-102a2301-8d74-4e60-bb6a-127c37cafb36 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Updated VIF entry in instance network info cache for port 0d534c62-22f0-40f5-b1f1-48ae2645c541. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.199 254904 DEBUG nova.network.neutron [req-58167ff2-7236-4a1b-b23a-f2fd6bb86c93 req-102a2301-8d74-4e60-bb6a-127c37cafb36 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Updating instance_info_cache with network_info: [{"id": "0d534c62-22f0-40f5-b1f1-48ae2645c541", "address": "fa:16:3e:9b:43:9e", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d534c62-22", "ovs_interfaceid": "0d534c62-22f0-40f5-b1f1-48ae2645c541", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.215 254904 DEBUG oslo_concurrency.lockutils [req-58167ff2-7236-4a1b-b23a-f2fd6bb86c93 req-102a2301-8d74-4e60-bb6a-127c37cafb36 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-2b4b4b76-ca4f-438b-a3e5-c5d4b3583290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.221 254904 DEBUG barbicanclient.client [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.221 254904 DEBUG nova.virt.libvirt.host [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Secret XML: <secret ephemeral="no" private="no">
Dec  2 06:29:55 np0005542249 nova_compute[254900]:  <usage type="volume">
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <volume>88f19573-e013-4d95-9327-f6a5bc06f0d0</volume>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:  </usage>
Dec  2 06:29:55 np0005542249 nova_compute[254900]: </secret>
Dec  2 06:29:55 np0005542249 nova_compute[254900]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.251 254904 DEBUG nova.virt.libvirt.vif [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:29:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-377789930',display_name='tempest-TransferEncryptedVolumeTest-server-377789930',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-377789930',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBPd/5CNJJJVCM7bF71nyziMenMlWa5ulXBeejobfPYAVvOOigTWuMR262ZOPGYLdqIJzc7AMWApUvqaDK/XzxMpH8d3L2DeOAIkexGDCsfnTgIhEJIcaLGeYmajYRiu/w==',key_name='tempest-TransferEncryptedVolumeTest-1560588090',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a893d0c223f746328e706d7491d73b20',ramdisk_id='',reservation_id='r-pprvpknh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1499588457',owner_user_name='tempest-TransferEncryptedVolumeTest-1499588457-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:29:50Z,user_data=None,user_id='1caa62e7ee8b42be98bc34780a7197f9',uuid=2b4b4b76-ca4f-438b-a3e5-c5d4b3583290,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d534c62-22f0-40f5-b1f1-48ae2645c541", "address": "fa:16:3e:9b:43:9e", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d534c62-22", "ovs_interfaceid": "0d534c62-22f0-40f5-b1f1-48ae2645c541", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.251 254904 DEBUG nova.network.os_vif_util [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Converting VIF {"id": "0d534c62-22f0-40f5-b1f1-48ae2645c541", "address": "fa:16:3e:9b:43:9e", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d534c62-22", "ovs_interfaceid": "0d534c62-22f0-40f5-b1f1-48ae2645c541", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.252 254904 DEBUG nova.network.os_vif_util [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:43:9e,bridge_name='br-int',has_traffic_filtering=True,id=0d534c62-22f0-40f5-b1f1-48ae2645c541,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d534c62-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.254 254904 DEBUG nova.objects.instance [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.269 254904 DEBUG nova.virt.libvirt.driver [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:29:55 np0005542249 nova_compute[254900]:  <uuid>2b4b4b76-ca4f-438b-a3e5-c5d4b3583290</uuid>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:  <name>instance-00000017</name>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-377789930</nova:name>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:29:53</nova:creationTime>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:29:55 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:        <nova:user uuid="1caa62e7ee8b42be98bc34780a7197f9">tempest-TransferEncryptedVolumeTest-1499588457-project-member</nova:user>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:        <nova:project uuid="a893d0c223f746328e706d7491d73b20">tempest-TransferEncryptedVolumeTest-1499588457</nova:project>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:        <nova:port uuid="0d534c62-22f0-40f5-b1f1-48ae2645c541">
Dec  2 06:29:55 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <entry name="serial">2b4b4b76-ca4f-438b-a3e5-c5d4b3583290</entry>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <entry name="uuid">2b4b4b76-ca4f-438b-a3e5-c5d4b3583290</entry>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/2b4b4b76-ca4f-438b-a3e5-c5d4b3583290_disk.config">
Dec  2 06:29:55 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:29:55 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="volumes/volume-88f19573-e013-4d95-9327-f6a5bc06f0d0">
Dec  2 06:29:55 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:29:55 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <serial>88f19573-e013-4d95-9327-f6a5bc06f0d0</serial>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <encryption format="luks">
Dec  2 06:29:55 np0005542249 nova_compute[254900]:        <secret type="passphrase" uuid="6f19730a-9614-4855-9a6c-2f5a9e23d3ba"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      </encryption>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:9b:43:9e"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <target dev="tap0d534c62-22"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/2b4b4b76-ca4f-438b-a3e5-c5d4b3583290/console.log" append="off"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:29:55 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:29:55 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:29:55 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:29:55 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.270 254904 DEBUG nova.compute.manager [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Preparing to wait for external event network-vif-plugged-0d534c62-22f0-40f5-b1f1-48ae2645c541 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.270 254904 DEBUG oslo_concurrency.lockutils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.270 254904 DEBUG oslo_concurrency.lockutils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.271 254904 DEBUG oslo_concurrency.lockutils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.272 254904 DEBUG nova.virt.libvirt.vif [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:29:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-377789930',display_name='tempest-TransferEncryptedVolumeTest-server-377789930',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-377789930',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBPd/5CNJJJVCM7bF71nyziMenMlWa5ulXBeejobfPYAVvOOigTWuMR262ZOPGYLdqIJzc7AMWApUvqaDK/XzxMpH8d3L2DeOAIkexGDCsfnTgIhEJIcaLGeYmajYRiu/w==',key_name='tempest-TransferEncryptedVolumeTest-1560588090',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a893d0c223f746328e706d7491d73b20',ramdisk_id='',reservation_id='r-pprvpknh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1499588457',owner_user_name='tempest-TransferEncryptedVolumeTest-1499588457-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:29:50Z,user_data=None,user_id='1caa62e7ee8b42be98bc34780a7197f9',uuid=2b4b4b76-ca4f-438b-a3e5-c5d4b3583290,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d534c62-22f0-40f5-b1f1-48ae2645c541", "address": "fa:16:3e:9b:43:9e", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d534c62-22", "ovs_interfaceid": "0d534c62-22f0-40f5-b1f1-48ae2645c541", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.272 254904 DEBUG nova.network.os_vif_util [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Converting VIF {"id": "0d534c62-22f0-40f5-b1f1-48ae2645c541", "address": "fa:16:3e:9b:43:9e", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d534c62-22", "ovs_interfaceid": "0d534c62-22f0-40f5-b1f1-48ae2645c541", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.273 254904 DEBUG nova.network.os_vif_util [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:43:9e,bridge_name='br-int',has_traffic_filtering=True,id=0d534c62-22f0-40f5-b1f1-48ae2645c541,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d534c62-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.273 254904 DEBUG os_vif [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:43:9e,bridge_name='br-int',has_traffic_filtering=True,id=0d534c62-22f0-40f5-b1f1-48ae2645c541,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d534c62-22') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.274 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.275 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.275 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.278 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.278 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d534c62-22, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.279 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0d534c62-22, col_values=(('external_ids', {'iface-id': '0d534c62-22f0-40f5-b1f1-48ae2645c541', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:43:9e', 'vm-uuid': '2b4b4b76-ca4f-438b-a3e5-c5d4b3583290'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.281 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:55 np0005542249 NetworkManager[48987]: <info>  [1764674995.2839] manager: (tap0d534c62-22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/117)
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.284 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.291 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.291 254904 INFO os_vif [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:43:9e,bridge_name='br-int',has_traffic_filtering=True,id=0d534c62-22f0-40f5-b1f1-48ae2645c541,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d534c62-22')#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.500 254904 DEBUG nova.virt.libvirt.driver [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.500 254904 DEBUG nova.virt.libvirt.driver [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.501 254904 DEBUG nova.virt.libvirt.driver [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] No VIF found with MAC fa:16:3e:9b:43:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.501 254904 INFO nova.virt.libvirt.driver [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Using config drive#033[00m
Dec  2 06:29:55 np0005542249 nova_compute[254900]: 2025-12-02 11:29:55.532 254904 DEBUG nova.storage.rbd_utils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] rbd image 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:29:56 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1601: 321 pgs: 321 active+clean; 396 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 13 KiB/s wr, 3 op/s
Dec  2 06:29:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:29:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:29:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:29:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:29:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:29:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:29:57 np0005542249 nova_compute[254900]: 2025-12-02 11:29:57.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:29:57 np0005542249 nova_compute[254900]: 2025-12-02 11:29:57.383 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  2 06:29:57 np0005542249 nova_compute[254900]: 2025-12-02 11:29:57.502 254904 INFO nova.virt.libvirt.driver [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Creating config drive at /var/lib/nova/instances/2b4b4b76-ca4f-438b-a3e5-c5d4b3583290/disk.config#033[00m
Dec  2 06:29:57 np0005542249 nova_compute[254900]: 2025-12-02 11:29:57.512 254904 DEBUG oslo_concurrency.processutils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2b4b4b76-ca4f-438b-a3e5-c5d4b3583290/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgna_2tub execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:57 np0005542249 nova_compute[254900]: 2025-12-02 11:29:57.649 254904 DEBUG oslo_concurrency.processutils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2b4b4b76-ca4f-438b-a3e5-c5d4b3583290/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgna_2tub" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:57 np0005542249 nova_compute[254900]: 2025-12-02 11:29:57.676 254904 DEBUG nova.storage.rbd_utils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] rbd image 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:29:57 np0005542249 nova_compute[254900]: 2025-12-02 11:29:57.681 254904 DEBUG oslo_concurrency.processutils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2b4b4b76-ca4f-438b-a3e5-c5d4b3583290/disk.config 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:29:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:29:57 np0005542249 nova_compute[254900]: 2025-12-02 11:29:57.880 254904 DEBUG oslo_concurrency.processutils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2b4b4b76-ca4f-438b-a3e5-c5d4b3583290/disk.config 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:29:57 np0005542249 nova_compute[254900]: 2025-12-02 11:29:57.880 254904 INFO nova.virt.libvirt.driver [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Deleting local config drive /var/lib/nova/instances/2b4b4b76-ca4f-438b-a3e5-c5d4b3583290/disk.config because it was imported into RBD.#033[00m
Dec  2 06:29:57 np0005542249 kernel: tap0d534c62-22: entered promiscuous mode
Dec  2 06:29:57 np0005542249 NetworkManager[48987]: <info>  [1764674997.9440] manager: (tap0d534c62-22): new Tun device (/org/freedesktop/NetworkManager/Devices/118)
Dec  2 06:29:57 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:57Z|00211|binding|INFO|Claiming lport 0d534c62-22f0-40f5-b1f1-48ae2645c541 for this chassis.
Dec  2 06:29:57 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:57Z|00212|binding|INFO|0d534c62-22f0-40f5-b1f1-48ae2645c541: Claiming fa:16:3e:9b:43:9e 10.100.0.12
Dec  2 06:29:57 np0005542249 nova_compute[254900]: 2025-12-02 11:29:57.949 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:57 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:57Z|00213|binding|INFO|Setting lport 0d534c62-22f0-40f5-b1f1-48ae2645c541 up in Southbound
Dec  2 06:29:57 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:57Z|00214|binding|INFO|Setting lport 0d534c62-22f0-40f5-b1f1-48ae2645c541 ovn-installed in OVS
Dec  2 06:29:57 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:57.959 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:43:9e 10.100.0.12'], port_security=['fa:16:3e:9b:43:9e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2b4b4b76-ca4f-438b-a3e5-c5d4b3583290', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a893d0c223f746328e706d7491d73b20', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5f414683-a8ce-4d9a-ad73-51d7a84d1e7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9a246c4-d9fe-402e-8fa6-6099b55c4866, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=0d534c62-22f0-40f5-b1f1-48ae2645c541) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:29:57 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:57.962 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 0d534c62-22f0-40f5-b1f1-48ae2645c541 in datapath 4f9f73cb-9730-4829-ae15-1f03b97e60f8 bound to our chassis#033[00m
Dec  2 06:29:57 np0005542249 nova_compute[254900]: 2025-12-02 11:29:57.965 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:57 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:57.968 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4f9f73cb-9730-4829-ae15-1f03b97e60f8#033[00m
Dec  2 06:29:57 np0005542249 nova_compute[254900]: 2025-12-02 11:29:57.974 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:57 np0005542249 systemd-udevd[288637]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:29:57 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:57.991 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[e8c5355e-22fe-49f2-a240-419f9ad9bc8c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:57 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:57.993 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4f9f73cb-91 in ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 06:29:57 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:57.995 262398 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4f9f73cb-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:57.996 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[b5f5ba9e-14e1-4943-9319-896a5b0aefc0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:58 np0005542249 systemd-machined[216222]: New machine qemu-23-instance-00000017.
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:58.003 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[99cb7dcc-d005-4066-9ab8-a9006ba601f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:58 np0005542249 NetworkManager[48987]: <info>  [1764674998.0081] device (tap0d534c62-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:29:58 np0005542249 NetworkManager[48987]: <info>  [1764674998.0088] device (tap0d534c62-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:29:58 np0005542249 systemd[1]: Started Virtual Machine qemu-23-instance-00000017.
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:58.023 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[12c2a0a8-67ae-4e91-b658-1d64aa3bc031]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:58.052 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[c19b971d-c6a0-4732-bcd4-04f000b3d806]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:58.088 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[a1eb34a0-8115-4deb-b02a-73ed36a69e1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:58.094 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[73e8393a-91f7-4a32-867d-fb0fec08bee3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:58 np0005542249 NetworkManager[48987]: <info>  [1764674998.0956] manager: (tap4f9f73cb-90): new Veth device (/org/freedesktop/NetworkManager/Devices/119)
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:58.133 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[5e5bf469-726e-4013-bba6-9e855327c66d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:58.137 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[8422f298-c162-4a9c-af9f-918822936c31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:58 np0005542249 NetworkManager[48987]: <info>  [1764674998.1643] device (tap4f9f73cb-90): carrier: link connected
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:58.172 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[b1be9786-7609-4421-a89b-97e18ed8306c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:58 np0005542249 nova_compute[254900]: 2025-12-02 11:29:58.177 254904 DEBUG nova.compute.manager [req-050c6162-6e39-4e2a-8f3d-2db569e12a56 req-115721fc-cf29-475b-b0c8-0468276e81b2 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Received event network-vif-plugged-0d534c62-22f0-40f5-b1f1-48ae2645c541 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:29:58 np0005542249 nova_compute[254900]: 2025-12-02 11:29:58.177 254904 DEBUG oslo_concurrency.lockutils [req-050c6162-6e39-4e2a-8f3d-2db569e12a56 req-115721fc-cf29-475b-b0c8-0468276e81b2 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:29:58 np0005542249 nova_compute[254900]: 2025-12-02 11:29:58.178 254904 DEBUG oslo_concurrency.lockutils [req-050c6162-6e39-4e2a-8f3d-2db569e12a56 req-115721fc-cf29-475b-b0c8-0468276e81b2 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:29:58 np0005542249 nova_compute[254900]: 2025-12-02 11:29:58.178 254904 DEBUG oslo_concurrency.lockutils [req-050c6162-6e39-4e2a-8f3d-2db569e12a56 req-115721fc-cf29-475b-b0c8-0468276e81b2 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:29:58 np0005542249 nova_compute[254900]: 2025-12-02 11:29:58.178 254904 DEBUG nova.compute.manager [req-050c6162-6e39-4e2a-8f3d-2db569e12a56 req-115721fc-cf29-475b-b0c8-0468276e81b2 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Processing event network-vif-plugged-0d534c62-22f0-40f5-b1f1-48ae2645c541 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:58.195 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[d65d1851-1c54-463c-8afe-b2b04b7c30f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f9f73cb-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:ed:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519098, 'reachable_time': 23567, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288673, 'error': None, 'target': 'ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:58.219 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[6ce59d8d-4514-433e-ab73-e4c1e22811d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:edbf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519098, 'tstamp': 519098}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288674, 'error': None, 'target': 'ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:58.241 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[4acdc312-360d-4fd7-8478-4d96447f51d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f9f73cb-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:ed:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519098, 'reachable_time': 23567, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288675, 'error': None, 'target': 'ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:58.277 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[66ec81e2-28aa-40c4-8b0c-819027a90c37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:58 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1602: 321 pgs: 321 active+clean; 396 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 13 KiB/s wr, 59 op/s
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:58.353 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[9103c845-e6c2-4398-b293-60cc0192aa2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:58.356 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f9f73cb-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:58.358 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:58.359 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f9f73cb-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:58 np0005542249 kernel: tap4f9f73cb-90: entered promiscuous mode
Dec  2 06:29:58 np0005542249 NetworkManager[48987]: <info>  [1764674998.3631] manager: (tap4f9f73cb-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Dec  2 06:29:58 np0005542249 nova_compute[254900]: 2025-12-02 11:29:58.362 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:58.368 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4f9f73cb-90, col_values=(('external_ids', {'iface-id': '244504fe-2e21-493b-8e56-0db40be1f53e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:29:58 np0005542249 nova_compute[254900]: 2025-12-02 11:29:58.369 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:58 np0005542249 ovn_controller[153849]: 2025-12-02T11:29:58Z|00215|binding|INFO|Releasing lport 244504fe-2e21-493b-8e56-0db40be1f53e from this chassis (sb_readonly=0)
Dec  2 06:29:58 np0005542249 nova_compute[254900]: 2025-12-02 11:29:58.370 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:58.371 163757 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4f9f73cb-9730-4829-ae15-1f03b97e60f8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4f9f73cb-9730-4829-ae15-1f03b97e60f8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:58.373 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[7c9f5355-0279-4c4d-9b01-09e288ac858c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:58.374 163757 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: global
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]:    log         /dev/log local0 debug
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]:    log-tag     haproxy-metadata-proxy-4f9f73cb-9730-4829-ae15-1f03b97e60f8
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]:    user        root
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]:    group       root
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]:    maxconn     1024
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]:    pidfile     /var/lib/neutron/external/pids/4f9f73cb-9730-4829-ae15-1f03b97e60f8.pid.haproxy
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]:    daemon
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: defaults
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]:    log global
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]:    mode http
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]:    option httplog
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]:    option dontlognull
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]:    option http-server-close
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]:    option forwardfor
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]:    retries                 3
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]:    timeout http-request    30s
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]:    timeout connect         30s
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]:    timeout client          32s
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]:    timeout server          32s
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]:    timeout http-keep-alive 30s
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: listen listener
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]:    bind 169.254.169.254:80
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]:    http-request add-header X-OVN-Network-ID 4f9f73cb-9730-4829-ae15-1f03b97e60f8
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 06:29:58 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:29:58.375 163757 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'env', 'PROCESS_TAG=haproxy-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4f9f73cb-9730-4829-ae15-1f03b97e60f8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 06:29:58 np0005542249 nova_compute[254900]: 2025-12-02 11:29:58.383 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:29:58 np0005542249 podman[288744]: 2025-12-02 11:29:58.77418364 +0000 UTC m=+0.051953357 container create 9462c0e2651785855caf76a83c07df96547126df7119dbcd8c05ecf8637b4e13 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  2 06:29:58 np0005542249 systemd[1]: Started libpod-conmon-9462c0e2651785855caf76a83c07df96547126df7119dbcd8c05ecf8637b4e13.scope.
Dec  2 06:29:58 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:29:58 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dc35e506505137e74533495fd4885cfc9cc137626dd8055aee89b60d077d862/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 06:29:58 np0005542249 podman[288744]: 2025-12-02 11:29:58.747800267 +0000 UTC m=+0.025570004 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:29:58 np0005542249 podman[288744]: 2025-12-02 11:29:58.846442197 +0000 UTC m=+0.124211924 container init 9462c0e2651785855caf76a83c07df96547126df7119dbcd8c05ecf8637b4e13 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Dec  2 06:29:58 np0005542249 podman[288744]: 2025-12-02 11:29:58.852579563 +0000 UTC m=+0.130349280 container start 9462c0e2651785855caf76a83c07df96547126df7119dbcd8c05ecf8637b4e13 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 06:29:58 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[288760]: [NOTICE]   (288764) : New worker (288766) forked
Dec  2 06:29:58 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[288760]: [NOTICE]   (288764) : Loading success.
Dec  2 06:29:58 np0005542249 nova_compute[254900]: 2025-12-02 11:29:58.902 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.273 254904 DEBUG oslo_concurrency.lockutils [None req-1edfd9df-47c1-4535-a4ee-2fefae31d2b6 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Acquiring lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.273 254904 DEBUG oslo_concurrency.lockutils [None req-1edfd9df-47c1-4535-a4ee-2fefae31d2b6 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.282 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.290 254904 DEBUG nova.objects.instance [None req-1edfd9df-47c1-4535-a4ee-2fefae31d2b6 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lazy-loading 'flavor' on Instance uuid a8aab2b3-e5a2-451d-b77a-9d977f1dd00f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:30:00 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1603: 321 pgs: 321 active+clean; 396 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 75 op/s
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.322 254904 DEBUG oslo_concurrency.lockutils [None req-1edfd9df-47c1-4535-a4ee-2fefae31d2b6 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.498 254904 DEBUG oslo_concurrency.lockutils [None req-1edfd9df-47c1-4535-a4ee-2fefae31d2b6 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Acquiring lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.499 254904 DEBUG oslo_concurrency.lockutils [None req-1edfd9df-47c1-4535-a4ee-2fefae31d2b6 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.500 254904 INFO nova.compute.manager [None req-1edfd9df-47c1-4535-a4ee-2fefae31d2b6 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Attaching volume 69e7c3ed-249c-4868-ab1d-87633b87c462 to /dev/vdb#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.575 254904 DEBUG nova.compute.manager [req-4d0832ae-af88-4e87-8b42-63a416e14341 req-12a6c0be-5986-4027-b2b2-922843649e98 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Received event network-vif-plugged-0d534c62-22f0-40f5-b1f1-48ae2645c541 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.579 254904 DEBUG oslo_concurrency.lockutils [req-4d0832ae-af88-4e87-8b42-63a416e14341 req-12a6c0be-5986-4027-b2b2-922843649e98 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.579 254904 DEBUG oslo_concurrency.lockutils [req-4d0832ae-af88-4e87-8b42-63a416e14341 req-12a6c0be-5986-4027-b2b2-922843649e98 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.580 254904 DEBUG oslo_concurrency.lockutils [req-4d0832ae-af88-4e87-8b42-63a416e14341 req-12a6c0be-5986-4027-b2b2-922843649e98 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.581 254904 DEBUG nova.compute.manager [req-4d0832ae-af88-4e87-8b42-63a416e14341 req-12a6c0be-5986-4027-b2b2-922843649e98 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] No waiting events found dispatching network-vif-plugged-0d534c62-22f0-40f5-b1f1-48ae2645c541 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.581 254904 WARNING nova.compute.manager [req-4d0832ae-af88-4e87-8b42-63a416e14341 req-12a6c0be-5986-4027-b2b2-922843649e98 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Received unexpected event network-vif-plugged-0d534c62-22f0-40f5-b1f1-48ae2645c541 for instance with vm_state building and task_state spawning.#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.670 254904 DEBUG os_brick.utils [None req-1edfd9df-47c1-4535-a4ee-2fefae31d2b6 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.672 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.686 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.687 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[3143363d-a734-455f-b73a-09cdaabf30f3]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.689 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.703 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.704 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[44bc18c5-957c-4bab-8a0e-3f96f366f3b4]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.707 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.728 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.729 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[07698ae9-8dd7-4387-b8b7-ed4a05e9699e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.731 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[3545179b-183d-400a-9b9a-79c329b6abd6]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.732 254904 DEBUG oslo_concurrency.processutils [None req-1edfd9df-47c1-4535-a4ee-2fefae31d2b6 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.771 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764675000.747972, 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.773 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] VM Started (Lifecycle Event)#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.778 254904 DEBUG nova.compute.manager [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.780 254904 DEBUG oslo_concurrency.processutils [None req-1edfd9df-47c1-4535-a4ee-2fefae31d2b6 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] CMD "nvme version" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.798 254904 DEBUG nova.virt.libvirt.driver [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.799 254904 DEBUG os_brick.initiator.connectors.lightos [None req-1edfd9df-47c1-4535-a4ee-2fefae31d2b6 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.800 254904 DEBUG os_brick.initiator.connectors.lightos [None req-1edfd9df-47c1-4535-a4ee-2fefae31d2b6 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.801 254904 DEBUG os_brick.initiator.connectors.lightos [None req-1edfd9df-47c1-4535-a4ee-2fefae31d2b6 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.802 254904 DEBUG os_brick.utils [None req-1edfd9df-47c1-4535-a4ee-2fefae31d2b6 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] <== get_connector_properties: return (130ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.803 254904 DEBUG nova.virt.block_device [None req-1edfd9df-47c1-4535-a4ee-2fefae31d2b6 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Updating existing volume attachment record: caae4c25-124d-486f-a24e-856f866444e3 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.809 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.815 254904 INFO nova.virt.libvirt.driver [-] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Instance spawned successfully.#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.816 254904 DEBUG nova.virt.libvirt.driver [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.818 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.836 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.841 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764675000.7491848, 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.842 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] VM Paused (Lifecycle Event)#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.848 254904 DEBUG nova.virt.libvirt.driver [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.849 254904 DEBUG nova.virt.libvirt.driver [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.849 254904 DEBUG nova.virt.libvirt.driver [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.850 254904 DEBUG nova.virt.libvirt.driver [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.850 254904 DEBUG nova.virt.libvirt.driver [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.851 254904 DEBUG nova.virt.libvirt.driver [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.859 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.862 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764675000.7961137, 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.862 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] VM Resumed (Lifecycle Event)#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.878 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.882 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.901 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.917 254904 INFO nova.compute.manager [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Took 8.96 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.919 254904 DEBUG nova.compute.manager [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.975 254904 INFO nova.compute.manager [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Took 11.02 seconds to build instance.#033[00m
Dec  2 06:30:00 np0005542249 nova_compute[254900]: 2025-12-02 11:30:00.993 254904 DEBUG oslo_concurrency.lockutils [None req-81a356ef-7fa4-45eb-acde-27d40271d26e 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:30:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2576744010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:30:01 np0005542249 nova_compute[254900]: 2025-12-02 11:30:01.563 254904 DEBUG nova.objects.instance [None req-1edfd9df-47c1-4535-a4ee-2fefae31d2b6 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lazy-loading 'flavor' on Instance uuid a8aab2b3-e5a2-451d-b77a-9d977f1dd00f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:30:01 np0005542249 nova_compute[254900]: 2025-12-02 11:30:01.585 254904 DEBUG nova.virt.libvirt.driver [None req-1edfd9df-47c1-4535-a4ee-2fefae31d2b6 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Attempting to attach volume 69e7c3ed-249c-4868-ab1d-87633b87c462 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Dec  2 06:30:01 np0005542249 nova_compute[254900]: 2025-12-02 11:30:01.589 254904 DEBUG nova.virt.libvirt.guest [None req-1edfd9df-47c1-4535-a4ee-2fefae31d2b6 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] attach device xml: <disk type="network" device="disk">
Dec  2 06:30:01 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:30:01 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-69e7c3ed-249c-4868-ab1d-87633b87c462">
Dec  2 06:30:01 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:30:01 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:30:01 np0005542249 nova_compute[254900]:  <auth username="openstack">
Dec  2 06:30:01 np0005542249 nova_compute[254900]:    <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:30:01 np0005542249 nova_compute[254900]:  </auth>
Dec  2 06:30:01 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:30:01 np0005542249 nova_compute[254900]:  <serial>69e7c3ed-249c-4868-ab1d-87633b87c462</serial>
Dec  2 06:30:01 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:30:01 np0005542249 nova_compute[254900]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  2 06:30:01 np0005542249 nova_compute[254900]: 2025-12-02 11:30:01.719 254904 DEBUG nova.virt.libvirt.driver [None req-1edfd9df-47c1-4535-a4ee-2fefae31d2b6 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:30:01 np0005542249 nova_compute[254900]: 2025-12-02 11:30:01.722 254904 DEBUG nova.virt.libvirt.driver [None req-1edfd9df-47c1-4535-a4ee-2fefae31d2b6 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:30:01 np0005542249 nova_compute[254900]: 2025-12-02 11:30:01.722 254904 DEBUG nova.virt.libvirt.driver [None req-1edfd9df-47c1-4535-a4ee-2fefae31d2b6 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:30:01 np0005542249 nova_compute[254900]: 2025-12-02 11:30:01.723 254904 DEBUG nova.virt.libvirt.driver [None req-1edfd9df-47c1-4535-a4ee-2fefae31d2b6 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] No VIF found with MAC fa:16:3e:c4:74:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:30:01 np0005542249 nova_compute[254900]: 2025-12-02 11:30:01.926 254904 DEBUG oslo_concurrency.lockutils [None req-1edfd9df-47c1-4535-a4ee-2fefae31d2b6 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:02 np0005542249 nova_compute[254900]: 2025-12-02 11:30:02.030 254904 DEBUG nova.compute.manager [req-95ceb2d4-db0c-420c-9965-433efc2515bf req-78ae9ee4-7406-4f75-bd0e-ef941b45eb6d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Received event network-changed-88d722e2-fd4e-4803-b606-992ec0618074 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:30:02 np0005542249 nova_compute[254900]: 2025-12-02 11:30:02.032 254904 DEBUG nova.compute.manager [req-95ceb2d4-db0c-420c-9965-433efc2515bf req-78ae9ee4-7406-4f75-bd0e-ef941b45eb6d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Refreshing instance network info cache due to event network-changed-88d722e2-fd4e-4803-b606-992ec0618074. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:30:02 np0005542249 nova_compute[254900]: 2025-12-02 11:30:02.033 254904 DEBUG oslo_concurrency.lockutils [req-95ceb2d4-db0c-420c-9965-433efc2515bf req-78ae9ee4-7406-4f75-bd0e-ef941b45eb6d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-bdac108b-bf09-468a-9c93-c72b5128519b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:30:02 np0005542249 nova_compute[254900]: 2025-12-02 11:30:02.033 254904 DEBUG oslo_concurrency.lockutils [req-95ceb2d4-db0c-420c-9965-433efc2515bf req-78ae9ee4-7406-4f75-bd0e-ef941b45eb6d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-bdac108b-bf09-468a-9c93-c72b5128519b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:30:02 np0005542249 nova_compute[254900]: 2025-12-02 11:30:02.033 254904 DEBUG nova.network.neutron [req-95ceb2d4-db0c-420c-9965-433efc2515bf req-78ae9ee4-7406-4f75-bd0e-ef941b45eb6d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Refreshing network info cache for port 88d722e2-fd4e-4803-b606-992ec0618074 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:30:02 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1604: 321 pgs: 321 active+clean; 396 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 14 KiB/s wr, 94 op/s
Dec  2 06:30:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:30:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:30:03 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:30:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:30:03 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:30:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:30:03 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:30:03 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 7b9c17e2-b0ca-4c8d-8dc4-7ed3478f6bb1 does not exist
Dec  2 06:30:03 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev e349faba-15cb-43c1-9be0-78692727cb5b does not exist
Dec  2 06:30:03 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev ae2a94e8-b795-4b93-a2af-1bcb7156dafa does not exist
Dec  2 06:30:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:30:03 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:30:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:30:03 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:30:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:30:03 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:30:03 np0005542249 nova_compute[254900]: 2025-12-02 11:30:03.906 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:04 np0005542249 nova_compute[254900]: 2025-12-02 11:30:04.093 254904 DEBUG nova.network.neutron [req-95ceb2d4-db0c-420c-9965-433efc2515bf req-78ae9ee4-7406-4f75-bd0e-ef941b45eb6d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Updated VIF entry in instance network info cache for port 88d722e2-fd4e-4803-b606-992ec0618074. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:30:04 np0005542249 nova_compute[254900]: 2025-12-02 11:30:04.094 254904 DEBUG nova.network.neutron [req-95ceb2d4-db0c-420c-9965-433efc2515bf req-78ae9ee4-7406-4f75-bd0e-ef941b45eb6d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Updating instance_info_cache with network_info: [{"id": "88d722e2-fd4e-4803-b606-992ec0618074", "address": "fa:16:3e:cb:2c:a8", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d722e2-fd", "ovs_interfaceid": "88d722e2-fd4e-4803-b606-992ec0618074", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:30:04 np0005542249 nova_compute[254900]: 2025-12-02 11:30:04.118 254904 DEBUG oslo_concurrency.lockutils [req-95ceb2d4-db0c-420c-9965-433efc2515bf req-78ae9ee4-7406-4f75-bd0e-ef941b45eb6d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-bdac108b-bf09-468a-9c93-c72b5128519b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:30:04 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1605: 321 pgs: 321 active+clean; 397 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 36 KiB/s wr, 152 op/s
Dec  2 06:30:04 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:04.478 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:23:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:25:d0:a1:75:ed'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:30:04 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:04.479 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 06:30:04 np0005542249 nova_compute[254900]: 2025-12-02 11:30:04.478 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:04 np0005542249 podman[289079]: 2025-12-02 11:30:04.481799684 +0000 UTC m=+0.023132658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:30:04 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:30:04 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:30:04 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:30:04 np0005542249 podman[289079]: 2025-12-02 11:30:04.717189078 +0000 UTC m=+0.258522032 container create f1650439ab25186fb1b16137acb1211366f9f4279ce606c3fb68a26b6b359175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hugle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  2 06:30:04 np0005542249 systemd[1]: Started libpod-conmon-f1650439ab25186fb1b16137acb1211366f9f4279ce606c3fb68a26b6b359175.scope.
Dec  2 06:30:04 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:30:04 np0005542249 podman[289079]: 2025-12-02 11:30:04.833323152 +0000 UTC m=+0.374656156 container init f1650439ab25186fb1b16137acb1211366f9f4279ce606c3fb68a26b6b359175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hugle, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:30:04 np0005542249 podman[289079]: 2025-12-02 11:30:04.8443104 +0000 UTC m=+0.385643354 container start f1650439ab25186fb1b16137acb1211366f9f4279ce606c3fb68a26b6b359175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hugle, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:30:04 np0005542249 podman[289079]: 2025-12-02 11:30:04.847686661 +0000 UTC m=+0.389019625 container attach f1650439ab25186fb1b16137acb1211366f9f4279ce606c3fb68a26b6b359175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:30:04 np0005542249 suspicious_hugle[289093]: 167 167
Dec  2 06:30:04 np0005542249 systemd[1]: libpod-f1650439ab25186fb1b16137acb1211366f9f4279ce606c3fb68a26b6b359175.scope: Deactivated successfully.
Dec  2 06:30:04 np0005542249 conmon[289093]: conmon f1650439ab25186fb1b1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f1650439ab25186fb1b16137acb1211366f9f4279ce606c3fb68a26b6b359175.scope/container/memory.events
Dec  2 06:30:04 np0005542249 podman[289098]: 2025-12-02 11:30:04.935043917 +0000 UTC m=+0.048258959 container died f1650439ab25186fb1b16137acb1211366f9f4279ce606c3fb68a26b6b359175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hugle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:30:04 np0005542249 systemd[1]: var-lib-containers-storage-overlay-d3b58b71b281a3f38c022afc61d4b8eb1f4e6ec11eb62ae33bc3c03436e0c4dd-merged.mount: Deactivated successfully.
Dec  2 06:30:04 np0005542249 podman[289098]: 2025-12-02 11:30:04.991362001 +0000 UTC m=+0.104577023 container remove f1650439ab25186fb1b16137acb1211366f9f4279ce606c3fb68a26b6b359175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:30:04 np0005542249 systemd[1]: libpod-conmon-f1650439ab25186fb1b16137acb1211366f9f4279ce606c3fb68a26b6b359175.scope: Deactivated successfully.
Dec  2 06:30:05 np0005542249 nova_compute[254900]: 2025-12-02 11:30:05.191 254904 DEBUG nova.compute.manager [req-72b68a90-bf88-40cb-bca8-8b74fdef6ef2 req-523a376d-20ba-4f47-ab29-c4d0822c8b70 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Received event network-changed-0d534c62-22f0-40f5-b1f1-48ae2645c541 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:30:05 np0005542249 nova_compute[254900]: 2025-12-02 11:30:05.198 254904 DEBUG nova.compute.manager [req-72b68a90-bf88-40cb-bca8-8b74fdef6ef2 req-523a376d-20ba-4f47-ab29-c4d0822c8b70 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Refreshing instance network info cache due to event network-changed-0d534c62-22f0-40f5-b1f1-48ae2645c541. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:30:05 np0005542249 nova_compute[254900]: 2025-12-02 11:30:05.199 254904 DEBUG oslo_concurrency.lockutils [req-72b68a90-bf88-40cb-bca8-8b74fdef6ef2 req-523a376d-20ba-4f47-ab29-c4d0822c8b70 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-2b4b4b76-ca4f-438b-a3e5-c5d4b3583290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:30:05 np0005542249 nova_compute[254900]: 2025-12-02 11:30:05.199 254904 DEBUG oslo_concurrency.lockutils [req-72b68a90-bf88-40cb-bca8-8b74fdef6ef2 req-523a376d-20ba-4f47-ab29-c4d0822c8b70 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-2b4b4b76-ca4f-438b-a3e5-c5d4b3583290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:30:05 np0005542249 nova_compute[254900]: 2025-12-02 11:30:05.200 254904 DEBUG nova.network.neutron [req-72b68a90-bf88-40cb-bca8-8b74fdef6ef2 req-523a376d-20ba-4f47-ab29-c4d0822c8b70 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Refreshing network info cache for port 0d534c62-22f0-40f5-b1f1-48ae2645c541 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:30:05 np0005542249 podman[289121]: 2025-12-02 11:30:05.215272304 +0000 UTC m=+0.059934144 container create c6c6268990feb8958b9108d57e66fef6e3b65762eb75529451967a7fe0c2950c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  2 06:30:05 np0005542249 systemd[1]: Started libpod-conmon-c6c6268990feb8958b9108d57e66fef6e3b65762eb75529451967a7fe0c2950c.scope.
Dec  2 06:30:05 np0005542249 podman[289121]: 2025-12-02 11:30:05.186492124 +0000 UTC m=+0.031154014 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:30:05 np0005542249 nova_compute[254900]: 2025-12-02 11:30:05.284 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:05 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:30:05 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dde4bf659735106737a9bd3736068a01b114eeafa704c29a1d0fec324c1a57b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:30:05 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dde4bf659735106737a9bd3736068a01b114eeafa704c29a1d0fec324c1a57b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:30:05 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dde4bf659735106737a9bd3736068a01b114eeafa704c29a1d0fec324c1a57b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:30:05 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dde4bf659735106737a9bd3736068a01b114eeafa704c29a1d0fec324c1a57b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:30:05 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dde4bf659735106737a9bd3736068a01b114eeafa704c29a1d0fec324c1a57b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:30:05 np0005542249 podman[289121]: 2025-12-02 11:30:05.305924129 +0000 UTC m=+0.150586029 container init c6c6268990feb8958b9108d57e66fef6e3b65762eb75529451967a7fe0c2950c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bardeen, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  2 06:30:05 np0005542249 podman[289121]: 2025-12-02 11:30:05.313896014 +0000 UTC m=+0.158557854 container start c6c6268990feb8958b9108d57e66fef6e3b65762eb75529451967a7fe0c2950c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bardeen, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  2 06:30:05 np0005542249 podman[289121]: 2025-12-02 11:30:05.318218611 +0000 UTC m=+0.162880491 container attach c6c6268990feb8958b9108d57e66fef6e3b65762eb75529451967a7fe0c2950c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  2 06:30:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e396 do_prune osdmap full prune enabled
Dec  2 06:30:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e397 e397: 3 total, 3 up, 3 in
Dec  2 06:30:05 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e397: 3 total, 3 up, 3 in
Dec  2 06:30:06 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1607: 321 pgs: 321 active+clean; 397 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 27 KiB/s wr, 178 op/s
Dec  2 06:30:06 np0005542249 nifty_bardeen[289138]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:30:06 np0005542249 nifty_bardeen[289138]: --> relative data size: 1.0
Dec  2 06:30:06 np0005542249 nifty_bardeen[289138]: --> All data devices are unavailable
Dec  2 06:30:06 np0005542249 systemd[1]: libpod-c6c6268990feb8958b9108d57e66fef6e3b65762eb75529451967a7fe0c2950c.scope: Deactivated successfully.
Dec  2 06:30:06 np0005542249 systemd[1]: libpod-c6c6268990feb8958b9108d57e66fef6e3b65762eb75529451967a7fe0c2950c.scope: Consumed 1.102s CPU time.
Dec  2 06:30:06 np0005542249 conmon[289138]: conmon c6c6268990feb8958b91 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c6c6268990feb8958b9108d57e66fef6e3b65762eb75529451967a7fe0c2950c.scope/container/memory.events
Dec  2 06:30:06 np0005542249 podman[289121]: 2025-12-02 11:30:06.507882883 +0000 UTC m=+1.352544783 container died c6c6268990feb8958b9108d57e66fef6e3b65762eb75529451967a7fe0c2950c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:30:06 np0005542249 systemd[1]: var-lib-containers-storage-overlay-0dde4bf659735106737a9bd3736068a01b114eeafa704c29a1d0fec324c1a57b-merged.mount: Deactivated successfully.
Dec  2 06:30:06 np0005542249 podman[289121]: 2025-12-02 11:30:06.598200098 +0000 UTC m=+1.442861958 container remove c6c6268990feb8958b9108d57e66fef6e3b65762eb75529451967a7fe0c2950c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bardeen, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:30:06 np0005542249 systemd[1]: libpod-conmon-c6c6268990feb8958b9108d57e66fef6e3b65762eb75529451967a7fe0c2950c.scope: Deactivated successfully.
Dec  2 06:30:06 np0005542249 nova_compute[254900]: 2025-12-02 11:30:06.734 254904 DEBUG nova.network.neutron [req-72b68a90-bf88-40cb-bca8-8b74fdef6ef2 req-523a376d-20ba-4f47-ab29-c4d0822c8b70 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Updated VIF entry in instance network info cache for port 0d534c62-22f0-40f5-b1f1-48ae2645c541. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:30:06 np0005542249 nova_compute[254900]: 2025-12-02 11:30:06.737 254904 DEBUG nova.network.neutron [req-72b68a90-bf88-40cb-bca8-8b74fdef6ef2 req-523a376d-20ba-4f47-ab29-c4d0822c8b70 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Updating instance_info_cache with network_info: [{"id": "0d534c62-22f0-40f5-b1f1-48ae2645c541", "address": "fa:16:3e:9b:43:9e", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d534c62-22", "ovs_interfaceid": "0d534c62-22f0-40f5-b1f1-48ae2645c541", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:30:06 np0005542249 nova_compute[254900]: 2025-12-02 11:30:06.763 254904 DEBUG oslo_concurrency.lockutils [req-72b68a90-bf88-40cb-bca8-8b74fdef6ef2 req-523a376d-20ba-4f47-ab29-c4d0822c8b70 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-2b4b4b76-ca4f-438b-a3e5-c5d4b3583290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:30:07 np0005542249 podman[289324]: 2025-12-02 11:30:07.418722256 +0000 UTC m=+0.062574745 container create f42f296c9dc8cccc4f601e09d4cb87114a2ffece7cb27118396378b42a3e8e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mayer, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:30:07 np0005542249 systemd[1]: Started libpod-conmon-f42f296c9dc8cccc4f601e09d4cb87114a2ffece7cb27118396378b42a3e8e17.scope.
Dec  2 06:30:07 np0005542249 podman[289324]: 2025-12-02 11:30:07.391022556 +0000 UTC m=+0.034875085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:30:07 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:30:07 np0005542249 podman[289324]: 2025-12-02 11:30:07.510383118 +0000 UTC m=+0.154235607 container init f42f296c9dc8cccc4f601e09d4cb87114a2ffece7cb27118396378b42a3e8e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mayer, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:30:07 np0005542249 podman[289324]: 2025-12-02 11:30:07.521753065 +0000 UTC m=+0.165605544 container start f42f296c9dc8cccc4f601e09d4cb87114a2ffece7cb27118396378b42a3e8e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  2 06:30:07 np0005542249 silly_mayer[289341]: 167 167
Dec  2 06:30:07 np0005542249 systemd[1]: libpod-f42f296c9dc8cccc4f601e09d4cb87114a2ffece7cb27118396378b42a3e8e17.scope: Deactivated successfully.
Dec  2 06:30:07 np0005542249 podman[289324]: 2025-12-02 11:30:07.538607792 +0000 UTC m=+0.182460321 container attach f42f296c9dc8cccc4f601e09d4cb87114a2ffece7cb27118396378b42a3e8e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:30:07 np0005542249 podman[289324]: 2025-12-02 11:30:07.53927376 +0000 UTC m=+0.183126269 container died f42f296c9dc8cccc4f601e09d4cb87114a2ffece7cb27118396378b42a3e8e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:30:07 np0005542249 systemd[1]: var-lib-containers-storage-overlay-1540650a7b033529a683aef119cf818d2a121c55b0ff9c73eb4bc76d1a62ae7e-merged.mount: Deactivated successfully.
Dec  2 06:30:07 np0005542249 podman[289324]: 2025-12-02 11:30:07.59615331 +0000 UTC m=+0.240005819 container remove f42f296c9dc8cccc4f601e09d4cb87114a2ffece7cb27118396378b42a3e8e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mayer, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  2 06:30:07 np0005542249 systemd[1]: libpod-conmon-f42f296c9dc8cccc4f601e09d4cb87114a2ffece7cb27118396378b42a3e8e17.scope: Deactivated successfully.
Dec  2 06:30:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e397 do_prune osdmap full prune enabled
Dec  2 06:30:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e398 e398: 3 total, 3 up, 3 in
Dec  2 06:30:07 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e398: 3 total, 3 up, 3 in
Dec  2 06:30:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:30:07 np0005542249 podman[289367]: 2025-12-02 11:30:07.855948135 +0000 UTC m=+0.065408093 container create ec590e11b92d47348ebc2cca7a15db8d9351f548f277370c0129a0a8a9d13122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_tu, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:30:07 np0005542249 systemd[1]: Started libpod-conmon-ec590e11b92d47348ebc2cca7a15db8d9351f548f277370c0129a0a8a9d13122.scope.
Dec  2 06:30:07 np0005542249 podman[289367]: 2025-12-02 11:30:07.824735739 +0000 UTC m=+0.034195727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:30:07 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:30:07 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/056d0d644bcd9917688229c20a1abd0be9671e2f810029ac530692604d347b69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:30:07 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/056d0d644bcd9917688229c20a1abd0be9671e2f810029ac530692604d347b69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:30:07 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/056d0d644bcd9917688229c20a1abd0be9671e2f810029ac530692604d347b69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:30:07 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/056d0d644bcd9917688229c20a1abd0be9671e2f810029ac530692604d347b69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:30:07 np0005542249 podman[289367]: 2025-12-02 11:30:07.953024283 +0000 UTC m=+0.162484271 container init ec590e11b92d47348ebc2cca7a15db8d9351f548f277370c0129a0a8a9d13122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_tu, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:30:07 np0005542249 podman[289367]: 2025-12-02 11:30:07.962326215 +0000 UTC m=+0.171786193 container start ec590e11b92d47348ebc2cca7a15db8d9351f548f277370c0129a0a8a9d13122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_tu, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:30:07 np0005542249 podman[289367]: 2025-12-02 11:30:07.966707234 +0000 UTC m=+0.176167222 container attach ec590e11b92d47348ebc2cca7a15db8d9351f548f277370c0129a0a8a9d13122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_tu, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  2 06:30:08 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1609: 321 pgs: 321 active+clean; 403 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.2 MiB/s wr, 147 op/s
Dec  2 06:30:08 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:08Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cb:2c:a8 10.100.0.3
Dec  2 06:30:08 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:08Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cb:2c:a8 10.100.0.3
Dec  2 06:30:08 np0005542249 serene_tu[289384]: {
Dec  2 06:30:08 np0005542249 serene_tu[289384]:    "0": [
Dec  2 06:30:08 np0005542249 serene_tu[289384]:        {
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "devices": [
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "/dev/loop3"
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            ],
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "lv_name": "ceph_lv0",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "lv_size": "21470642176",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "name": "ceph_lv0",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "tags": {
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.cluster_name": "ceph",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.crush_device_class": "",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.encrypted": "0",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.osd_id": "0",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.type": "block",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.vdo": "0"
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            },
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "type": "block",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "vg_name": "ceph_vg0"
Dec  2 06:30:08 np0005542249 serene_tu[289384]:        }
Dec  2 06:30:08 np0005542249 serene_tu[289384]:    ],
Dec  2 06:30:08 np0005542249 serene_tu[289384]:    "1": [
Dec  2 06:30:08 np0005542249 serene_tu[289384]:        {
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "devices": [
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "/dev/loop4"
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            ],
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "lv_name": "ceph_lv1",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "lv_size": "21470642176",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "name": "ceph_lv1",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "tags": {
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.cluster_name": "ceph",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.crush_device_class": "",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.encrypted": "0",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.osd_id": "1",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.type": "block",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.vdo": "0"
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            },
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "type": "block",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "vg_name": "ceph_vg1"
Dec  2 06:30:08 np0005542249 serene_tu[289384]:        }
Dec  2 06:30:08 np0005542249 serene_tu[289384]:    ],
Dec  2 06:30:08 np0005542249 serene_tu[289384]:    "2": [
Dec  2 06:30:08 np0005542249 serene_tu[289384]:        {
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "devices": [
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "/dev/loop5"
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            ],
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "lv_name": "ceph_lv2",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "lv_size": "21470642176",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "name": "ceph_lv2",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "tags": {
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.cluster_name": "ceph",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.crush_device_class": "",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.encrypted": "0",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.osd_id": "2",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.type": "block",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:                "ceph.vdo": "0"
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            },
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "type": "block",
Dec  2 06:30:08 np0005542249 serene_tu[289384]:            "vg_name": "ceph_vg2"
Dec  2 06:30:08 np0005542249 serene_tu[289384]:        }
Dec  2 06:30:08 np0005542249 serene_tu[289384]:    ]
Dec  2 06:30:08 np0005542249 serene_tu[289384]: }
Dec  2 06:30:08 np0005542249 systemd[1]: libpod-ec590e11b92d47348ebc2cca7a15db8d9351f548f277370c0129a0a8a9d13122.scope: Deactivated successfully.
Dec  2 06:30:08 np0005542249 podman[289393]: 2025-12-02 11:30:08.901802553 +0000 UTC m=+0.031632158 container died ec590e11b92d47348ebc2cca7a15db8d9351f548f277370c0129a0a8a9d13122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_tu, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  2 06:30:08 np0005542249 nova_compute[254900]: 2025-12-02 11:30:08.908 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:08 np0005542249 systemd[1]: var-lib-containers-storage-overlay-056d0d644bcd9917688229c20a1abd0be9671e2f810029ac530692604d347b69-merged.mount: Deactivated successfully.
Dec  2 06:30:08 np0005542249 podman[289393]: 2025-12-02 11:30:08.955615099 +0000 UTC m=+0.085444684 container remove ec590e11b92d47348ebc2cca7a15db8d9351f548f277370c0129a0a8a9d13122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Dec  2 06:30:08 np0005542249 systemd[1]: libpod-conmon-ec590e11b92d47348ebc2cca7a15db8d9351f548f277370c0129a0a8a9d13122.scope: Deactivated successfully.
Dec  2 06:30:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e398 do_prune osdmap full prune enabled
Dec  2 06:30:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e399 e399: 3 total, 3 up, 3 in
Dec  2 06:30:09 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e399: 3 total, 3 up, 3 in
Dec  2 06:30:09 np0005542249 podman[289546]: 2025-12-02 11:30:09.797876775 +0000 UTC m=+0.072447153 container create 4a204e2dcb56957ce5b8ef58bf1f45d1f0188a4a605ea1724483d42d6c928bd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:30:09 np0005542249 systemd[1]: Started libpod-conmon-4a204e2dcb56957ce5b8ef58bf1f45d1f0188a4a605ea1724483d42d6c928bd0.scope.
Dec  2 06:30:09 np0005542249 podman[289546]: 2025-12-02 11:30:09.767834931 +0000 UTC m=+0.042405359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:30:09 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:30:09 np0005542249 podman[289546]: 2025-12-02 11:30:09.91032122 +0000 UTC m=+0.184891618 container init 4a204e2dcb56957ce5b8ef58bf1f45d1f0188a4a605ea1724483d42d6c928bd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  2 06:30:09 np0005542249 podman[289546]: 2025-12-02 11:30:09.928937213 +0000 UTC m=+0.203507601 container start 4a204e2dcb56957ce5b8ef58bf1f45d1f0188a4a605ea1724483d42d6c928bd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  2 06:30:09 np0005542249 podman[289546]: 2025-12-02 11:30:09.933522978 +0000 UTC m=+0.208093386 container attach 4a204e2dcb56957ce5b8ef58bf1f45d1f0188a4a605ea1724483d42d6c928bd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_stonebraker, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  2 06:30:09 np0005542249 sleepy_stonebraker[289563]: 167 167
Dec  2 06:30:09 np0005542249 systemd[1]: libpod-4a204e2dcb56957ce5b8ef58bf1f45d1f0188a4a605ea1724483d42d6c928bd0.scope: Deactivated successfully.
Dec  2 06:30:09 np0005542249 podman[289546]: 2025-12-02 11:30:09.935895202 +0000 UTC m=+0.210465590 container died 4a204e2dcb56957ce5b8ef58bf1f45d1f0188a4a605ea1724483d42d6c928bd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  2 06:30:09 np0005542249 systemd[1]: var-lib-containers-storage-overlay-236e2387dbdb0258dc2bc00398ab1c826790df05e9420d584b561dd0374f5521-merged.mount: Deactivated successfully.
Dec  2 06:30:09 np0005542249 podman[289546]: 2025-12-02 11:30:09.988581909 +0000 UTC m=+0.263152287 container remove 4a204e2dcb56957ce5b8ef58bf1f45d1f0188a4a605ea1724483d42d6c928bd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_stonebraker, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  2 06:30:10 np0005542249 systemd[1]: libpod-conmon-4a204e2dcb56957ce5b8ef58bf1f45d1f0188a4a605ea1724483d42d6c928bd0.scope: Deactivated successfully.
Dec  2 06:30:10 np0005542249 podman[289586]: 2025-12-02 11:30:10.253595434 +0000 UTC m=+0.089674629 container create 12d41b102b22258753c61d160576baba7ed4c2ddafd9c5dc977cae242a59854d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:30:10 np0005542249 nova_compute[254900]: 2025-12-02 11:30:10.287 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:10 np0005542249 podman[289586]: 2025-12-02 11:30:10.20688354 +0000 UTC m=+0.042962795 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:30:10 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1611: 321 pgs: 321 active+clean; 416 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 129 KiB/s rd, 2.7 MiB/s wr, 80 op/s
Dec  2 06:30:10 np0005542249 systemd[1]: Started libpod-conmon-12d41b102b22258753c61d160576baba7ed4c2ddafd9c5dc977cae242a59854d.scope.
Dec  2 06:30:10 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:30:10 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e327dc23d0615cb9e5426a3c1b9068e3b5b84781c0b3432fe923cd7626b0eaf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:30:10 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e327dc23d0615cb9e5426a3c1b9068e3b5b84781c0b3432fe923cd7626b0eaf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:30:10 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e327dc23d0615cb9e5426a3c1b9068e3b5b84781c0b3432fe923cd7626b0eaf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:30:10 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e327dc23d0615cb9e5426a3c1b9068e3b5b84781c0b3432fe923cd7626b0eaf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:30:10 np0005542249 podman[289586]: 2025-12-02 11:30:10.390726527 +0000 UTC m=+0.226805782 container init 12d41b102b22258753c61d160576baba7ed4c2ddafd9c5dc977cae242a59854d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chandrasekhar, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:30:10 np0005542249 podman[289586]: 2025-12-02 11:30:10.402121396 +0000 UTC m=+0.238200581 container start 12d41b102b22258753c61d160576baba7ed4c2ddafd9c5dc977cae242a59854d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chandrasekhar, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:30:10 np0005542249 podman[289586]: 2025-12-02 11:30:10.406640389 +0000 UTC m=+0.242719644 container attach 12d41b102b22258753c61d160576baba7ed4c2ddafd9c5dc977cae242a59854d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:30:11 np0005542249 nova_compute[254900]: 2025-12-02 11:30:11.255 254904 DEBUG oslo_concurrency.lockutils [None req-39e5fed0-2b8e-461f-97e1-6d129ae916e7 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Acquiring lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:11 np0005542249 nova_compute[254900]: 2025-12-02 11:30:11.256 254904 DEBUG oslo_concurrency.lockutils [None req-39e5fed0-2b8e-461f-97e1-6d129ae916e7 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:11 np0005542249 nova_compute[254900]: 2025-12-02 11:30:11.273 254904 INFO nova.compute.manager [None req-39e5fed0-2b8e-461f-97e1-6d129ae916e7 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Detaching volume 69e7c3ed-249c-4868-ab1d-87633b87c462#033[00m
Dec  2 06:30:11 np0005542249 nova_compute[254900]: 2025-12-02 11:30:11.424 254904 INFO nova.virt.block_device [None req-39e5fed0-2b8e-461f-97e1-6d129ae916e7 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Attempting to driver detach volume 69e7c3ed-249c-4868-ab1d-87633b87c462 from mountpoint /dev/vdb#033[00m
Dec  2 06:30:11 np0005542249 nova_compute[254900]: 2025-12-02 11:30:11.442 254904 DEBUG nova.virt.libvirt.driver [None req-39e5fed0-2b8e-461f-97e1-6d129ae916e7 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Attempting to detach device vdb from instance a8aab2b3-e5a2-451d-b77a-9d977f1dd00f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  2 06:30:11 np0005542249 nova_compute[254900]: 2025-12-02 11:30:11.443 254904 DEBUG nova.virt.libvirt.guest [None req-39e5fed0-2b8e-461f-97e1-6d129ae916e7 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] detach device xml: <disk type="network" device="disk">
Dec  2 06:30:11 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:30:11 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-69e7c3ed-249c-4868-ab1d-87633b87c462">
Dec  2 06:30:11 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:30:11 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:30:11 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:30:11 np0005542249 nova_compute[254900]:  <serial>69e7c3ed-249c-4868-ab1d-87633b87c462</serial>
Dec  2 06:30:11 np0005542249 nova_compute[254900]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  2 06:30:11 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:30:11 np0005542249 nova_compute[254900]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  2 06:30:11 np0005542249 nova_compute[254900]: 2025-12-02 11:30:11.456 254904 INFO nova.virt.libvirt.driver [None req-39e5fed0-2b8e-461f-97e1-6d129ae916e7 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Successfully detached device vdb from instance a8aab2b3-e5a2-451d-b77a-9d977f1dd00f from the persistent domain config.#033[00m
Dec  2 06:30:11 np0005542249 nova_compute[254900]: 2025-12-02 11:30:11.457 254904 DEBUG nova.virt.libvirt.driver [None req-39e5fed0-2b8e-461f-97e1-6d129ae916e7 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance a8aab2b3-e5a2-451d-b77a-9d977f1dd00f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec  2 06:30:11 np0005542249 nova_compute[254900]: 2025-12-02 11:30:11.458 254904 DEBUG nova.virt.libvirt.guest [None req-39e5fed0-2b8e-461f-97e1-6d129ae916e7 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] detach device xml: <disk type="network" device="disk">
Dec  2 06:30:11 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:30:11 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-69e7c3ed-249c-4868-ab1d-87633b87c462">
Dec  2 06:30:11 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:30:11 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:30:11 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:30:11 np0005542249 nova_compute[254900]:  <serial>69e7c3ed-249c-4868-ab1d-87633b87c462</serial>
Dec  2 06:30:11 np0005542249 nova_compute[254900]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  2 06:30:11 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:30:11 np0005542249 nova_compute[254900]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  2 06:30:11 np0005542249 nervous_chandrasekhar[289602]: {
Dec  2 06:30:11 np0005542249 nervous_chandrasekhar[289602]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:30:11 np0005542249 nervous_chandrasekhar[289602]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:30:11 np0005542249 nervous_chandrasekhar[289602]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:30:11 np0005542249 nervous_chandrasekhar[289602]:        "osd_id": 0,
Dec  2 06:30:11 np0005542249 nervous_chandrasekhar[289602]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:30:11 np0005542249 nervous_chandrasekhar[289602]:        "type": "bluestore"
Dec  2 06:30:11 np0005542249 nervous_chandrasekhar[289602]:    },
Dec  2 06:30:11 np0005542249 nervous_chandrasekhar[289602]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:30:11 np0005542249 nervous_chandrasekhar[289602]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:30:11 np0005542249 nervous_chandrasekhar[289602]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:30:11 np0005542249 nervous_chandrasekhar[289602]:        "osd_id": 2,
Dec  2 06:30:11 np0005542249 nervous_chandrasekhar[289602]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:30:11 np0005542249 nervous_chandrasekhar[289602]:        "type": "bluestore"
Dec  2 06:30:11 np0005542249 nervous_chandrasekhar[289602]:    },
Dec  2 06:30:11 np0005542249 nervous_chandrasekhar[289602]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:30:11 np0005542249 nervous_chandrasekhar[289602]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:30:11 np0005542249 nervous_chandrasekhar[289602]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:30:11 np0005542249 nervous_chandrasekhar[289602]:        "osd_id": 1,
Dec  2 06:30:11 np0005542249 nervous_chandrasekhar[289602]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:30:11 np0005542249 nervous_chandrasekhar[289602]:        "type": "bluestore"
Dec  2 06:30:11 np0005542249 nervous_chandrasekhar[289602]:    }
Dec  2 06:30:11 np0005542249 nervous_chandrasekhar[289602]: }
Dec  2 06:30:11 np0005542249 systemd[1]: libpod-12d41b102b22258753c61d160576baba7ed4c2ddafd9c5dc977cae242a59854d.scope: Deactivated successfully.
Dec  2 06:30:11 np0005542249 systemd[1]: libpod-12d41b102b22258753c61d160576baba7ed4c2ddafd9c5dc977cae242a59854d.scope: Consumed 1.144s CPU time.
Dec  2 06:30:11 np0005542249 nova_compute[254900]: 2025-12-02 11:30:11.610 254904 DEBUG nova.virt.libvirt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Received event <DeviceRemovedEvent: 1764675011.6103325, a8aab2b3-e5a2-451d-b77a-9d977f1dd00f => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec  2 06:30:11 np0005542249 nova_compute[254900]: 2025-12-02 11:30:11.614 254904 DEBUG nova.virt.libvirt.driver [None req-39e5fed0-2b8e-461f-97e1-6d129ae916e7 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance a8aab2b3-e5a2-451d-b77a-9d977f1dd00f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec  2 06:30:11 np0005542249 nova_compute[254900]: 2025-12-02 11:30:11.619 254904 INFO nova.virt.libvirt.driver [None req-39e5fed0-2b8e-461f-97e1-6d129ae916e7 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Successfully detached device vdb from instance a8aab2b3-e5a2-451d-b77a-9d977f1dd00f from the live domain config.
Dec  2 06:30:11 np0005542249 podman[289635]: 2025-12-02 11:30:11.648306478 +0000 UTC m=+0.072442432 container died 12d41b102b22258753c61d160576baba7ed4c2ddafd9c5dc977cae242a59854d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Dec  2 06:30:11 np0005542249 systemd[1]: var-lib-containers-storage-overlay-1e327dc23d0615cb9e5426a3c1b9068e3b5b84781c0b3432fe923cd7626b0eaf-merged.mount: Deactivated successfully.
Dec  2 06:30:11 np0005542249 podman[289635]: 2025-12-02 11:30:11.742309553 +0000 UTC m=+0.166445477 container remove 12d41b102b22258753c61d160576baba7ed4c2ddafd9c5dc977cae242a59854d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:30:11 np0005542249 systemd[1]: libpod-conmon-12d41b102b22258753c61d160576baba7ed4c2ddafd9c5dc977cae242a59854d.scope: Deactivated successfully.
Dec  2 06:30:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:30:11 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:30:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:30:11 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:30:11 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 624d7f1a-e4e3-42c8-8ce1-e5a7f5133368 does not exist
Dec  2 06:30:11 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 2a40e70c-bf07-4508-88c0-01f8e99c8db6 does not exist
Dec  2 06:30:11 np0005542249 nova_compute[254900]: 2025-12-02 11:30:11.854 254904 DEBUG nova.objects.instance [None req-39e5fed0-2b8e-461f-97e1-6d129ae916e7 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lazy-loading 'flavor' on Instance uuid a8aab2b3-e5a2-451d-b77a-9d977f1dd00f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  2 06:30:11 np0005542249 nova_compute[254900]: 2025-12-02 11:30:11.899 254904 DEBUG oslo_concurrency.lockutils [None req-39e5fed0-2b8e-461f-97e1-6d129ae916e7 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:30:12 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1612: 321 pgs: 321 active+clean; 429 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 443 KiB/s rd, 3.9 MiB/s wr, 166 op/s
Dec  2 06:30:12 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:12.481 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  2 06:30:12 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:30:12 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:30:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:30:13 np0005542249 nova_compute[254900]: 2025-12-02 11:30:13.911 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:30:14 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1613: 321 pgs: 321 active+clean; 432 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 403 KiB/s rd, 3.3 MiB/s wr, 155 op/s
Dec  2 06:30:14 np0005542249 nova_compute[254900]: 2025-12-02 11:30:14.582 254904 DEBUG oslo_concurrency.lockutils [None req-2804c428-b44a-4d42-ba01-8c3a632c5f38 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Acquiring lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:30:14 np0005542249 nova_compute[254900]: 2025-12-02 11:30:14.583 254904 DEBUG oslo_concurrency.lockutils [None req-2804c428-b44a-4d42-ba01-8c3a632c5f38 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:30:14 np0005542249 nova_compute[254900]: 2025-12-02 11:30:14.603 254904 DEBUG nova.objects.instance [None req-2804c428-b44a-4d42-ba01-8c3a632c5f38 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lazy-loading 'flavor' on Instance uuid a8aab2b3-e5a2-451d-b77a-9d977f1dd00f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  2 06:30:14 np0005542249 nova_compute[254900]: 2025-12-02 11:30:14.642 254904 DEBUG oslo_concurrency.lockutils [None req-2804c428-b44a-4d42-ba01-8c3a632c5f38 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:30:14 np0005542249 nova_compute[254900]: 2025-12-02 11:30:14.893 254904 DEBUG oslo_concurrency.lockutils [None req-2804c428-b44a-4d42-ba01-8c3a632c5f38 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Acquiring lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:30:14 np0005542249 nova_compute[254900]: 2025-12-02 11:30:14.894 254904 DEBUG oslo_concurrency.lockutils [None req-2804c428-b44a-4d42-ba01-8c3a632c5f38 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:30:14 np0005542249 nova_compute[254900]: 2025-12-02 11:30:14.894 254904 INFO nova.compute.manager [None req-2804c428-b44a-4d42-ba01-8c3a632c5f38 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Attaching volume 70bc7c46-7f34-42aa-a07f-5c0ad238526d to /dev/vdb
Dec  2 06:30:15 np0005542249 nova_compute[254900]: 2025-12-02 11:30:15.042 254904 DEBUG os_brick.utils [None req-2804c428-b44a-4d42-ba01-8c3a632c5f38 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec  2 06:30:15 np0005542249 nova_compute[254900]: 2025-12-02 11:30:15.042 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 06:30:15 np0005542249 nova_compute[254900]: 2025-12-02 11:30:15.055 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:30:15 np0005542249 nova_compute[254900]: 2025-12-02 11:30:15.055 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[ece7cd59-f6fe-44bb-b40c-76bce0f11971]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 06:30:15 np0005542249 nova_compute[254900]: 2025-12-02 11:30:15.057 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 06:30:15 np0005542249 nova_compute[254900]: 2025-12-02 11:30:15.068 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:30:15 np0005542249 nova_compute[254900]: 2025-12-02 11:30:15.069 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[6ec34cdc-72ab-4856-9216-bbcbeb451e88]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 06:30:15 np0005542249 nova_compute[254900]: 2025-12-02 11:30:15.070 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 06:30:15 np0005542249 nova_compute[254900]: 2025-12-02 11:30:15.082 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:30:15 np0005542249 nova_compute[254900]: 2025-12-02 11:30:15.083 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[bb2c141d-3300-46f9-8c2f-0d363c9e202c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 06:30:15 np0005542249 nova_compute[254900]: 2025-12-02 11:30:15.084 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[3f3db496-c3ea-487a-a4d8-bd926f2d9917]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 06:30:15 np0005542249 nova_compute[254900]: 2025-12-02 11:30:15.084 254904 DEBUG oslo_concurrency.processutils [None req-2804c428-b44a-4d42-ba01-8c3a632c5f38 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 06:30:15 np0005542249 nova_compute[254900]: 2025-12-02 11:30:15.118 254904 DEBUG oslo_concurrency.processutils [None req-2804c428-b44a-4d42-ba01-8c3a632c5f38 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:30:15 np0005542249 nova_compute[254900]: 2025-12-02 11:30:15.120 254904 DEBUG os_brick.initiator.connectors.lightos [None req-2804c428-b44a-4d42-ba01-8c3a632c5f38 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec  2 06:30:15 np0005542249 nova_compute[254900]: 2025-12-02 11:30:15.121 254904 DEBUG os_brick.initiator.connectors.lightos [None req-2804c428-b44a-4d42-ba01-8c3a632c5f38 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec  2 06:30:15 np0005542249 nova_compute[254900]: 2025-12-02 11:30:15.121 254904 DEBUG os_brick.initiator.connectors.lightos [None req-2804c428-b44a-4d42-ba01-8c3a632c5f38 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec  2 06:30:15 np0005542249 nova_compute[254900]: 2025-12-02 11:30:15.121 254904 DEBUG os_brick.utils [None req-2804c428-b44a-4d42-ba01-8c3a632c5f38 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] <== get_connector_properties: return (79ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec  2 06:30:15 np0005542249 nova_compute[254900]: 2025-12-02 11:30:15.122 254904 DEBUG nova.virt.block_device [None req-2804c428-b44a-4d42-ba01-8c3a632c5f38 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Updating existing volume attachment record: 2f14f64a-c9ba-4030-8606-97142dbb5419 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec  2 06:30:15 np0005542249 nova_compute[254900]: 2025-12-02 11:30:15.290 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:30:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:30:15 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3619934994' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:30:15 np0005542249 nova_compute[254900]: 2025-12-02 11:30:15.934 254904 DEBUG nova.objects.instance [None req-2804c428-b44a-4d42-ba01-8c3a632c5f38 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lazy-loading 'flavor' on Instance uuid a8aab2b3-e5a2-451d-b77a-9d977f1dd00f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  2 06:30:15 np0005542249 nova_compute[254900]: 2025-12-02 11:30:15.980 254904 DEBUG nova.virt.libvirt.driver [None req-2804c428-b44a-4d42-ba01-8c3a632c5f38 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Attempting to attach volume 70bc7c46-7f34-42aa-a07f-5c0ad238526d with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec  2 06:30:15 np0005542249 nova_compute[254900]: 2025-12-02 11:30:15.985 254904 DEBUG nova.virt.libvirt.guest [None req-2804c428-b44a-4d42-ba01-8c3a632c5f38 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] attach device xml: <disk type="network" device="disk">
Dec  2 06:30:15 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:30:15 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-70bc7c46-7f34-42aa-a07f-5c0ad238526d">
Dec  2 06:30:15 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:30:15 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:30:15 np0005542249 nova_compute[254900]:  <auth username="openstack">
Dec  2 06:30:15 np0005542249 nova_compute[254900]:    <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:30:15 np0005542249 nova_compute[254900]:  </auth>
Dec  2 06:30:15 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:30:15 np0005542249 nova_compute[254900]:  <serial>70bc7c46-7f34-42aa-a07f-5c0ad238526d</serial>
Dec  2 06:30:15 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:30:15 np0005542249 nova_compute[254900]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec  2 06:30:16 np0005542249 nova_compute[254900]: 2025-12-02 11:30:16.140 254904 DEBUG nova.virt.libvirt.driver [None req-2804c428-b44a-4d42-ba01-8c3a632c5f38 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  2 06:30:16 np0005542249 nova_compute[254900]: 2025-12-02 11:30:16.141 254904 DEBUG nova.virt.libvirt.driver [None req-2804c428-b44a-4d42-ba01-8c3a632c5f38 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  2 06:30:16 np0005542249 nova_compute[254900]: 2025-12-02 11:30:16.142 254904 DEBUG nova.virt.libvirt.driver [None req-2804c428-b44a-4d42-ba01-8c3a632c5f38 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  2 06:30:16 np0005542249 nova_compute[254900]: 2025-12-02 11:30:16.142 254904 DEBUG nova.virt.libvirt.driver [None req-2804c428-b44a-4d42-ba01-8c3a632c5f38 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] No VIF found with MAC fa:16:3e:c4:74:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec  2 06:30:16 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1614: 321 pgs: 321 active+clean; 432 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.0 MiB/s wr, 114 op/s
Dec  2 06:30:16 np0005542249 nova_compute[254900]: 2025-12-02 11:30:16.384 254904 DEBUG oslo_concurrency.lockutils [None req-2804c428-b44a-4d42-ba01-8c3a632c5f38 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.490s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:30:16 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:16Z|00044|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.12
Dec  2 06:30:16 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:16Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:9b:43:9e 10.100.0.12
Dec  2 06:30:17 np0005542249 podman[289729]: 2025-12-02 11:30:17.050332726 +0000 UTC m=+0.107629345 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  2 06:30:17 np0005542249 nova_compute[254900]: 2025-12-02 11:30:17.722 254904 DEBUG oslo_concurrency.lockutils [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "bdac108b-bf09-468a-9c93-c72b5128519b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:30:17 np0005542249 nova_compute[254900]: 2025-12-02 11:30:17.722 254904 DEBUG oslo_concurrency.lockutils [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "bdac108b-bf09-468a-9c93-c72b5128519b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:30:17 np0005542249 nova_compute[254900]: 2025-12-02 11:30:17.723 254904 DEBUG oslo_concurrency.lockutils [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "bdac108b-bf09-468a-9c93-c72b5128519b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:30:17 np0005542249 nova_compute[254900]: 2025-12-02 11:30:17.723 254904 DEBUG oslo_concurrency.lockutils [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "bdac108b-bf09-468a-9c93-c72b5128519b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:30:17 np0005542249 nova_compute[254900]: 2025-12-02 11:30:17.723 254904 DEBUG oslo_concurrency.lockutils [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "bdac108b-bf09-468a-9c93-c72b5128519b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:30:17 np0005542249 nova_compute[254900]: 2025-12-02 11:30:17.725 254904 INFO nova.compute.manager [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Terminating instance
Dec  2 06:30:17 np0005542249 nova_compute[254900]: 2025-12-02 11:30:17.726 254904 DEBUG nova.compute.manager [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec  2 06:30:17 np0005542249 kernel: tap88d722e2-fd (unregistering): left promiscuous mode
Dec  2 06:30:17 np0005542249 NetworkManager[48987]: <info>  [1764675017.7929] device (tap88d722e2-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:30:17 np0005542249 nova_compute[254900]: 2025-12-02 11:30:17.812 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:30:17 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:17Z|00216|binding|INFO|Releasing lport 88d722e2-fd4e-4803-b606-992ec0618074 from this chassis (sb_readonly=0)
Dec  2 06:30:17 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:17Z|00217|binding|INFO|Setting lport 88d722e2-fd4e-4803-b606-992ec0618074 down in Southbound
Dec  2 06:30:17 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:17Z|00218|binding|INFO|Removing iface tap88d722e2-fd ovn-installed in OVS
Dec  2 06:30:17 np0005542249 nova_compute[254900]: 2025-12-02 11:30:17.817 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:30:17 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:17.827 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:2c:a8 10.100.0.3'], port_security=['fa:16:3e:cb:2c:a8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'bdac108b-bf09-468a-9c93-c72b5128519b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '625a6939c31646a4a83ea851774cf28c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bf93629e-6336-4a9c-a41d-6ce19e6b6662', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.207'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cc823ef0-3e69-4062-a488-a82f483f10bb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=88d722e2-fd4e-4803-b606-992ec0618074) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  2 06:30:17 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:17.829 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 88d722e2-fd4e-4803-b606-992ec0618074 in datapath acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 unbound from our chassis
Dec  2 06:30:17 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:17.834 163757 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec  2 06:30:17 np0005542249 nova_compute[254900]: 2025-12-02 11:30:17.840 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:30:17 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:17.837 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[70b31dc7-77c7-4f87-9e03-9a105799ab9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 06:30:17 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:17.839 163757 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 namespace which is not needed anymore
Dec  2 06:30:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:30:17 np0005542249 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000016.scope: Deactivated successfully.
Dec  2 06:30:17 np0005542249 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000016.scope: Consumed 14.858s CPU time.
Dec  2 06:30:17 np0005542249 systemd-machined[216222]: Machine qemu-22-instance-00000016 terminated.
Dec  2 06:30:17 np0005542249 nova_compute[254900]: 2025-12-02 11:30:17.954 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:17 np0005542249 nova_compute[254900]: 2025-12-02 11:30:17.963 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:17 np0005542249 nova_compute[254900]: 2025-12-02 11:30:17.969 254904 INFO nova.virt.libvirt.driver [-] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Instance destroyed successfully.#033[00m
Dec  2 06:30:17 np0005542249 nova_compute[254900]: 2025-12-02 11:30:17.969 254904 DEBUG nova.objects.instance [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lazy-loading 'resources' on Instance uuid bdac108b-bf09-468a-9c93-c72b5128519b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:30:18 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[288467]: [NOTICE]   (288471) : haproxy version is 2.8.14-c23fe91
Dec  2 06:30:18 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[288467]: [NOTICE]   (288471) : path to executable is /usr/sbin/haproxy
Dec  2 06:30:18 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[288467]: [WARNING]  (288471) : Exiting Master process...
Dec  2 06:30:18 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[288467]: [ALERT]    (288471) : Current worker (288473) exited with code 143 (Terminated)
Dec  2 06:30:18 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[288467]: [WARNING]  (288471) : All workers exited. Exiting... (0)
Dec  2 06:30:18 np0005542249 systemd[1]: libpod-94996d27b6399a1f056d438a20a154ae45dbc34084e49f4a4166e4d8e67f4572.scope: Deactivated successfully.
Dec  2 06:30:18 np0005542249 podman[289777]: 2025-12-02 11:30:18.035168972 +0000 UTC m=+0.055849563 container died 94996d27b6399a1f056d438a20a154ae45dbc34084e49f4a4166e4d8e67f4572 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  2 06:30:18 np0005542249 nova_compute[254900]: 2025-12-02 11:30:18.049 254904 DEBUG nova.virt.libvirt.vif [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:29:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1542057907',display_name='tempest-TestVolumeBootPattern-server-1542057907',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1542057907',id=22,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDl9Chitcp+6ZZ9O/so1iQpbrg+ZOVWOrATMsWTbgaWcZg2lFiQK4KEUyaqp5+G/z2wPorJssN622GdMYPRLScxIeivbRrFeE5q310MfETTcDT4f8HB9OmcWcicW5ZF4QA==',key_name='tempest-TestVolumeBootPattern-765108891',keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:29:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='625a6939c31646a4a83ea851774cf28c',ramdisk_id='',reservation_id='r-chlfp0yu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1396850361',owner_user_name='tempest-TestVolumeBootPattern-1396850361-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:29:55Z,user_data=None,user_id='6ccb73a613554d938221b4bf46d7ae83',uuid=bdac108b-bf09-468a-9c93-c72b5128519b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88d722e2-fd4e-4803-b606-992ec0618074", "address": "fa:16:3e:cb:2c:a8", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d722e2-fd", "ovs_interfaceid": "88d722e2-fd4e-4803-b606-992ec0618074", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:30:18 np0005542249 nova_compute[254900]: 2025-12-02 11:30:18.049 254904 DEBUG nova.network.os_vif_util [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converting VIF {"id": "88d722e2-fd4e-4803-b606-992ec0618074", "address": "fa:16:3e:cb:2c:a8", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d722e2-fd", "ovs_interfaceid": "88d722e2-fd4e-4803-b606-992ec0618074", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:30:18 np0005542249 nova_compute[254900]: 2025-12-02 11:30:18.050 254904 DEBUG nova.network.os_vif_util [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cb:2c:a8,bridge_name='br-int',has_traffic_filtering=True,id=88d722e2-fd4e-4803-b606-992ec0618074,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d722e2-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:30:18 np0005542249 nova_compute[254900]: 2025-12-02 11:30:18.051 254904 DEBUG os_vif [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:2c:a8,bridge_name='br-int',has_traffic_filtering=True,id=88d722e2-fd4e-4803-b606-992ec0618074,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d722e2-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:30:18 np0005542249 nova_compute[254900]: 2025-12-02 11:30:18.053 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:18 np0005542249 nova_compute[254900]: 2025-12-02 11:30:18.053 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88d722e2-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:30:18 np0005542249 nova_compute[254900]: 2025-12-02 11:30:18.056 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:18 np0005542249 nova_compute[254900]: 2025-12-02 11:30:18.059 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:30:18 np0005542249 nova_compute[254900]: 2025-12-02 11:30:18.063 254904 INFO os_vif [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:2c:a8,bridge_name='br-int',has_traffic_filtering=True,id=88d722e2-fd4e-4803-b606-992ec0618074,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d722e2-fd')#033[00m
Dec  2 06:30:18 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-94996d27b6399a1f056d438a20a154ae45dbc34084e49f4a4166e4d8e67f4572-userdata-shm.mount: Deactivated successfully.
Dec  2 06:30:18 np0005542249 systemd[1]: var-lib-containers-storage-overlay-471a968f155a9bb1a9de02c5d89bdb5057a837f5d745c535a690b463bdf887a1-merged.mount: Deactivated successfully.
Dec  2 06:30:18 np0005542249 podman[289777]: 2025-12-02 11:30:18.087231992 +0000 UTC m=+0.107912583 container cleanup 94996d27b6399a1f056d438a20a154ae45dbc34084e49f4a4166e4d8e67f4572 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec  2 06:30:18 np0005542249 systemd[1]: libpod-conmon-94996d27b6399a1f056d438a20a154ae45dbc34084e49f4a4166e4d8e67f4572.scope: Deactivated successfully.
Dec  2 06:30:18 np0005542249 podman[289826]: 2025-12-02 11:30:18.177684601 +0000 UTC m=+0.055788732 container remove 94996d27b6399a1f056d438a20a154ae45dbc34084e49f4a4166e4d8e67f4572 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:30:18 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:18.183 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[f5913ab7-659d-44ba-af98-f31217ceb4e6]: (4, ('Tue Dec  2 11:30:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 (94996d27b6399a1f056d438a20a154ae45dbc34084e49f4a4166e4d8e67f4572)\n94996d27b6399a1f056d438a20a154ae45dbc34084e49f4a4166e4d8e67f4572\nTue Dec  2 11:30:18 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 (94996d27b6399a1f056d438a20a154ae45dbc34084e49f4a4166e4d8e67f4572)\n94996d27b6399a1f056d438a20a154ae45dbc34084e49f4a4166e4d8e67f4572\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:18 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:18.186 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[f42d3163-6c93-4793-b35c-62ef2b732d3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:18 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:18.187 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacfaa8ac-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:30:18 np0005542249 kernel: tapacfaa8ac-00: left promiscuous mode
Dec  2 06:30:18 np0005542249 nova_compute[254900]: 2025-12-02 11:30:18.190 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:18 np0005542249 nova_compute[254900]: 2025-12-02 11:30:18.204 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:18 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:18.210 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[f53dd78e-b8cf-4485-b907-513ed8771c39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:18 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:18.227 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[ebbf8811-3475-4e2d-b951-9543ada7173e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:18 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:18.230 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[342a39dc-86f5-48a2-8910-07d339a6e17b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:18 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:18.250 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[30bc4ec5-894b-4e7e-be84-842d1b5aea6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518567, 'reachable_time': 18704, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289846, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:18 np0005542249 systemd[1]: run-netns-ovnmeta\x2dacfaa8ac\x2d0b3c\x2d4cdd\x2da6b8\x2da70a713ae754.mount: Deactivated successfully.
Dec  2 06:30:18 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:18.255 164036 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 06:30:18 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:18.256 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[035d464c-8df1-4f4f-86b1-dc5934b14dc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:18 np0005542249 nova_compute[254900]: 2025-12-02 11:30:18.268 254904 INFO nova.virt.libvirt.driver [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Deleting instance files /var/lib/nova/instances/bdac108b-bf09-468a-9c93-c72b5128519b_del#033[00m
Dec  2 06:30:18 np0005542249 nova_compute[254900]: 2025-12-02 11:30:18.269 254904 INFO nova.virt.libvirt.driver [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Deletion of /var/lib/nova/instances/bdac108b-bf09-468a-9c93-c72b5128519b_del complete#033[00m
Dec  2 06:30:18 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1615: 321 pgs: 321 active+clean; 432 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 753 KiB/s rd, 1.7 MiB/s wr, 160 op/s
Dec  2 06:30:18 np0005542249 nova_compute[254900]: 2025-12-02 11:30:18.315 254904 INFO nova.compute.manager [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Took 0.59 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:30:18 np0005542249 nova_compute[254900]: 2025-12-02 11:30:18.316 254904 DEBUG oslo.service.loopingcall [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:30:18 np0005542249 nova_compute[254900]: 2025-12-02 11:30:18.316 254904 DEBUG nova.compute.manager [-] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:30:18 np0005542249 nova_compute[254900]: 2025-12-02 11:30:18.316 254904 DEBUG nova.network.neutron [-] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:30:18 np0005542249 nova_compute[254900]: 2025-12-02 11:30:18.914 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:18 np0005542249 nova_compute[254900]: 2025-12-02 11:30:18.951 254904 DEBUG nova.compute.manager [req-a11978d0-15cc-49ab-bdfc-c605c711b05e req-d9407d49-0d21-4ebd-8ca1-86151f671b16 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Received event network-vif-unplugged-88d722e2-fd4e-4803-b606-992ec0618074 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:30:18 np0005542249 nova_compute[254900]: 2025-12-02 11:30:18.951 254904 DEBUG oslo_concurrency.lockutils [req-a11978d0-15cc-49ab-bdfc-c605c711b05e req-d9407d49-0d21-4ebd-8ca1-86151f671b16 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "bdac108b-bf09-468a-9c93-c72b5128519b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:18 np0005542249 nova_compute[254900]: 2025-12-02 11:30:18.951 254904 DEBUG oslo_concurrency.lockutils [req-a11978d0-15cc-49ab-bdfc-c605c711b05e req-d9407d49-0d21-4ebd-8ca1-86151f671b16 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "bdac108b-bf09-468a-9c93-c72b5128519b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:18 np0005542249 nova_compute[254900]: 2025-12-02 11:30:18.952 254904 DEBUG oslo_concurrency.lockutils [req-a11978d0-15cc-49ab-bdfc-c605c711b05e req-d9407d49-0d21-4ebd-8ca1-86151f671b16 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "bdac108b-bf09-468a-9c93-c72b5128519b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:18 np0005542249 nova_compute[254900]: 2025-12-02 11:30:18.952 254904 DEBUG nova.compute.manager [req-a11978d0-15cc-49ab-bdfc-c605c711b05e req-d9407d49-0d21-4ebd-8ca1-86151f671b16 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] No waiting events found dispatching network-vif-unplugged-88d722e2-fd4e-4803-b606-992ec0618074 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:30:18 np0005542249 nova_compute[254900]: 2025-12-02 11:30:18.952 254904 DEBUG nova.compute.manager [req-a11978d0-15cc-49ab-bdfc-c605c711b05e req-d9407d49-0d21-4ebd-8ca1-86151f671b16 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Received event network-vif-unplugged-88d722e2-fd4e-4803-b606-992ec0618074 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 06:30:19 np0005542249 nova_compute[254900]: 2025-12-02 11:30:19.188 254904 DEBUG oslo_concurrency.lockutils [None req-7c12cf51-9d79-440e-b37e-89e3ab610070 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Acquiring lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:19 np0005542249 nova_compute[254900]: 2025-12-02 11:30:19.188 254904 DEBUG oslo_concurrency.lockutils [None req-7c12cf51-9d79-440e-b37e-89e3ab610070 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:19 np0005542249 nova_compute[254900]: 2025-12-02 11:30:19.211 254904 INFO nova.compute.manager [None req-7c12cf51-9d79-440e-b37e-89e3ab610070 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Detaching volume 70bc7c46-7f34-42aa-a07f-5c0ad238526d#033[00m
Dec  2 06:30:19 np0005542249 nova_compute[254900]: 2025-12-02 11:30:19.214 254904 DEBUG nova.network.neutron [-] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:30:19 np0005542249 nova_compute[254900]: 2025-12-02 11:30:19.247 254904 INFO nova.compute.manager [-] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Took 0.93 seconds to deallocate network for instance.#033[00m
Dec  2 06:30:19 np0005542249 nova_compute[254900]: 2025-12-02 11:30:19.341 254904 DEBUG nova.compute.manager [req-58ce88c5-77b9-4d9f-8c45-88f1a3771c66 req-f8be2cb5-048c-4270-88ef-8ad30676b41a 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Received event network-vif-deleted-88d722e2-fd4e-4803-b606-992ec0618074 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:30:19 np0005542249 nova_compute[254900]: 2025-12-02 11:30:19.396 254904 INFO nova.virt.block_device [None req-7c12cf51-9d79-440e-b37e-89e3ab610070 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Attempting to driver detach volume 70bc7c46-7f34-42aa-a07f-5c0ad238526d from mountpoint /dev/vdb#033[00m
Dec  2 06:30:19 np0005542249 nova_compute[254900]: 2025-12-02 11:30:19.411 254904 DEBUG nova.virt.libvirt.driver [None req-7c12cf51-9d79-440e-b37e-89e3ab610070 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Attempting to detach device vdb from instance a8aab2b3-e5a2-451d-b77a-9d977f1dd00f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  2 06:30:19 np0005542249 nova_compute[254900]: 2025-12-02 11:30:19.412 254904 DEBUG nova.virt.libvirt.guest [None req-7c12cf51-9d79-440e-b37e-89e3ab610070 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] detach device xml: <disk type="network" device="disk">
Dec  2 06:30:19 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:30:19 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-70bc7c46-7f34-42aa-a07f-5c0ad238526d">
Dec  2 06:30:19 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:30:19 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:30:19 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:30:19 np0005542249 nova_compute[254900]:  <serial>70bc7c46-7f34-42aa-a07f-5c0ad238526d</serial>
Dec  2 06:30:19 np0005542249 nova_compute[254900]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  2 06:30:19 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:30:19 np0005542249 nova_compute[254900]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  2 06:30:19 np0005542249 nova_compute[254900]: 2025-12-02 11:30:19.423 254904 INFO nova.virt.libvirt.driver [None req-7c12cf51-9d79-440e-b37e-89e3ab610070 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Successfully detached device vdb from instance a8aab2b3-e5a2-451d-b77a-9d977f1dd00f from the persistent domain config.#033[00m
Dec  2 06:30:19 np0005542249 nova_compute[254900]: 2025-12-02 11:30:19.424 254904 DEBUG nova.virt.libvirt.driver [None req-7c12cf51-9d79-440e-b37e-89e3ab610070 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance a8aab2b3-e5a2-451d-b77a-9d977f1dd00f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec  2 06:30:19 np0005542249 nova_compute[254900]: 2025-12-02 11:30:19.425 254904 DEBUG nova.virt.libvirt.guest [None req-7c12cf51-9d79-440e-b37e-89e3ab610070 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] detach device xml: <disk type="network" device="disk">
Dec  2 06:30:19 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:30:19 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-70bc7c46-7f34-42aa-a07f-5c0ad238526d">
Dec  2 06:30:19 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:30:19 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:30:19 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:30:19 np0005542249 nova_compute[254900]:  <serial>70bc7c46-7f34-42aa-a07f-5c0ad238526d</serial>
Dec  2 06:30:19 np0005542249 nova_compute[254900]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  2 06:30:19 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:30:19 np0005542249 nova_compute[254900]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  2 06:30:19 np0005542249 nova_compute[254900]: 2025-12-02 11:30:19.449 254904 INFO nova.compute.manager [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Took 0.20 seconds to detach 1 volumes for instance.#033[00m
Dec  2 06:30:19 np0005542249 nova_compute[254900]: 2025-12-02 11:30:19.510 254904 DEBUG oslo_concurrency.lockutils [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:19 np0005542249 nova_compute[254900]: 2025-12-02 11:30:19.511 254904 DEBUG oslo_concurrency.lockutils [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:19 np0005542249 nova_compute[254900]: 2025-12-02 11:30:19.570 254904 DEBUG nova.virt.libvirt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Received event <DeviceRemovedEvent: 1764675019.5695179, a8aab2b3-e5a2-451d-b77a-9d977f1dd00f => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec  2 06:30:19 np0005542249 nova_compute[254900]: 2025-12-02 11:30:19.578 254904 DEBUG nova.virt.libvirt.driver [None req-7c12cf51-9d79-440e-b37e-89e3ab610070 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance a8aab2b3-e5a2-451d-b77a-9d977f1dd00f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec  2 06:30:19 np0005542249 nova_compute[254900]: 2025-12-02 11:30:19.582 254904 INFO nova.virt.libvirt.driver [None req-7c12cf51-9d79-440e-b37e-89e3ab610070 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Successfully detached device vdb from instance a8aab2b3-e5a2-451d-b77a-9d977f1dd00f from the live domain config.#033[00m
Dec  2 06:30:19 np0005542249 nova_compute[254900]: 2025-12-02 11:30:19.622 254904 DEBUG oslo_concurrency.processutils [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:30:19 np0005542249 nova_compute[254900]: 2025-12-02 11:30:19.750 254904 DEBUG nova.objects.instance [None req-7c12cf51-9d79-440e-b37e-89e3ab610070 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lazy-loading 'flavor' on Instance uuid a8aab2b3-e5a2-451d-b77a-9d977f1dd00f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:30:19 np0005542249 nova_compute[254900]: 2025-12-02 11:30:19.796 254904 DEBUG oslo_concurrency.lockutils [None req-7c12cf51-9d79-440e-b37e-89e3ab610070 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:19.845 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:19.846 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:19.848 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:30:20 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/137397053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:30:20 np0005542249 nova_compute[254900]: 2025-12-02 11:30:20.105 254904 DEBUG oslo_concurrency.processutils [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:30:20 np0005542249 nova_compute[254900]: 2025-12-02 11:30:20.113 254904 DEBUG nova.compute.provider_tree [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:30:20 np0005542249 nova_compute[254900]: 2025-12-02 11:30:20.131 254904 DEBUG nova.scheduler.client.report [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:30:20 np0005542249 nova_compute[254900]: 2025-12-02 11:30:20.155 254904 DEBUG oslo_concurrency.lockutils [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:20 np0005542249 nova_compute[254900]: 2025-12-02 11:30:20.182 254904 INFO nova.scheduler.client.report [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Deleted allocations for instance bdac108b-bf09-468a-9c93-c72b5128519b#033[00m
Dec  2 06:30:20 np0005542249 nova_compute[254900]: 2025-12-02 11:30:20.253 254904 DEBUG oslo_concurrency.lockutils [None req-c3be0319-b2d8-4db4-8e41-968107aa55ad 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "bdac108b-bf09-468a-9c93-c72b5128519b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:20 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1616: 321 pgs: 321 active+clean; 432 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 943 KiB/s rd, 1.0 MiB/s wr, 148 op/s
Dec  2 06:30:21 np0005542249 nova_compute[254900]: 2025-12-02 11:30:21.028 254904 DEBUG nova.compute.manager [req-487a5714-ffb5-4514-af72-a24bdaf27a77 req-1c1f9eb2-874d-4e01-8455-69060eb8b1e1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Received event network-vif-plugged-88d722e2-fd4e-4803-b606-992ec0618074 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:30:21 np0005542249 nova_compute[254900]: 2025-12-02 11:30:21.028 254904 DEBUG oslo_concurrency.lockutils [req-487a5714-ffb5-4514-af72-a24bdaf27a77 req-1c1f9eb2-874d-4e01-8455-69060eb8b1e1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "bdac108b-bf09-468a-9c93-c72b5128519b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:21 np0005542249 nova_compute[254900]: 2025-12-02 11:30:21.029 254904 DEBUG oslo_concurrency.lockutils [req-487a5714-ffb5-4514-af72-a24bdaf27a77 req-1c1f9eb2-874d-4e01-8455-69060eb8b1e1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "bdac108b-bf09-468a-9c93-c72b5128519b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:21 np0005542249 nova_compute[254900]: 2025-12-02 11:30:21.029 254904 DEBUG oslo_concurrency.lockutils [req-487a5714-ffb5-4514-af72-a24bdaf27a77 req-1c1f9eb2-874d-4e01-8455-69060eb8b1e1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "bdac108b-bf09-468a-9c93-c72b5128519b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:21 np0005542249 nova_compute[254900]: 2025-12-02 11:30:21.029 254904 DEBUG nova.compute.manager [req-487a5714-ffb5-4514-af72-a24bdaf27a77 req-1c1f9eb2-874d-4e01-8455-69060eb8b1e1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] No waiting events found dispatching network-vif-plugged-88d722e2-fd4e-4803-b606-992ec0618074 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:30:21 np0005542249 nova_compute[254900]: 2025-12-02 11:30:21.030 254904 WARNING nova.compute.manager [req-487a5714-ffb5-4514-af72-a24bdaf27a77 req-1c1f9eb2-874d-4e01-8455-69060eb8b1e1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Received unexpected event network-vif-plugged-88d722e2-fd4e-4803-b606-992ec0618074 for instance with vm_state deleted and task_state None.#033[00m
Dec  2 06:30:21 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:21Z|00046|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.12
Dec  2 06:30:21 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:21Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:9b:43:9e 10.100.0.12
Dec  2 06:30:21 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:21Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9b:43:9e 10.100.0.12
Dec  2 06:30:21 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:21Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9b:43:9e 10.100.0.12
Dec  2 06:30:22 np0005542249 nova_compute[254900]: 2025-12-02 11:30:22.273 254904 DEBUG oslo_concurrency.lockutils [None req-370cf25d-b688-4552-ba71-81a567af186c 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Acquiring lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:22 np0005542249 nova_compute[254900]: 2025-12-02 11:30:22.274 254904 DEBUG oslo_concurrency.lockutils [None req-370cf25d-b688-4552-ba71-81a567af186c 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:22 np0005542249 nova_compute[254900]: 2025-12-02 11:30:22.290 254904 DEBUG nova.objects.instance [None req-370cf25d-b688-4552-ba71-81a567af186c 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lazy-loading 'flavor' on Instance uuid a8aab2b3-e5a2-451d-b77a-9d977f1dd00f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:30:22 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1617: 321 pgs: 321 active+clean; 432 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 840 KiB/s rd, 990 KiB/s wr, 149 op/s
Dec  2 06:30:22 np0005542249 nova_compute[254900]: 2025-12-02 11:30:22.317 254904 DEBUG oslo_concurrency.lockutils [None req-370cf25d-b688-4552-ba71-81a567af186c 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:22 np0005542249 nova_compute[254900]: 2025-12-02 11:30:22.546 254904 DEBUG oslo_concurrency.lockutils [None req-370cf25d-b688-4552-ba71-81a567af186c 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Acquiring lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:22 np0005542249 nova_compute[254900]: 2025-12-02 11:30:22.547 254904 DEBUG oslo_concurrency.lockutils [None req-370cf25d-b688-4552-ba71-81a567af186c 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:22 np0005542249 nova_compute[254900]: 2025-12-02 11:30:22.548 254904 INFO nova.compute.manager [None req-370cf25d-b688-4552-ba71-81a567af186c 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Attaching volume 4047b8dc-6681-4efe-a36e-920154488d60 to /dev/vdb#033[00m
Dec  2 06:30:22 np0005542249 nova_compute[254900]: 2025-12-02 11:30:22.653 254904 DEBUG os_brick.utils [None req-370cf25d-b688-4552-ba71-81a567af186c 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:30:22 np0005542249 nova_compute[254900]: 2025-12-02 11:30:22.655 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:30:22 np0005542249 nova_compute[254900]: 2025-12-02 11:30:22.668 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:30:22 np0005542249 nova_compute[254900]: 2025-12-02 11:30:22.668 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[1d3f7329-8e9b-4a8e-8425-bb566765c64d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:22 np0005542249 nova_compute[254900]: 2025-12-02 11:30:22.670 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:30:22 np0005542249 nova_compute[254900]: 2025-12-02 11:30:22.682 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:30:22 np0005542249 nova_compute[254900]: 2025-12-02 11:30:22.682 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[dda4eed3-2540-494c-b207-1ef42af6d984]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:22 np0005542249 nova_compute[254900]: 2025-12-02 11:30:22.686 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:30:22 np0005542249 nova_compute[254900]: 2025-12-02 11:30:22.701 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:30:22 np0005542249 nova_compute[254900]: 2025-12-02 11:30:22.702 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[7d40b99a-9ab1-410a-ab8d-f0a55e623961]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:22 np0005542249 nova_compute[254900]: 2025-12-02 11:30:22.704 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[1640966d-b1a6-4007-9069-81406b0d6f93]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:22 np0005542249 nova_compute[254900]: 2025-12-02 11:30:22.705 254904 DEBUG oslo_concurrency.processutils [None req-370cf25d-b688-4552-ba71-81a567af186c 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:30:22 np0005542249 nova_compute[254900]: 2025-12-02 11:30:22.742 254904 DEBUG oslo_concurrency.processutils [None req-370cf25d-b688-4552-ba71-81a567af186c 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] CMD "nvme version" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:30:22 np0005542249 nova_compute[254900]: 2025-12-02 11:30:22.747 254904 DEBUG os_brick.initiator.connectors.lightos [None req-370cf25d-b688-4552-ba71-81a567af186c 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:30:22 np0005542249 nova_compute[254900]: 2025-12-02 11:30:22.747 254904 DEBUG os_brick.initiator.connectors.lightos [None req-370cf25d-b688-4552-ba71-81a567af186c 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:30:22 np0005542249 nova_compute[254900]: 2025-12-02 11:30:22.747 254904 DEBUG os_brick.initiator.connectors.lightos [None req-370cf25d-b688-4552-ba71-81a567af186c 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:30:22 np0005542249 nova_compute[254900]: 2025-12-02 11:30:22.748 254904 DEBUG os_brick.utils [None req-370cf25d-b688-4552-ba71-81a567af186c 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] <== get_connector_properties: return (93ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:30:22 np0005542249 nova_compute[254900]: 2025-12-02 11:30:22.748 254904 DEBUG nova.virt.block_device [None req-370cf25d-b688-4552-ba71-81a567af186c 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Updating existing volume attachment record: 7ce78f0f-7b4e-4fe9-a6e7-12770333e8ce _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:30:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:30:23 np0005542249 nova_compute[254900]: 2025-12-02 11:30:23.057 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:30:23 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3771050873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:30:23 np0005542249 nova_compute[254900]: 2025-12-02 11:30:23.473 254904 DEBUG nova.objects.instance [None req-370cf25d-b688-4552-ba71-81a567af186c 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lazy-loading 'flavor' on Instance uuid a8aab2b3-e5a2-451d-b77a-9d977f1dd00f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:30:23 np0005542249 nova_compute[254900]: 2025-12-02 11:30:23.500 254904 DEBUG nova.virt.libvirt.driver [None req-370cf25d-b688-4552-ba71-81a567af186c 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Attempting to attach volume 4047b8dc-6681-4efe-a36e-920154488d60 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Dec  2 06:30:23 np0005542249 nova_compute[254900]: 2025-12-02 11:30:23.503 254904 DEBUG nova.virt.libvirt.guest [None req-370cf25d-b688-4552-ba71-81a567af186c 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] attach device xml: <disk type="network" device="disk">
Dec  2 06:30:23 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:30:23 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-4047b8dc-6681-4efe-a36e-920154488d60">
Dec  2 06:30:23 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:30:23 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:30:23 np0005542249 nova_compute[254900]:  <auth username="openstack">
Dec  2 06:30:23 np0005542249 nova_compute[254900]:    <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:30:23 np0005542249 nova_compute[254900]:  </auth>
Dec  2 06:30:23 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:30:23 np0005542249 nova_compute[254900]:  <serial>4047b8dc-6681-4efe-a36e-920154488d60</serial>
Dec  2 06:30:23 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:30:23 np0005542249 nova_compute[254900]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  2 06:30:23 np0005542249 nova_compute[254900]: 2025-12-02 11:30:23.650 254904 DEBUG nova.virt.libvirt.driver [None req-370cf25d-b688-4552-ba71-81a567af186c 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:30:23 np0005542249 nova_compute[254900]: 2025-12-02 11:30:23.650 254904 DEBUG nova.virt.libvirt.driver [None req-370cf25d-b688-4552-ba71-81a567af186c 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:30:23 np0005542249 nova_compute[254900]: 2025-12-02 11:30:23.651 254904 DEBUG nova.virt.libvirt.driver [None req-370cf25d-b688-4552-ba71-81a567af186c 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:30:23 np0005542249 nova_compute[254900]: 2025-12-02 11:30:23.652 254904 DEBUG nova.virt.libvirt.driver [None req-370cf25d-b688-4552-ba71-81a567af186c 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] No VIF found with MAC fa:16:3e:c4:74:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:30:23 np0005542249 nova_compute[254900]: 2025-12-02 11:30:23.878 254904 DEBUG oslo_concurrency.lockutils [None req-370cf25d-b688-4552-ba71-81a567af186c 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:23 np0005542249 nova_compute[254900]: 2025-12-02 11:30:23.917 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:24 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1618: 321 pgs: 321 active+clean; 432 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 727 KiB/s rd, 226 KiB/s wr, 115 op/s
Dec  2 06:30:24 np0005542249 nova_compute[254900]: 2025-12-02 11:30:24.656 254904 DEBUG oslo_concurrency.lockutils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "64fbd54d-f574-44e6-a788-53938d2219e8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:24 np0005542249 nova_compute[254900]: 2025-12-02 11:30:24.657 254904 DEBUG oslo_concurrency.lockutils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "64fbd54d-f574-44e6-a788-53938d2219e8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:24 np0005542249 nova_compute[254900]: 2025-12-02 11:30:24.674 254904 DEBUG nova.compute.manager [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 06:30:24 np0005542249 nova_compute[254900]: 2025-12-02 11:30:24.746 254904 DEBUG oslo_concurrency.lockutils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:24 np0005542249 nova_compute[254900]: 2025-12-02 11:30:24.747 254904 DEBUG oslo_concurrency.lockutils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:24 np0005542249 nova_compute[254900]: 2025-12-02 11:30:24.758 254904 DEBUG nova.virt.hardware [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 06:30:24 np0005542249 nova_compute[254900]: 2025-12-02 11:30:24.758 254904 INFO nova.compute.claims [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 06:30:24 np0005542249 nova_compute[254900]: 2025-12-02 11:30:24.767 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:30:24 np0005542249 nova_compute[254900]: 2025-12-02 11:30:24.804 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Triggering sync for uuid a8aab2b3-e5a2-451d-b77a-9d977f1dd00f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  2 06:30:24 np0005542249 nova_compute[254900]: 2025-12-02 11:30:24.804 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Triggering sync for uuid 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  2 06:30:24 np0005542249 nova_compute[254900]: 2025-12-02 11:30:24.805 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:24 np0005542249 nova_compute[254900]: 2025-12-02 11:30:24.806 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:24 np0005542249 nova_compute[254900]: 2025-12-02 11:30:24.807 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:24 np0005542249 nova_compute[254900]: 2025-12-02 11:30:24.807 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:24 np0005542249 nova_compute[254900]: 2025-12-02 11:30:24.857 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:24 np0005542249 nova_compute[254900]: 2025-12-02 11:30:24.858 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:24 np0005542249 ceph-osd[88961]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  2 06:30:24 np0005542249 ceph-osd[88961]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 25K writes, 100K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s#012Cumulative WAL: 25K writes, 9287 syncs, 2.79 writes per sync, written: 0.07 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 14K writes, 59K keys, 14K commit groups, 1.0 writes per commit group, ingest: 41.85 MB, 0.07 MB/s#012Interval WAL: 14K writes, 5993 syncs, 2.47 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  2 06:30:24 np0005542249 nova_compute[254900]: 2025-12-02 11:30:24.949 254904 DEBUG oslo_concurrency.processutils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:30:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:30:25 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/330031924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.477 254904 DEBUG oslo_concurrency.processutils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.488 254904 DEBUG nova.compute.provider_tree [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.510 254904 DEBUG nova.scheduler.client.report [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.541 254904 DEBUG oslo_concurrency.lockutils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.543 254904 DEBUG nova.compute.manager [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.587 254904 DEBUG nova.compute.manager [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.588 254904 DEBUG nova.network.neutron [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.614 254904 INFO nova.virt.libvirt.driver [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.636 254904 DEBUG nova.compute.manager [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.695 254904 INFO nova.virt.block_device [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Booting with volume 8385edec-40f0-49d0-85a2-65e771001e39 at /dev/vda#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.768 254904 DEBUG nova.policy [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6ccb73a613554d938221b4bf46d7ae83', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '625a6939c31646a4a83ea851774cf28c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.880 254904 DEBUG os_brick.utils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.882 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.901 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.901 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[e8050f5f-7b47-4a35-a601-9b8917fa3636]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.903 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.917 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.918 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[ce557a1e-9c5d-49e4-adc0-4851ade59bf4]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.920 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.937 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.938 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[780d331e-cbe9-4dd5-8380-b5339f32f51d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.940 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[c61b8074-c6c5-48c2-aacc-477753918fbd]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.941 254904 DEBUG oslo_concurrency.processutils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.974 254904 DEBUG oslo_concurrency.processutils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.979 254904 DEBUG os_brick.initiator.connectors.lightos [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.979 254904 DEBUG os_brick.initiator.connectors.lightos [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.980 254904 DEBUG os_brick.initiator.connectors.lightos [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.980 254904 DEBUG os_brick.utils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] <== get_connector_properties: return (99ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:30:25 np0005542249 nova_compute[254900]: 2025-12-02 11:30:25.981 254904 DEBUG nova.virt.block_device [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Updating existing volume attachment record: 4a0badaf-a075-4687-be18-35fc92e80ca9 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:30:26 np0005542249 podman[289924]: 2025-12-02 11:30:26.031752361 +0000 UTC m=+0.093797761 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  2 06:30:26 np0005542249 podman[289927]: 2025-12-02 11:30:26.132868879 +0000 UTC m=+0.196720437 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec  2 06:30:26 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1619: 321 pgs: 321 active+clean; 432 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 700 KiB/s rd, 122 KiB/s wr, 102 op/s
Dec  2 06:30:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:30:26
Dec  2 06:30:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:30:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:30:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'vms', 'default.rgw.control', 'backups', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'default.rgw.log']
Dec  2 06:30:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:30:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:30:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:30:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:30:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:30:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:30:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:30:26 np0005542249 nova_compute[254900]: 2025-12-02 11:30:26.492 254904 DEBUG nova.network.neutron [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Successfully created port: 1c8dc20d-b649-4107-a95c-d427a6c8f59a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 06:30:26 np0005542249 nova_compute[254900]: 2025-12-02 11:30:26.633 254904 DEBUG oslo_concurrency.lockutils [None req-e25ad5c9-a71f-4a82-809c-0142729f93a1 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Acquiring lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:26 np0005542249 nova_compute[254900]: 2025-12-02 11:30:26.634 254904 DEBUG oslo_concurrency.lockutils [None req-e25ad5c9-a71f-4a82-809c-0142729f93a1 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:26 np0005542249 nova_compute[254900]: 2025-12-02 11:30:26.647 254904 INFO nova.compute.manager [None req-e25ad5c9-a71f-4a82-809c-0142729f93a1 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Detaching volume 4047b8dc-6681-4efe-a36e-920154488d60#033[00m
Dec  2 06:30:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:30:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/968301220' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:30:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:30:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:30:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:30:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:30:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:30:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:30:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:30:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:30:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:30:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:30:26 np0005542249 nova_compute[254900]: 2025-12-02 11:30:26.823 254904 INFO nova.virt.block_device [None req-e25ad5c9-a71f-4a82-809c-0142729f93a1 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Attempting to driver detach volume 4047b8dc-6681-4efe-a36e-920154488d60 from mountpoint /dev/vdb#033[00m
Dec  2 06:30:26 np0005542249 nova_compute[254900]: 2025-12-02 11:30:26.838 254904 DEBUG nova.virt.libvirt.driver [None req-e25ad5c9-a71f-4a82-809c-0142729f93a1 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Attempting to detach device vdb from instance a8aab2b3-e5a2-451d-b77a-9d977f1dd00f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  2 06:30:26 np0005542249 nova_compute[254900]: 2025-12-02 11:30:26.839 254904 DEBUG nova.virt.libvirt.guest [None req-e25ad5c9-a71f-4a82-809c-0142729f93a1 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] detach device xml: <disk type="network" device="disk">
Dec  2 06:30:26 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:30:26 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-4047b8dc-6681-4efe-a36e-920154488d60">
Dec  2 06:30:26 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:30:26 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:30:26 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:30:26 np0005542249 nova_compute[254900]:  <serial>4047b8dc-6681-4efe-a36e-920154488d60</serial>
Dec  2 06:30:26 np0005542249 nova_compute[254900]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  2 06:30:26 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:30:26 np0005542249 nova_compute[254900]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  2 06:30:26 np0005542249 nova_compute[254900]: 2025-12-02 11:30:26.848 254904 INFO nova.virt.libvirt.driver [None req-e25ad5c9-a71f-4a82-809c-0142729f93a1 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Successfully detached device vdb from instance a8aab2b3-e5a2-451d-b77a-9d977f1dd00f from the persistent domain config.#033[00m
Dec  2 06:30:26 np0005542249 nova_compute[254900]: 2025-12-02 11:30:26.849 254904 DEBUG nova.virt.libvirt.driver [None req-e25ad5c9-a71f-4a82-809c-0142729f93a1 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance a8aab2b3-e5a2-451d-b77a-9d977f1dd00f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec  2 06:30:26 np0005542249 nova_compute[254900]: 2025-12-02 11:30:26.850 254904 DEBUG nova.virt.libvirt.guest [None req-e25ad5c9-a71f-4a82-809c-0142729f93a1 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] detach device xml: <disk type="network" device="disk">
Dec  2 06:30:26 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:30:26 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-4047b8dc-6681-4efe-a36e-920154488d60">
Dec  2 06:30:26 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:30:26 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:30:26 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:30:26 np0005542249 nova_compute[254900]:  <serial>4047b8dc-6681-4efe-a36e-920154488d60</serial>
Dec  2 06:30:26 np0005542249 nova_compute[254900]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  2 06:30:26 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:30:26 np0005542249 nova_compute[254900]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  2 06:30:27 np0005542249 nova_compute[254900]: 2025-12-02 11:30:27.004 254904 DEBUG nova.virt.libvirt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Received event <DeviceRemovedEvent: 1764675027.0041397, a8aab2b3-e5a2-451d-b77a-9d977f1dd00f => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec  2 06:30:27 np0005542249 nova_compute[254900]: 2025-12-02 11:30:27.006 254904 DEBUG nova.virt.libvirt.driver [None req-e25ad5c9-a71f-4a82-809c-0142729f93a1 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance a8aab2b3-e5a2-451d-b77a-9d977f1dd00f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec  2 06:30:27 np0005542249 nova_compute[254900]: 2025-12-02 11:30:27.010 254904 INFO nova.virt.libvirt.driver [None req-e25ad5c9-a71f-4a82-809c-0142729f93a1 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Successfully detached device vdb from instance a8aab2b3-e5a2-451d-b77a-9d977f1dd00f from the live domain config.#033[00m
Dec  2 06:30:27 np0005542249 nova_compute[254900]: 2025-12-02 11:30:27.033 254904 DEBUG nova.compute.manager [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 06:30:27 np0005542249 nova_compute[254900]: 2025-12-02 11:30:27.036 254904 DEBUG nova.virt.libvirt.driver [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 06:30:27 np0005542249 nova_compute[254900]: 2025-12-02 11:30:27.037 254904 INFO nova.virt.libvirt.driver [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Creating image(s)#033[00m
Dec  2 06:30:27 np0005542249 nova_compute[254900]: 2025-12-02 11:30:27.038 254904 DEBUG nova.virt.libvirt.driver [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  2 06:30:27 np0005542249 nova_compute[254900]: 2025-12-02 11:30:27.038 254904 DEBUG nova.virt.libvirt.driver [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Ensure instance console log exists: /var/lib/nova/instances/64fbd54d-f574-44e6-a788-53938d2219e8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 06:30:27 np0005542249 nova_compute[254900]: 2025-12-02 11:30:27.039 254904 DEBUG oslo_concurrency.lockutils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:27 np0005542249 nova_compute[254900]: 2025-12-02 11:30:27.040 254904 DEBUG oslo_concurrency.lockutils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:27 np0005542249 nova_compute[254900]: 2025-12-02 11:30:27.040 254904 DEBUG oslo_concurrency.lockutils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:27 np0005542249 nova_compute[254900]: 2025-12-02 11:30:27.197 254904 DEBUG nova.objects.instance [None req-e25ad5c9-a71f-4a82-809c-0142729f93a1 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lazy-loading 'flavor' on Instance uuid a8aab2b3-e5a2-451d-b77a-9d977f1dd00f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:30:27 np0005542249 nova_compute[254900]: 2025-12-02 11:30:27.239 254904 DEBUG oslo_concurrency.lockutils [None req-e25ad5c9-a71f-4a82-809c-0142729f93a1 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:27 np0005542249 nova_compute[254900]: 2025-12-02 11:30:27.413 254904 DEBUG nova.network.neutron [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Successfully updated port: 1c8dc20d-b649-4107-a95c-d427a6c8f59a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 06:30:27 np0005542249 nova_compute[254900]: 2025-12-02 11:30:27.439 254904 DEBUG oslo_concurrency.lockutils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "refresh_cache-64fbd54d-f574-44e6-a788-53938d2219e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:30:27 np0005542249 nova_compute[254900]: 2025-12-02 11:30:27.439 254904 DEBUG oslo_concurrency.lockutils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquired lock "refresh_cache-64fbd54d-f574-44e6-a788-53938d2219e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:30:27 np0005542249 nova_compute[254900]: 2025-12-02 11:30:27.440 254904 DEBUG nova.network.neutron [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 06:30:27 np0005542249 nova_compute[254900]: 2025-12-02 11:30:27.493 254904 DEBUG nova.compute.manager [req-4e907761-4ff3-4334-bc00-e9c76b2b5fce req-edc32d61-6359-4e20-9aa5-d15b40c7a5e4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Received event network-changed-1c8dc20d-b649-4107-a95c-d427a6c8f59a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:30:27 np0005542249 nova_compute[254900]: 2025-12-02 11:30:27.494 254904 DEBUG nova.compute.manager [req-4e907761-4ff3-4334-bc00-e9c76b2b5fce req-edc32d61-6359-4e20-9aa5-d15b40c7a5e4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Refreshing instance network info cache due to event network-changed-1c8dc20d-b649-4107-a95c-d427a6c8f59a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:30:27 np0005542249 nova_compute[254900]: 2025-12-02 11:30:27.494 254904 DEBUG oslo_concurrency.lockutils [req-4e907761-4ff3-4334-bc00-e9c76b2b5fce req-edc32d61-6359-4e20-9aa5-d15b40c7a5e4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-64fbd54d-f574-44e6-a788-53938d2219e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:30:27 np0005542249 nova_compute[254900]: 2025-12-02 11:30:27.627 254904 DEBUG nova.network.neutron [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 06:30:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.061 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:28 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1620: 321 pgs: 321 active+clean; 433 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 763 KiB/s rd, 174 KiB/s wr, 112 op/s
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.614 254904 DEBUG nova.network.neutron [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Updating instance_info_cache with network_info: [{"id": "1c8dc20d-b649-4107-a95c-d427a6c8f59a", "address": "fa:16:3e:e1:5e:cc", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c8dc20d-b6", "ovs_interfaceid": "1c8dc20d-b649-4107-a95c-d427a6c8f59a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.630 254904 DEBUG oslo_concurrency.lockutils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Releasing lock "refresh_cache-64fbd54d-f574-44e6-a788-53938d2219e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.630 254904 DEBUG nova.compute.manager [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Instance network_info: |[{"id": "1c8dc20d-b649-4107-a95c-d427a6c8f59a", "address": "fa:16:3e:e1:5e:cc", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c8dc20d-b6", "ovs_interfaceid": "1c8dc20d-b649-4107-a95c-d427a6c8f59a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.630 254904 DEBUG oslo_concurrency.lockutils [req-4e907761-4ff3-4334-bc00-e9c76b2b5fce req-edc32d61-6359-4e20-9aa5-d15b40c7a5e4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-64fbd54d-f574-44e6-a788-53938d2219e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.631 254904 DEBUG nova.network.neutron [req-4e907761-4ff3-4334-bc00-e9c76b2b5fce req-edc32d61-6359-4e20-9aa5-d15b40c7a5e4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Refreshing network info cache for port 1c8dc20d-b649-4107-a95c-d427a6c8f59a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.634 254904 DEBUG nova.virt.libvirt.driver [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Start _get_guest_xml network_info=[{"id": "1c8dc20d-b649-4107-a95c-d427a6c8f59a", "address": "fa:16:3e:e1:5e:cc", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c8dc20d-b6", "ovs_interfaceid": "1c8dc20d-b649-4107-a95c-d427a6c8f59a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-8385edec-40f0-49d0-85a2-65e771001e39', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '8385edec-40f0-49d0-85a2-65e771001e39', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '64fbd54d-f574-44e6-a788-53938d2219e8', 'attached_at': '', 'detached_at': '', 'volume_id': '8385edec-40f0-49d0-85a2-65e771001e39', 'serial': '8385edec-40f0-49d0-85a2-65e771001e39'}, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'attachment_id': '4a0badaf-a075-4687-be18-35fc92e80ca9', 'delete_on_termination': False, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.639 254904 WARNING nova.virt.libvirt.driver [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.650 254904 DEBUG nova.virt.libvirt.host [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.651 254904 DEBUG nova.virt.libvirt.host [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.656 254904 DEBUG nova.virt.libvirt.host [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.656 254904 DEBUG nova.virt.libvirt.host [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.657 254904 DEBUG nova.virt.libvirt.driver [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.657 254904 DEBUG nova.virt.hardware [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.658 254904 DEBUG nova.virt.hardware [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.658 254904 DEBUG nova.virt.hardware [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.658 254904 DEBUG nova.virt.hardware [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.659 254904 DEBUG nova.virt.hardware [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.659 254904 DEBUG nova.virt.hardware [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.659 254904 DEBUG nova.virt.hardware [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.659 254904 DEBUG nova.virt.hardware [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.660 254904 DEBUG nova.virt.hardware [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.660 254904 DEBUG nova.virt.hardware [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.660 254904 DEBUG nova.virt.hardware [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.689 254904 DEBUG nova.storage.rbd_utils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] rbd image 64fbd54d-f574-44e6-a788-53938d2219e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.694 254904 DEBUG oslo_concurrency.processutils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:30:28 np0005542249 nova_compute[254900]: 2025-12-02 11:30:28.922 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:30:29 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3138270533' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.151 254904 DEBUG oslo_concurrency.processutils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.194 254904 DEBUG nova.virt.libvirt.vif [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:30:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1228569889',display_name='tempest-TestVolumeBootPattern-server-1228569889',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1228569889',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDl9Chitcp+6ZZ9O/so1iQpbrg+ZOVWOrATMsWTbgaWcZg2lFiQK4KEUyaqp5+G/z2wPorJssN622GdMYPRLScxIeivbRrFeE5q310MfETTcDT4f8HB9OmcWcicW5ZF4QA==',key_name='tempest-TestVolumeBootPattern-765108891',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='625a6939c31646a4a83ea851774cf28c',ramdisk_id='',reservation_id='r-ltnonuu0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1396850361',owner_user_name='tempest-TestVolumeBootPattern-1396850361-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:30:25Z,user_data=None,user_id='6ccb73a613554d938221b4bf46d7ae83',uuid=64fbd54d-f574-44e6-a788-53938d2219e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c8dc20d-b649-4107-a95c-d427a6c8f59a", "address": "fa:16:3e:e1:5e:cc", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c8dc20d-b6", "ovs_interfaceid": "1c8dc20d-b649-4107-a95c-d427a6c8f59a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.194 254904 DEBUG nova.network.os_vif_util [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converting VIF {"id": "1c8dc20d-b649-4107-a95c-d427a6c8f59a", "address": "fa:16:3e:e1:5e:cc", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c8dc20d-b6", "ovs_interfaceid": "1c8dc20d-b649-4107-a95c-d427a6c8f59a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.196 254904 DEBUG nova.network.os_vif_util [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:5e:cc,bridge_name='br-int',has_traffic_filtering=True,id=1c8dc20d-b649-4107-a95c-d427a6c8f59a,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c8dc20d-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.198 254904 DEBUG nova.objects.instance [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lazy-loading 'pci_devices' on Instance uuid 64fbd54d-f574-44e6-a788-53938d2219e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.222 254904 DEBUG nova.virt.libvirt.driver [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:30:29 np0005542249 nova_compute[254900]:  <uuid>64fbd54d-f574-44e6-a788-53938d2219e8</uuid>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:  <name>instance-00000018</name>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <nova:name>tempest-TestVolumeBootPattern-server-1228569889</nova:name>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:30:28</nova:creationTime>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:30:29 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:        <nova:user uuid="6ccb73a613554d938221b4bf46d7ae83">tempest-TestVolumeBootPattern-1396850361-project-member</nova:user>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:        <nova:project uuid="625a6939c31646a4a83ea851774cf28c">tempest-TestVolumeBootPattern-1396850361</nova:project>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:        <nova:port uuid="1c8dc20d-b649-4107-a95c-d427a6c8f59a">
Dec  2 06:30:29 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <entry name="serial">64fbd54d-f574-44e6-a788-53938d2219e8</entry>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <entry name="uuid">64fbd54d-f574-44e6-a788-53938d2219e8</entry>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/64fbd54d-f574-44e6-a788-53938d2219e8_disk.config">
Dec  2 06:30:29 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:30:29 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="volumes/volume-8385edec-40f0-49d0-85a2-65e771001e39">
Dec  2 06:30:29 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:30:29 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <serial>8385edec-40f0-49d0-85a2-65e771001e39</serial>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:e1:5e:cc"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <target dev="tap1c8dc20d-b6"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/64fbd54d-f574-44e6-a788-53938d2219e8/console.log" append="off"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:30:29 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:30:29 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:30:29 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:30:29 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.223 254904 DEBUG nova.compute.manager [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Preparing to wait for external event network-vif-plugged-1c8dc20d-b649-4107-a95c-d427a6c8f59a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.224 254904 DEBUG oslo_concurrency.lockutils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "64fbd54d-f574-44e6-a788-53938d2219e8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.224 254904 DEBUG oslo_concurrency.lockutils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "64fbd54d-f574-44e6-a788-53938d2219e8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.224 254904 DEBUG oslo_concurrency.lockutils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "64fbd54d-f574-44e6-a788-53938d2219e8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.226 254904 DEBUG nova.virt.libvirt.vif [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:30:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1228569889',display_name='tempest-TestVolumeBootPattern-server-1228569889',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1228569889',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDl9Chitcp+6ZZ9O/so1iQpbrg+ZOVWOrATMsWTbgaWcZg2lFiQK4KEUyaqp5+G/z2wPorJssN622GdMYPRLScxIeivbRrFeE5q310MfETTcDT4f8HB9OmcWcicW5ZF4QA==',key_name='tempest-TestVolumeBootPattern-765108891',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='625a6939c31646a4a83ea851774cf28c',ramdisk_id='',reservation_id='r-ltnonuu0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1396850361',owner_user_name='tempest-TestVolumeBootPattern-1396850361-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:30:25Z,user_data=None,user_id='6ccb73a613554d938221b4bf46d7ae83',uuid=64fbd54d-f574-44e6-a788-53938d2219e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c8dc20d-b649-4107-a95c-d427a6c8f59a", "address": "fa:16:3e:e1:5e:cc", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c8dc20d-b6", "ovs_interfaceid": "1c8dc20d-b649-4107-a95c-d427a6c8f59a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.226 254904 DEBUG nova.network.os_vif_util [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converting VIF {"id": "1c8dc20d-b649-4107-a95c-d427a6c8f59a", "address": "fa:16:3e:e1:5e:cc", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c8dc20d-b6", "ovs_interfaceid": "1c8dc20d-b649-4107-a95c-d427a6c8f59a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.227 254904 DEBUG nova.network.os_vif_util [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:5e:cc,bridge_name='br-int',has_traffic_filtering=True,id=1c8dc20d-b649-4107-a95c-d427a6c8f59a,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c8dc20d-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.228 254904 DEBUG os_vif [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:5e:cc,bridge_name='br-int',has_traffic_filtering=True,id=1c8dc20d-b649-4107-a95c-d427a6c8f59a,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c8dc20d-b6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.229 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.230 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.230 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.235 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.235 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c8dc20d-b6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.236 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1c8dc20d-b6, col_values=(('external_ids', {'iface-id': '1c8dc20d-b649-4107-a95c-d427a6c8f59a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e1:5e:cc', 'vm-uuid': '64fbd54d-f574-44e6-a788-53938d2219e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.238 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:29 np0005542249 NetworkManager[48987]: <info>  [1764675029.2407] manager: (tap1c8dc20d-b6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/121)
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.241 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.248 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.249 254904 INFO os_vif [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:5e:cc,bridge_name='br-int',has_traffic_filtering=True,id=1c8dc20d-b649-4107-a95c-d427a6c8f59a,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c8dc20d-b6')#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.316 254904 DEBUG nova.virt.libvirt.driver [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.317 254904 DEBUG nova.virt.libvirt.driver [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.317 254904 DEBUG nova.virt.libvirt.driver [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] No VIF found with MAC fa:16:3e:e1:5e:cc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.318 254904 INFO nova.virt.libvirt.driver [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Using config drive#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.350 254904 DEBUG nova.storage.rbd_utils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] rbd image 64fbd54d-f574-44e6-a788-53938d2219e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.722 254904 INFO nova.virt.libvirt.driver [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Creating config drive at /var/lib/nova/instances/64fbd54d-f574-44e6-a788-53938d2219e8/disk.config#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.732 254904 DEBUG oslo_concurrency.processutils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/64fbd54d-f574-44e6-a788-53938d2219e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcefw7etq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.871 254904 DEBUG oslo_concurrency.processutils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/64fbd54d-f574-44e6-a788-53938d2219e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcefw7etq" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.910 254904 DEBUG nova.storage.rbd_utils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] rbd image 64fbd54d-f574-44e6-a788-53938d2219e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.915 254904 DEBUG oslo_concurrency.processutils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/64fbd54d-f574-44e6-a788-53938d2219e8/disk.config 64fbd54d-f574-44e6-a788-53938d2219e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.958 254904 DEBUG oslo_concurrency.lockutils [None req-2c0dd758-60ee-41c9-8679-61920a22a98b 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Acquiring lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.958 254904 DEBUG oslo_concurrency.lockutils [None req-2c0dd758-60ee-41c9-8679-61920a22a98b 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:29 np0005542249 nova_compute[254900]: 2025-12-02 11:30:29.983 254904 DEBUG nova.objects.instance [None req-2c0dd758-60ee-41c9-8679-61920a22a98b 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lazy-loading 'flavor' on Instance uuid a8aab2b3-e5a2-451d-b77a-9d977f1dd00f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.049 254904 DEBUG oslo_concurrency.lockutils [None req-2c0dd758-60ee-41c9-8679-61920a22a98b 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.079 254904 DEBUG oslo_concurrency.processutils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/64fbd54d-f574-44e6-a788-53938d2219e8/disk.config 64fbd54d-f574-44e6-a788-53938d2219e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.079 254904 INFO nova.virt.libvirt.driver [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Deleting local config drive /var/lib/nova/instances/64fbd54d-f574-44e6-a788-53938d2219e8/disk.config because it was imported into RBD.#033[00m
Dec  2 06:30:30 np0005542249 kernel: tap1c8dc20d-b6: entered promiscuous mode
Dec  2 06:30:30 np0005542249 NetworkManager[48987]: <info>  [1764675030.1567] manager: (tap1c8dc20d-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/122)
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.156 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:30 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:30Z|00219|binding|INFO|Claiming lport 1c8dc20d-b649-4107-a95c-d427a6c8f59a for this chassis.
Dec  2 06:30:30 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:30Z|00220|binding|INFO|1c8dc20d-b649-4107-a95c-d427a6c8f59a: Claiming fa:16:3e:e1:5e:cc 10.100.0.6
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.165 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:5e:cc 10.100.0.6'], port_security=['fa:16:3e:e1:5e:cc 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '64fbd54d-f574-44e6-a788-53938d2219e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '625a6939c31646a4a83ea851774cf28c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bf93629e-6336-4a9c-a41d-6ce19e6b6662', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cc823ef0-3e69-4062-a488-a82f483f10bb, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=1c8dc20d-b649-4107-a95c-d427a6c8f59a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.167 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 1c8dc20d-b649-4107-a95c-d427a6c8f59a in datapath acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 bound to our chassis#033[00m
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.169 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754#033[00m
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.183 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[3478c4f2-5c34-4e49-bdcd-f2eb33e5d421]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.184 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapacfaa8ac-01 in ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.187 262398 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapacfaa8ac-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.187 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[3404ee33-dc44-4e1b-903a-48333552f8f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.189 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[e6fb5bf9-65ce-4722-a9f4-e0f69adc9457]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:30 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:30Z|00221|binding|INFO|Setting lport 1c8dc20d-b649-4107-a95c-d427a6c8f59a ovn-installed in OVS
Dec  2 06:30:30 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:30Z|00222|binding|INFO|Setting lport 1c8dc20d-b649-4107-a95c-d427a6c8f59a up in Southbound
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.196 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:30 np0005542249 systemd-machined[216222]: New machine qemu-24-instance-00000018.
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.201 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:30 np0005542249 systemd[1]: Started Virtual Machine qemu-24-instance-00000018.
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.218 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[933375a4-9727-45be-8d7b-050226c92fc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:30 np0005542249 systemd-udevd[290094]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.255 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[701d449c-5f1b-4f8a-8c75-76e40c086f73]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:30 np0005542249 NetworkManager[48987]: <info>  [1764675030.2722] device (tap1c8dc20d-b6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:30:30 np0005542249 NetworkManager[48987]: <info>  [1764675030.2745] device (tap1c8dc20d-b6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.307 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[86f70694-551d-4608-bd88-72b0708fc303]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:30 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1621: 321 pgs: 321 active+clean; 433 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 154 KiB/s wr, 65 op/s
Dec  2 06:30:30 np0005542249 NetworkManager[48987]: <info>  [1764675030.3153] manager: (tapacfaa8ac-00): new Veth device (/org/freedesktop/NetworkManager/Devices/123)
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.314 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[b3e94a83-6b30-4827-8dd2-708f5427a64c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:30 np0005542249 systemd-udevd[290098]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:30:30 np0005542249 ceph-osd[89966]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  2 06:30:30 np0005542249 ceph-osd[89966]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 24K writes, 97K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s#012Cumulative WAL: 24K writes, 8490 syncs, 2.83 writes per sync, written: 0.06 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 49K keys, 11K commit groups, 1.0 writes per commit group, ingest: 31.12 MB, 0.05 MB/s#012Interval WAL: 11K writes, 4809 syncs, 2.40 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.363 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[7ace9cdb-5ef0-485f-83b8-bfb8f836d4af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.367 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[6e970466-d39f-46b8-97fe-6ce9868dbf56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:30 np0005542249 NetworkManager[48987]: <info>  [1764675030.3944] device (tapacfaa8ac-00): carrier: link connected
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.399 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[ed51bb09-717c-49f9-a71c-ea6d63fe38a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.419 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[58e8c999-1b94-4b6b-886b-f1ee65c6e69d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacfaa8ac-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:73:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522321, 'reachable_time': 34711, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290124, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.439 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[5bf574c1-01a8-456f-91ec-751434792ee6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fece:73a3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522321, 'tstamp': 522321}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290125, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.456 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[c7f6d1b9-3622-4b5f-b289-bea198effaa8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacfaa8ac-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:73:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522321, 'reachable_time': 34711, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290126, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.459 254904 DEBUG oslo_concurrency.lockutils [None req-2c0dd758-60ee-41c9-8679-61920a22a98b 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Acquiring lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.459 254904 DEBUG oslo_concurrency.lockutils [None req-2c0dd758-60ee-41c9-8679-61920a22a98b 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.460 254904 INFO nova.compute.manager [None req-2c0dd758-60ee-41c9-8679-61920a22a98b 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Attaching volume b36a6a0f-b264-4083-ad1c-f8fdc99d2841 to /dev/vdb#033[00m
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.495 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[32de788d-4223-45ee-a635-a84134558cc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.556 254904 DEBUG nova.compute.manager [req-296cddef-5870-4bb9-839d-8394fca56034 req-e64af363-bf49-4836-b58f-cbf7a21c1bc3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Received event network-vif-plugged-1c8dc20d-b649-4107-a95c-d427a6c8f59a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.556 254904 DEBUG oslo_concurrency.lockutils [req-296cddef-5870-4bb9-839d-8394fca56034 req-e64af363-bf49-4836-b58f-cbf7a21c1bc3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "64fbd54d-f574-44e6-a788-53938d2219e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.558 254904 DEBUG oslo_concurrency.lockutils [req-296cddef-5870-4bb9-839d-8394fca56034 req-e64af363-bf49-4836-b58f-cbf7a21c1bc3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "64fbd54d-f574-44e6-a788-53938d2219e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.559 254904 DEBUG oslo_concurrency.lockutils [req-296cddef-5870-4bb9-839d-8394fca56034 req-e64af363-bf49-4836-b58f-cbf7a21c1bc3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "64fbd54d-f574-44e6-a788-53938d2219e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.560 254904 DEBUG nova.compute.manager [req-296cddef-5870-4bb9-839d-8394fca56034 req-e64af363-bf49-4836-b58f-cbf7a21c1bc3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Processing event network-vif-plugged-1c8dc20d-b649-4107-a95c-d427a6c8f59a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.591 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[3fa93c0a-1d4f-452c-b2c7-dc76973ee57e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.592 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacfaa8ac-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.593 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.593 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapacfaa8ac-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.595 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:30 np0005542249 NetworkManager[48987]: <info>  [1764675030.5967] manager: (tapacfaa8ac-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/124)
Dec  2 06:30:30 np0005542249 kernel: tapacfaa8ac-00: entered promiscuous mode
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.599 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.600 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapacfaa8ac-00, col_values=(('external_ids', {'iface-id': '1636ad30-406d-4138-823e-abbe7f4d87ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.601 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:30 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:30Z|00223|binding|INFO|Releasing lport 1636ad30-406d-4138-823e-abbe7f4d87ac from this chassis (sb_readonly=0)
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.615 254904 DEBUG os_brick.utils [None req-2c0dd758-60ee-41c9-8679-61920a22a98b 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.616 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.631 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.633 163757 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.637 254904 DEBUG nova.network.neutron [req-4e907761-4ff3-4334-bc00-e9c76b2b5fce req-edc32d61-6359-4e20-9aa5-d15b40c7a5e4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Updated VIF entry in instance network info cache for port 1c8dc20d-b649-4107-a95c-d427a6c8f59a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.636 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe7abf4-5286-4560-8bee-a777e1977f0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.637 254904 DEBUG nova.network.neutron [req-4e907761-4ff3-4334-bc00-e9c76b2b5fce req-edc32d61-6359-4e20-9aa5-d15b40c7a5e4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Updating instance_info_cache with network_info: [{"id": "1c8dc20d-b649-4107-a95c-d427a6c8f59a", "address": "fa:16:3e:e1:5e:cc", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c8dc20d-b6", "ovs_interfaceid": "1c8dc20d-b649-4107-a95c-d427a6c8f59a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.638 163757 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: global
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]:    log         /dev/log local0 debug
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]:    log-tag     haproxy-metadata-proxy-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]:    user        root
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]:    group       root
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]:    maxconn     1024
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]:    pidfile     /var/lib/neutron/external/pids/acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754.pid.haproxy
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]:    daemon
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: defaults
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]:    log global
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]:    mode http
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]:    option httplog
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]:    option dontlognull
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]:    option http-server-close
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]:    option forwardfor
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]:    retries                 3
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]:    timeout http-request    30s
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]:    timeout connect         30s
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]:    timeout client          32s
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]:    timeout server          32s
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]:    timeout http-keep-alive 30s
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: listen listener
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]:    bind 169.254.169.254:80
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]:    http-request add-header X-OVN-Network-ID acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 06:30:30 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:30.638 163757 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'env', 'PROCESS_TAG=haproxy-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.637 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.637 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[d61994ff-8a5c-4a59-afa2-7618efaa7e88]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.639 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.655 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.656 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[01cf9766-cc24-4ced-a95b-9ae850f4232f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.657 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.660 254904 DEBUG oslo_concurrency.lockutils [req-4e907761-4ff3-4334-bc00-e9c76b2b5fce req-edc32d61-6359-4e20-9aa5-d15b40c7a5e4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-64fbd54d-f574-44e6-a788-53938d2219e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.667 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.667 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[489b3a40-bbaf-4da9-9248-12396d1bde88]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.668 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[d6bf9584-81bb-434c-abfa-60b268e486b2]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.669 254904 DEBUG oslo_concurrency.processutils [None req-2c0dd758-60ee-41c9-8679-61920a22a98b 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.694 254904 DEBUG oslo_concurrency.processutils [None req-2c0dd758-60ee-41c9-8679-61920a22a98b 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.696 254904 DEBUG os_brick.initiator.connectors.lightos [None req-2c0dd758-60ee-41c9-8679-61920a22a98b 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.697 254904 DEBUG os_brick.initiator.connectors.lightos [None req-2c0dd758-60ee-41c9-8679-61920a22a98b 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.697 254904 DEBUG os_brick.initiator.connectors.lightos [None req-2c0dd758-60ee-41c9-8679-61920a22a98b 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.697 254904 DEBUG os_brick.utils [None req-2c0dd758-60ee-41c9-8679-61920a22a98b 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] <== get_connector_properties: return (81ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:30:30 np0005542249 nova_compute[254900]: 2025-12-02 11:30:30.697 254904 DEBUG nova.virt.block_device [None req-2c0dd758-60ee-41c9-8679-61920a22a98b 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Updating existing volume attachment record: e67494f1-bbdd-4380-944f-daee17257b4f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:30:31 np0005542249 podman[290166]: 2025-12-02 11:30:31.046129234 +0000 UTC m=+0.048750761 container create 9d220dcd5324280330a5da7a7b12b176dc46590d6267ae6eec14ae16f02ed717 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  2 06:30:31 np0005542249 systemd[1]: Started libpod-conmon-9d220dcd5324280330a5da7a7b12b176dc46590d6267ae6eec14ae16f02ed717.scope.
Dec  2 06:30:31 np0005542249 podman[290166]: 2025-12-02 11:30:31.022784402 +0000 UTC m=+0.025405949 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:30:31 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:30:31 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974aa7c9684c4968cf9a078d2a523b381430a175240095e528c6354c116af2e7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 06:30:31 np0005542249 podman[290166]: 2025-12-02 11:30:31.148202897 +0000 UTC m=+0.150824444 container init 9d220dcd5324280330a5da7a7b12b176dc46590d6267ae6eec14ae16f02ed717 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:30:31 np0005542249 podman[290166]: 2025-12-02 11:30:31.155103084 +0000 UTC m=+0.157724621 container start 9d220dcd5324280330a5da7a7b12b176dc46590d6267ae6eec14ae16f02ed717 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:30:31 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[290181]: [NOTICE]   (290185) : New worker (290187) forked
Dec  2 06:30:31 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[290181]: [NOTICE]   (290185) : Loading success.
Dec  2 06:30:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:30:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/820732455' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:30:31 np0005542249 nova_compute[254900]: 2025-12-02 11:30:31.356 254904 DEBUG nova.objects.instance [None req-2c0dd758-60ee-41c9-8679-61920a22a98b 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lazy-loading 'flavor' on Instance uuid a8aab2b3-e5a2-451d-b77a-9d977f1dd00f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:30:31 np0005542249 nova_compute[254900]: 2025-12-02 11:30:31.383 254904 DEBUG nova.virt.libvirt.driver [None req-2c0dd758-60ee-41c9-8679-61920a22a98b 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Attempting to attach volume b36a6a0f-b264-4083-ad1c-f8fdc99d2841 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Dec  2 06:30:31 np0005542249 nova_compute[254900]: 2025-12-02 11:30:31.386 254904 DEBUG nova.virt.libvirt.guest [None req-2c0dd758-60ee-41c9-8679-61920a22a98b 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] attach device xml: <disk type="network" device="disk">
Dec  2 06:30:31 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:30:31 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-b36a6a0f-b264-4083-ad1c-f8fdc99d2841">
Dec  2 06:30:31 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:30:31 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:30:31 np0005542249 nova_compute[254900]:  <auth username="openstack">
Dec  2 06:30:31 np0005542249 nova_compute[254900]:    <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:30:31 np0005542249 nova_compute[254900]:  </auth>
Dec  2 06:30:31 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:30:31 np0005542249 nova_compute[254900]:  <serial>b36a6a0f-b264-4083-ad1c-f8fdc99d2841</serial>
Dec  2 06:30:31 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:30:31 np0005542249 nova_compute[254900]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  2 06:30:31 np0005542249 nova_compute[254900]: 2025-12-02 11:30:31.510 254904 DEBUG nova.virt.libvirt.driver [None req-2c0dd758-60ee-41c9-8679-61920a22a98b 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:30:31 np0005542249 nova_compute[254900]: 2025-12-02 11:30:31.511 254904 DEBUG nova.virt.libvirt.driver [None req-2c0dd758-60ee-41c9-8679-61920a22a98b 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:30:31 np0005542249 nova_compute[254900]: 2025-12-02 11:30:31.511 254904 DEBUG nova.virt.libvirt.driver [None req-2c0dd758-60ee-41c9-8679-61920a22a98b 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:30:31 np0005542249 nova_compute[254900]: 2025-12-02 11:30:31.511 254904 DEBUG nova.virt.libvirt.driver [None req-2c0dd758-60ee-41c9-8679-61920a22a98b 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] No VIF found with MAC fa:16:3e:c4:74:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:30:31 np0005542249 nova_compute[254900]: 2025-12-02 11:30:31.748 254904 DEBUG oslo_concurrency.lockutils [None req-2c0dd758-60ee-41c9-8679-61920a22a98b 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.289s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.030 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764675032.0300922, 64fbd54d-f574-44e6-a788-53938d2219e8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.031 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] VM Started (Lifecycle Event)#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.034 254904 DEBUG nova.compute.manager [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.046 254904 DEBUG nova.virt.libvirt.driver [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.051 254904 INFO nova.virt.libvirt.driver [-] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Instance spawned successfully.#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.051 254904 DEBUG nova.virt.libvirt.driver [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.056 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.061 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.080 254904 DEBUG nova.virt.libvirt.driver [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.080 254904 DEBUG nova.virt.libvirt.driver [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.081 254904 DEBUG nova.virt.libvirt.driver [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.082 254904 DEBUG nova.virt.libvirt.driver [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.082 254904 DEBUG nova.virt.libvirt.driver [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.083 254904 DEBUG nova.virt.libvirt.driver [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.089 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.089 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764675032.0304322, 64fbd54d-f574-44e6-a788-53938d2219e8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.090 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] VM Paused (Lifecycle Event)#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.137 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.142 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764675032.0450778, 64fbd54d-f574-44e6-a788-53938d2219e8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.142 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] VM Resumed (Lifecycle Event)#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.170 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.174 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.178 254904 INFO nova.compute.manager [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Took 5.14 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.179 254904 DEBUG nova.compute.manager [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.224 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.298 254904 INFO nova.compute.manager [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Took 7.57 seconds to build instance.#033[00m
Dec  2 06:30:32 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1622: 321 pgs: 321 active+clean; 433 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 164 KiB/s rd, 144 KiB/s wr, 73 op/s
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.316 254904 DEBUG oslo_concurrency.lockutils [None req-3b9883e4-b51c-45eb-bea0-2f42ee500411 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "64fbd54d-f574-44e6-a788-53938d2219e8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.664 254904 DEBUG nova.compute.manager [req-e9b72ac1-da16-46a4-98e3-fbd8e5b960b1 req-5b975df3-5250-4f0d-92d9-ed01fbeac306 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Received event network-vif-plugged-1c8dc20d-b649-4107-a95c-d427a6c8f59a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.666 254904 DEBUG oslo_concurrency.lockutils [req-e9b72ac1-da16-46a4-98e3-fbd8e5b960b1 req-5b975df3-5250-4f0d-92d9-ed01fbeac306 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "64fbd54d-f574-44e6-a788-53938d2219e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.666 254904 DEBUG oslo_concurrency.lockutils [req-e9b72ac1-da16-46a4-98e3-fbd8e5b960b1 req-5b975df3-5250-4f0d-92d9-ed01fbeac306 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "64fbd54d-f574-44e6-a788-53938d2219e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.666 254904 DEBUG oslo_concurrency.lockutils [req-e9b72ac1-da16-46a4-98e3-fbd8e5b960b1 req-5b975df3-5250-4f0d-92d9-ed01fbeac306 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "64fbd54d-f574-44e6-a788-53938d2219e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.667 254904 DEBUG nova.compute.manager [req-e9b72ac1-da16-46a4-98e3-fbd8e5b960b1 req-5b975df3-5250-4f0d-92d9-ed01fbeac306 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] No waiting events found dispatching network-vif-plugged-1c8dc20d-b649-4107-a95c-d427a6c8f59a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.667 254904 WARNING nova.compute.manager [req-e9b72ac1-da16-46a4-98e3-fbd8e5b960b1 req-5b975df3-5250-4f0d-92d9-ed01fbeac306 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Received unexpected event network-vif-plugged-1c8dc20d-b649-4107-a95c-d427a6c8f59a for instance with vm_state active and task_state None.#033[00m
Dec  2 06:30:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.967 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764675017.9663785, bdac108b-bf09-468a-9c93-c72b5128519b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.968 254904 INFO nova.compute.manager [-] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] VM Stopped (Lifecycle Event)#033[00m
Dec  2 06:30:32 np0005542249 nova_compute[254900]: 2025-12-02 11:30:32.990 254904 DEBUG nova.compute.manager [None req-fe45ffc5-19df-4b13-a90a-fe72aa55b812 - - - - - -] [instance: bdac108b-bf09-468a-9c93-c72b5128519b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:30:33 np0005542249 nova_compute[254900]: 2025-12-02 11:30:33.923 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:34 np0005542249 nova_compute[254900]: 2025-12-02 11:30:34.238 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:34 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1623: 321 pgs: 321 active+clean; 433 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 666 KiB/s rd, 89 KiB/s wr, 75 op/s
Dec  2 06:30:34 np0005542249 nova_compute[254900]: 2025-12-02 11:30:34.370 254904 DEBUG oslo_concurrency.lockutils [None req-f304c3fb-e240-4867-bfba-c5bfa1d19bb1 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Acquiring lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:34 np0005542249 nova_compute[254900]: 2025-12-02 11:30:34.371 254904 DEBUG oslo_concurrency.lockutils [None req-f304c3fb-e240-4867-bfba-c5bfa1d19bb1 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:34 np0005542249 nova_compute[254900]: 2025-12-02 11:30:34.399 254904 INFO nova.compute.manager [None req-f304c3fb-e240-4867-bfba-c5bfa1d19bb1 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Detaching volume b36a6a0f-b264-4083-ad1c-f8fdc99d2841#033[00m
Dec  2 06:30:34 np0005542249 nova_compute[254900]: 2025-12-02 11:30:34.533 254904 INFO nova.virt.block_device [None req-f304c3fb-e240-4867-bfba-c5bfa1d19bb1 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Attempting to driver detach volume b36a6a0f-b264-4083-ad1c-f8fdc99d2841 from mountpoint /dev/vdb#033[00m
Dec  2 06:30:34 np0005542249 nova_compute[254900]: 2025-12-02 11:30:34.543 254904 DEBUG nova.virt.libvirt.driver [None req-f304c3fb-e240-4867-bfba-c5bfa1d19bb1 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Attempting to detach device vdb from instance a8aab2b3-e5a2-451d-b77a-9d977f1dd00f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  2 06:30:34 np0005542249 nova_compute[254900]: 2025-12-02 11:30:34.544 254904 DEBUG nova.virt.libvirt.guest [None req-f304c3fb-e240-4867-bfba-c5bfa1d19bb1 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] detach device xml: <disk type="network" device="disk">
Dec  2 06:30:34 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:30:34 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-b36a6a0f-b264-4083-ad1c-f8fdc99d2841">
Dec  2 06:30:34 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:30:34 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:30:34 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:30:34 np0005542249 nova_compute[254900]:  <serial>b36a6a0f-b264-4083-ad1c-f8fdc99d2841</serial>
Dec  2 06:30:34 np0005542249 nova_compute[254900]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  2 06:30:34 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:30:34 np0005542249 nova_compute[254900]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  2 06:30:34 np0005542249 nova_compute[254900]: 2025-12-02 11:30:34.551 254904 INFO nova.virt.libvirt.driver [None req-f304c3fb-e240-4867-bfba-c5bfa1d19bb1 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Successfully detached device vdb from instance a8aab2b3-e5a2-451d-b77a-9d977f1dd00f from the persistent domain config.#033[00m
Dec  2 06:30:34 np0005542249 nova_compute[254900]: 2025-12-02 11:30:34.552 254904 DEBUG nova.virt.libvirt.driver [None req-f304c3fb-e240-4867-bfba-c5bfa1d19bb1 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance a8aab2b3-e5a2-451d-b77a-9d977f1dd00f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec  2 06:30:34 np0005542249 nova_compute[254900]: 2025-12-02 11:30:34.552 254904 DEBUG nova.virt.libvirt.guest [None req-f304c3fb-e240-4867-bfba-c5bfa1d19bb1 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] detach device xml: <disk type="network" device="disk">
Dec  2 06:30:34 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:30:34 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-b36a6a0f-b264-4083-ad1c-f8fdc99d2841">
Dec  2 06:30:34 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:30:34 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:30:34 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:30:34 np0005542249 nova_compute[254900]:  <serial>b36a6a0f-b264-4083-ad1c-f8fdc99d2841</serial>
Dec  2 06:30:34 np0005542249 nova_compute[254900]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  2 06:30:34 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:30:34 np0005542249 nova_compute[254900]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  2 06:30:34 np0005542249 nova_compute[254900]: 2025-12-02 11:30:34.664 254904 DEBUG nova.virt.libvirt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Received event <DeviceRemovedEvent: 1764675034.6640954, a8aab2b3-e5a2-451d-b77a-9d977f1dd00f => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec  2 06:30:34 np0005542249 nova_compute[254900]: 2025-12-02 11:30:34.666 254904 DEBUG nova.virt.libvirt.driver [None req-f304c3fb-e240-4867-bfba-c5bfa1d19bb1 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance a8aab2b3-e5a2-451d-b77a-9d977f1dd00f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec  2 06:30:34 np0005542249 nova_compute[254900]: 2025-12-02 11:30:34.670 254904 INFO nova.virt.libvirt.driver [None req-f304c3fb-e240-4867-bfba-c5bfa1d19bb1 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Successfully detached device vdb from instance a8aab2b3-e5a2-451d-b77a-9d977f1dd00f from the live domain config.#033[00m
Dec  2 06:30:34 np0005542249 nova_compute[254900]: 2025-12-02 11:30:34.923 254904 DEBUG nova.objects.instance [None req-f304c3fb-e240-4867-bfba-c5bfa1d19bb1 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lazy-loading 'flavor' on Instance uuid a8aab2b3-e5a2-451d-b77a-9d977f1dd00f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:30:34 np0005542249 nova_compute[254900]: 2025-12-02 11:30:34.969 254904 DEBUG oslo_concurrency.lockutils [None req-f304c3fb-e240-4867-bfba-c5bfa1d19bb1 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:35 np0005542249 nova_compute[254900]: 2025-12-02 11:30:35.857 254904 DEBUG nova.compute.manager [req-02d639ab-c778-4d72-a080-74c00ae8b57e req-216f2ff0-b44b-412c-a9c4-1010e4fff423 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Received event network-changed-1c8dc20d-b649-4107-a95c-d427a6c8f59a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:30:35 np0005542249 nova_compute[254900]: 2025-12-02 11:30:35.858 254904 DEBUG nova.compute.manager [req-02d639ab-c778-4d72-a080-74c00ae8b57e req-216f2ff0-b44b-412c-a9c4-1010e4fff423 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Refreshing instance network info cache due to event network-changed-1c8dc20d-b649-4107-a95c-d427a6c8f59a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:30:35 np0005542249 nova_compute[254900]: 2025-12-02 11:30:35.858 254904 DEBUG oslo_concurrency.lockutils [req-02d639ab-c778-4d72-a080-74c00ae8b57e req-216f2ff0-b44b-412c-a9c4-1010e4fff423 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-64fbd54d-f574-44e6-a788-53938d2219e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:30:35 np0005542249 nova_compute[254900]: 2025-12-02 11:30:35.858 254904 DEBUG oslo_concurrency.lockutils [req-02d639ab-c778-4d72-a080-74c00ae8b57e req-216f2ff0-b44b-412c-a9c4-1010e4fff423 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-64fbd54d-f574-44e6-a788-53938d2219e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:30:35 np0005542249 nova_compute[254900]: 2025-12-02 11:30:35.859 254904 DEBUG nova.network.neutron [req-02d639ab-c778-4d72-a080-74c00ae8b57e req-216f2ff0-b44b-412c-a9c4-1010e4fff423 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Refreshing network info cache for port 1c8dc20d-b649-4107-a95c-d427a6c8f59a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:30:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:30:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:30:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:30:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:30:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007672218898547894 of space, bias 1.0, pg target 0.2301665669564368 quantized to 32 (current 32)
Dec  2 06:30:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:30:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0036841912482906817 of space, bias 1.0, pg target 1.1052573744872045 quantized to 32 (current 32)
Dec  2 06:30:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:30:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.9013621638340822e-05 quantized to 32 (current 32)
Dec  2 06:30:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:30:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.19918670028325844 quantized to 32 (current 32)
Dec  2 06:30:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:30:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Dec  2 06:30:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:30:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:30:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:30:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Dec  2 06:30:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:30:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Dec  2 06:30:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:30:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:30:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:30:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Dec  2 06:30:36 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  2 06:30:36 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl.cc:1111]
Dec  2 06:30:36 np0005542249 ceph-osd[91055]: ** DB Stats **
Dec  2 06:30:36 np0005542249 ceph-osd[91055]: Uptime(secs): 2400.1 total, 600.0 interval
Dec  2 06:30:36 np0005542249 ceph-osd[91055]: Cumulative writes: 21K writes, 85K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
Dec  2 06:30:36 np0005542249 ceph-osd[91055]: Cumulative WAL: 21K writes, 7462 syncs, 2.83 writes per sync, written: 0.06 GB, 0.02 MB/s
Dec  2 06:30:36 np0005542249 ceph-osd[91055]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  2 06:30:36 np0005542249 ceph-osd[91055]: Interval writes: 10K writes, 43K keys, 10K commit groups, 1.0 writes per commit group, ingest: 29.18 MB, 0.05 MB/s
Dec  2 06:30:36 np0005542249 ceph-osd[91055]: Interval WAL: 10K writes, 4367 syncs, 2.37 writes per sync, written: 0.03 GB, 0.05 MB/s
Dec  2 06:30:36 np0005542249 ceph-osd[91055]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  2 06:30:36 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1624: 321 pgs: 321 active+clean; 433 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 601 KiB/s rd, 76 KiB/s wr, 59 op/s
Dec  2 06:30:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:30:36 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3626627744' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:30:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:30:36 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3626627744' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:30:36 np0005542249 nova_compute[254900]: 2025-12-02 11:30:36.852 254904 DEBUG nova.network.neutron [req-02d639ab-c778-4d72-a080-74c00ae8b57e req-216f2ff0-b44b-412c-a9c4-1010e4fff423 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Updated VIF entry in instance network info cache for port 1c8dc20d-b649-4107-a95c-d427a6c8f59a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:30:36 np0005542249 nova_compute[254900]: 2025-12-02 11:30:36.853 254904 DEBUG nova.network.neutron [req-02d639ab-c778-4d72-a080-74c00ae8b57e req-216f2ff0-b44b-412c-a9c4-1010e4fff423 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Updating instance_info_cache with network_info: [{"id": "1c8dc20d-b649-4107-a95c-d427a6c8f59a", "address": "fa:16:3e:e1:5e:cc", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c8dc20d-b6", "ovs_interfaceid": "1c8dc20d-b649-4107-a95c-d427a6c8f59a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:30:36 np0005542249 nova_compute[254900]: 2025-12-02 11:30:36.873 254904 DEBUG oslo_concurrency.lockutils [req-02d639ab-c778-4d72-a080-74c00ae8b57e req-216f2ff0-b44b-412c-a9c4-1010e4fff423 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-64fbd54d-f574-44e6-a788-53938d2219e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:30:37 np0005542249 ceph-mgr[75372]: [devicehealth INFO root] Check health
Dec  2 06:30:37 np0005542249 nova_compute[254900]: 2025-12-02 11:30:37.577 254904 DEBUG oslo_concurrency.lockutils [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:37 np0005542249 nova_compute[254900]: 2025-12-02 11:30:37.578 254904 DEBUG oslo_concurrency.lockutils [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:37 np0005542249 nova_compute[254900]: 2025-12-02 11:30:37.578 254904 DEBUG oslo_concurrency.lockutils [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:37 np0005542249 nova_compute[254900]: 2025-12-02 11:30:37.579 254904 DEBUG oslo_concurrency.lockutils [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:37 np0005542249 nova_compute[254900]: 2025-12-02 11:30:37.579 254904 DEBUG oslo_concurrency.lockutils [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:37 np0005542249 nova_compute[254900]: 2025-12-02 11:30:37.581 254904 INFO nova.compute.manager [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Terminating instance#033[00m
Dec  2 06:30:37 np0005542249 nova_compute[254900]: 2025-12-02 11:30:37.582 254904 DEBUG nova.compute.manager [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:30:37 np0005542249 kernel: tap0d534c62-22 (unregistering): left promiscuous mode
Dec  2 06:30:37 np0005542249 NetworkManager[48987]: <info>  [1764675037.6560] device (tap0d534c62-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:30:37 np0005542249 nova_compute[254900]: 2025-12-02 11:30:37.667 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:37 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:37Z|00224|binding|INFO|Releasing lport 0d534c62-22f0-40f5-b1f1-48ae2645c541 from this chassis (sb_readonly=0)
Dec  2 06:30:37 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:37Z|00225|binding|INFO|Setting lport 0d534c62-22f0-40f5-b1f1-48ae2645c541 down in Southbound
Dec  2 06:30:37 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:37Z|00226|binding|INFO|Removing iface tap0d534c62-22 ovn-installed in OVS
Dec  2 06:30:37 np0005542249 nova_compute[254900]: 2025-12-02 11:30:37.670 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:37.687 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:43:9e 10.100.0.12'], port_security=['fa:16:3e:9b:43:9e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2b4b4b76-ca4f-438b-a3e5-c5d4b3583290', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a893d0c223f746328e706d7491d73b20', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5f414683-a8ce-4d9a-ad73-51d7a84d1e7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9a246c4-d9fe-402e-8fa6-6099b55c4866, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=0d534c62-22f0-40f5-b1f1-48ae2645c541) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:30:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:37.689 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 0d534c62-22f0-40f5-b1f1-48ae2645c541 in datapath 4f9f73cb-9730-4829-ae15-1f03b97e60f8 unbound from our chassis#033[00m
Dec  2 06:30:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:37.690 163757 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4f9f73cb-9730-4829-ae15-1f03b97e60f8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 06:30:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:37.692 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[7e5359e8-3445-4bf4-94a9-5643ee03a8c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:37.692 163757 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8 namespace which is not needed anymore#033[00m
Dec  2 06:30:37 np0005542249 nova_compute[254900]: 2025-12-02 11:30:37.698 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:37 np0005542249 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000017.scope: Deactivated successfully.
Dec  2 06:30:37 np0005542249 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000017.scope: Consumed 18.573s CPU time.
Dec  2 06:30:37 np0005542249 systemd-machined[216222]: Machine qemu-23-instance-00000017 terminated.
Dec  2 06:30:37 np0005542249 nova_compute[254900]: 2025-12-02 11:30:37.806 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:37 np0005542249 nova_compute[254900]: 2025-12-02 11:30:37.814 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:37 np0005542249 nova_compute[254900]: 2025-12-02 11:30:37.821 254904 INFO nova.virt.libvirt.driver [-] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Instance destroyed successfully.#033[00m
Dec  2 06:30:37 np0005542249 nova_compute[254900]: 2025-12-02 11:30:37.821 254904 DEBUG nova.objects.instance [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lazy-loading 'resources' on Instance uuid 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:30:37 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[288760]: [NOTICE]   (288764) : haproxy version is 2.8.14-c23fe91
Dec  2 06:30:37 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[288760]: [NOTICE]   (288764) : path to executable is /usr/sbin/haproxy
Dec  2 06:30:37 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[288760]: [WARNING]  (288764) : Exiting Master process...
Dec  2 06:30:37 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[288760]: [ALERT]    (288764) : Current worker (288766) exited with code 143 (Terminated)
Dec  2 06:30:37 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[288760]: [WARNING]  (288764) : All workers exited. Exiting... (0)
Dec  2 06:30:37 np0005542249 systemd[1]: libpod-9462c0e2651785855caf76a83c07df96547126df7119dbcd8c05ecf8637b4e13.scope: Deactivated successfully.
Dec  2 06:30:37 np0005542249 podman[290283]: 2025-12-02 11:30:37.837887491 +0000 UTC m=+0.052128022 container died 9462c0e2651785855caf76a83c07df96547126df7119dbcd8c05ecf8637b4e13 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  2 06:30:37 np0005542249 nova_compute[254900]: 2025-12-02 11:30:37.841 254904 DEBUG nova.virt.libvirt.vif [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:29:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-377789930',display_name='tempest-TransferEncryptedVolumeTest-server-377789930',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-377789930',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBPd/5CNJJJVCM7bF71nyziMenMlWa5ulXBeejobfPYAVvOOigTWuMR262ZOPGYLdqIJzc7AMWApUvqaDK/XzxMpH8d3L2DeOAIkexGDCsfnTgIhEJIcaLGeYmajYRiu/w==',key_name='tempest-TransferEncryptedVolumeTest-1560588090',keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:30:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a893d0c223f746328e706d7491d73b20',ramdisk_id='',reservation_id='r-pprvpknh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1499588457',owner_user_name='tempest-TransferEncryptedVolumeTest-1499588457-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:30:00Z,user_data=None,user_id='1caa62e7ee8b42be98bc34780a7197f9',uuid=2b4b4b76-ca4f-438b-a3e5-c5d4b3583290,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0d534c62-22f0-40f5-b1f1-48ae2645c541", "address": "fa:16:3e:9b:43:9e", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d534c62-22", "ovs_interfaceid": "0d534c62-22f0-40f5-b1f1-48ae2645c541", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:30:37 np0005542249 nova_compute[254900]: 2025-12-02 11:30:37.842 254904 DEBUG nova.network.os_vif_util [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Converting VIF {"id": "0d534c62-22f0-40f5-b1f1-48ae2645c541", "address": "fa:16:3e:9b:43:9e", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d534c62-22", "ovs_interfaceid": "0d534c62-22f0-40f5-b1f1-48ae2645c541", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:30:37 np0005542249 nova_compute[254900]: 2025-12-02 11:30:37.843 254904 DEBUG nova.network.os_vif_util [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9b:43:9e,bridge_name='br-int',has_traffic_filtering=True,id=0d534c62-22f0-40f5-b1f1-48ae2645c541,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d534c62-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:30:37 np0005542249 nova_compute[254900]: 2025-12-02 11:30:37.844 254904 DEBUG os_vif [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:43:9e,bridge_name='br-int',has_traffic_filtering=True,id=0d534c62-22f0-40f5-b1f1-48ae2645c541,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d534c62-22') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:30:37 np0005542249 nova_compute[254900]: 2025-12-02 11:30:37.845 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:37 np0005542249 nova_compute[254900]: 2025-12-02 11:30:37.846 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d534c62-22, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:30:37 np0005542249 nova_compute[254900]: 2025-12-02 11:30:37.847 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:37 np0005542249 nova_compute[254900]: 2025-12-02 11:30:37.849 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:37 np0005542249 nova_compute[254900]: 2025-12-02 11:30:37.852 254904 INFO os_vif [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:43:9e,bridge_name='br-int',has_traffic_filtering=True,id=0d534c62-22f0-40f5-b1f1-48ae2645c541,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d534c62-22')#033[00m
Dec  2 06:30:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:30:37 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9462c0e2651785855caf76a83c07df96547126df7119dbcd8c05ecf8637b4e13-userdata-shm.mount: Deactivated successfully.
Dec  2 06:30:37 np0005542249 systemd[1]: var-lib-containers-storage-overlay-4dc35e506505137e74533495fd4885cfc9cc137626dd8055aee89b60d077d862-merged.mount: Deactivated successfully.
Dec  2 06:30:37 np0005542249 podman[290283]: 2025-12-02 11:30:37.891574565 +0000 UTC m=+0.105815096 container cleanup 9462c0e2651785855caf76a83c07df96547126df7119dbcd8c05ecf8637b4e13 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  2 06:30:37 np0005542249 systemd[1]: libpod-conmon-9462c0e2651785855caf76a83c07df96547126df7119dbcd8c05ecf8637b4e13.scope: Deactivated successfully.
Dec  2 06:30:37 np0005542249 podman[290336]: 2025-12-02 11:30:37.961468458 +0000 UTC m=+0.044686311 container remove 9462c0e2651785855caf76a83c07df96547126df7119dbcd8c05ecf8637b4e13 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  2 06:30:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:37.968 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[845b46fc-414e-4aea-90c9-2e61c38b99dc]: (4, ('Tue Dec  2 11:30:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8 (9462c0e2651785855caf76a83c07df96547126df7119dbcd8c05ecf8637b4e13)\n9462c0e2651785855caf76a83c07df96547126df7119dbcd8c05ecf8637b4e13\nTue Dec  2 11:30:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8 (9462c0e2651785855caf76a83c07df96547126df7119dbcd8c05ecf8637b4e13)\n9462c0e2651785855caf76a83c07df96547126df7119dbcd8c05ecf8637b4e13\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:37.970 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[53b4ecb8-3d76-47f2-a81f-8fdd58ef1296]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:37.971 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f9f73cb-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:30:37 np0005542249 kernel: tap4f9f73cb-90: left promiscuous mode
Dec  2 06:30:37 np0005542249 nova_compute[254900]: 2025-12-02 11:30:37.972 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:37 np0005542249 nova_compute[254900]: 2025-12-02 11:30:37.988 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:37 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:37.995 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[986fe7c7-9063-4a51-97aa-95edba308020]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:38 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:38.013 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[360b028f-52cc-449c-a434-fab1f3048d69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:38 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:38.015 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[cf7dce27-aef4-4aa1-994b-052e69c34b07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:38 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:38.038 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[64a03839-d8a8-4808-9699-8402570b2398]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519090, 'reachable_time': 27280, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290350, 'error': None, 'target': 'ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:38 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:38.041 164036 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 06:30:38 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:38.041 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[610b4d44-554f-425b-8432-7bf39402ed1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:38 np0005542249 systemd[1]: run-netns-ovnmeta\x2d4f9f73cb\x2d9730\x2d4829\x2dae15\x2d1f03b97e60f8.mount: Deactivated successfully.
Dec  2 06:30:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:30:38 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4148000221' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:30:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:30:38 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4148000221' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:30:38 np0005542249 nova_compute[254900]: 2025-12-02 11:30:38.071 254904 INFO nova.virt.libvirt.driver [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Deleting instance files /var/lib/nova/instances/2b4b4b76-ca4f-438b-a3e5-c5d4b3583290_del#033[00m
Dec  2 06:30:38 np0005542249 nova_compute[254900]: 2025-12-02 11:30:38.072 254904 INFO nova.virt.libvirt.driver [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Deletion of /var/lib/nova/instances/2b4b4b76-ca4f-438b-a3e5-c5d4b3583290_del complete#033[00m
Dec  2 06:30:38 np0005542249 nova_compute[254900]: 2025-12-02 11:30:38.169 254904 INFO nova.compute.manager [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Took 0.59 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:30:38 np0005542249 nova_compute[254900]: 2025-12-02 11:30:38.170 254904 DEBUG oslo.service.loopingcall [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:30:38 np0005542249 nova_compute[254900]: 2025-12-02 11:30:38.171 254904 DEBUG nova.compute.manager [-] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:30:38 np0005542249 nova_compute[254900]: 2025-12-02 11:30:38.171 254904 DEBUG nova.network.neutron [-] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:30:38 np0005542249 nova_compute[254900]: 2025-12-02 11:30:38.175 254904 DEBUG nova.compute.manager [req-94aa66c7-dd69-4468-8629-583361011baf req-21862f20-dc48-4e1f-8317-a47565f373a7 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Received event network-vif-unplugged-0d534c62-22f0-40f5-b1f1-48ae2645c541 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:30:38 np0005542249 nova_compute[254900]: 2025-12-02 11:30:38.176 254904 DEBUG oslo_concurrency.lockutils [req-94aa66c7-dd69-4468-8629-583361011baf req-21862f20-dc48-4e1f-8317-a47565f373a7 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:38 np0005542249 nova_compute[254900]: 2025-12-02 11:30:38.176 254904 DEBUG oslo_concurrency.lockutils [req-94aa66c7-dd69-4468-8629-583361011baf req-21862f20-dc48-4e1f-8317-a47565f373a7 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:38 np0005542249 nova_compute[254900]: 2025-12-02 11:30:38.176 254904 DEBUG oslo_concurrency.lockutils [req-94aa66c7-dd69-4468-8629-583361011baf req-21862f20-dc48-4e1f-8317-a47565f373a7 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:38 np0005542249 nova_compute[254900]: 2025-12-02 11:30:38.177 254904 DEBUG nova.compute.manager [req-94aa66c7-dd69-4468-8629-583361011baf req-21862f20-dc48-4e1f-8317-a47565f373a7 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] No waiting events found dispatching network-vif-unplugged-0d534c62-22f0-40f5-b1f1-48ae2645c541 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:30:38 np0005542249 nova_compute[254900]: 2025-12-02 11:30:38.177 254904 DEBUG nova.compute.manager [req-94aa66c7-dd69-4468-8629-583361011baf req-21862f20-dc48-4e1f-8317-a47565f373a7 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Received event network-vif-unplugged-0d534c62-22f0-40f5-b1f1-48ae2645c541 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 06:30:38 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1625: 321 pgs: 321 active+clean; 433 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 134 KiB/s wr, 143 op/s
Dec  2 06:30:38 np0005542249 nova_compute[254900]: 2025-12-02 11:30:38.925 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:39 np0005542249 nova_compute[254900]: 2025-12-02 11:30:39.043 254904 DEBUG nova.network.neutron [-] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:30:39 np0005542249 nova_compute[254900]: 2025-12-02 11:30:39.067 254904 INFO nova.compute.manager [-] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Took 0.90 seconds to deallocate network for instance.#033[00m
Dec  2 06:30:39 np0005542249 nova_compute[254900]: 2025-12-02 11:30:39.136 254904 DEBUG nova.compute.manager [req-59704486-6b92-485e-88f4-a001b00b50b2 req-8695ed8f-6096-46b1-a0ad-5888410943c7 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Received event network-vif-deleted-0d534c62-22f0-40f5-b1f1-48ae2645c541 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:30:39 np0005542249 nova_compute[254900]: 2025-12-02 11:30:39.317 254904 INFO nova.compute.manager [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Took 0.25 seconds to detach 1 volumes for instance.#033[00m
Dec  2 06:30:39 np0005542249 nova_compute[254900]: 2025-12-02 11:30:39.384 254904 DEBUG oslo_concurrency.lockutils [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:39 np0005542249 nova_compute[254900]: 2025-12-02 11:30:39.384 254904 DEBUG oslo_concurrency.lockutils [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:39 np0005542249 nova_compute[254900]: 2025-12-02 11:30:39.477 254904 DEBUG oslo_concurrency.processutils [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:30:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:30:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3255722926' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:30:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:30:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3255722926' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:30:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:30:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/771335293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:30:39 np0005542249 nova_compute[254900]: 2025-12-02 11:30:39.922 254904 DEBUG oslo_concurrency.processutils [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:30:39 np0005542249 nova_compute[254900]: 2025-12-02 11:30:39.930 254904 DEBUG nova.compute.provider_tree [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:30:39 np0005542249 nova_compute[254900]: 2025-12-02 11:30:39.946 254904 DEBUG nova.scheduler.client.report [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:30:39 np0005542249 nova_compute[254900]: 2025-12-02 11:30:39.966 254904 DEBUG oslo_concurrency.lockutils [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:40 np0005542249 nova_compute[254900]: 2025-12-02 11:30:40.003 254904 INFO nova.scheduler.client.report [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Deleted allocations for instance 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290#033[00m
Dec  2 06:30:40 np0005542249 nova_compute[254900]: 2025-12-02 11:30:40.076 254904 DEBUG oslo_concurrency.lockutils [None req-ae2ad26e-e454-437d-8b6f-8372ec8fd090 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:40 np0005542249 nova_compute[254900]: 2025-12-02 11:30:40.254 254904 DEBUG nova.compute.manager [req-3706bf88-1be4-40b7-82d8-4c8eb2ce292d req-1083fbdd-d922-4bb5-beb0-8b1a6129ce21 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Received event network-vif-plugged-0d534c62-22f0-40f5-b1f1-48ae2645c541 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:30:40 np0005542249 nova_compute[254900]: 2025-12-02 11:30:40.255 254904 DEBUG oslo_concurrency.lockutils [req-3706bf88-1be4-40b7-82d8-4c8eb2ce292d req-1083fbdd-d922-4bb5-beb0-8b1a6129ce21 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:40 np0005542249 nova_compute[254900]: 2025-12-02 11:30:40.255 254904 DEBUG oslo_concurrency.lockutils [req-3706bf88-1be4-40b7-82d8-4c8eb2ce292d req-1083fbdd-d922-4bb5-beb0-8b1a6129ce21 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:40 np0005542249 nova_compute[254900]: 2025-12-02 11:30:40.256 254904 DEBUG oslo_concurrency.lockutils [req-3706bf88-1be4-40b7-82d8-4c8eb2ce292d req-1083fbdd-d922-4bb5-beb0-8b1a6129ce21 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "2b4b4b76-ca4f-438b-a3e5-c5d4b3583290-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:40 np0005542249 nova_compute[254900]: 2025-12-02 11:30:40.256 254904 DEBUG nova.compute.manager [req-3706bf88-1be4-40b7-82d8-4c8eb2ce292d req-1083fbdd-d922-4bb5-beb0-8b1a6129ce21 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] No waiting events found dispatching network-vif-plugged-0d534c62-22f0-40f5-b1f1-48ae2645c541 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:30:40 np0005542249 nova_compute[254900]: 2025-12-02 11:30:40.256 254904 WARNING nova.compute.manager [req-3706bf88-1be4-40b7-82d8-4c8eb2ce292d req-1083fbdd-d922-4bb5-beb0-8b1a6129ce21 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Received unexpected event network-vif-plugged-0d534c62-22f0-40f5-b1f1-48ae2645c541 for instance with vm_state deleted and task_state None.#033[00m
Dec  2 06:30:40 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1626: 321 pgs: 321 active+clean; 433 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 83 KiB/s wr, 141 op/s
Dec  2 06:30:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e399 do_prune osdmap full prune enabled
Dec  2 06:30:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e400 e400: 3 total, 3 up, 3 in
Dec  2 06:30:41 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e400: 3 total, 3 up, 3 in
Dec  2 06:30:42 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1628: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 432 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 90 KiB/s wr, 178 op/s
Dec  2 06:30:42 np0005542249 nova_compute[254900]: 2025-12-02 11:30:42.850 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:30:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e400 do_prune osdmap full prune enabled
Dec  2 06:30:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e401 e401: 3 total, 3 up, 3 in
Dec  2 06:30:43 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e401: 3 total, 3 up, 3 in
Dec  2 06:30:43 np0005542249 nova_compute[254900]: 2025-12-02 11:30:43.964 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:44 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1630: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 371 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 94 KiB/s wr, 224 op/s
Dec  2 06:30:44 np0005542249 nova_compute[254900]: 2025-12-02 11:30:44.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:30:44 np0005542249 nova_compute[254900]: 2025-12-02 11:30:44.381 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 06:30:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:30:44 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4111825462' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:30:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:30:44 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4111825462' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:30:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e401 do_prune osdmap full prune enabled
Dec  2 06:30:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e402 e402: 3 total, 3 up, 3 in
Dec  2 06:30:44 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e402: 3 total, 3 up, 3 in
Dec  2 06:30:45 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:45Z|00050|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.6
Dec  2 06:30:45 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:45Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:e1:5e:cc 10.100.0.6
Dec  2 06:30:46 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1632: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 371 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 410 KiB/s rd, 6.3 KiB/s wr, 115 op/s
Dec  2 06:30:46 np0005542249 nova_compute[254900]: 2025-12-02 11:30:46.383 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:30:46 np0005542249 nova_compute[254900]: 2025-12-02 11:30:46.384 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 06:30:46 np0005542249 nova_compute[254900]: 2025-12-02 11:30:46.384 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 06:30:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:30:46 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2301974127' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:30:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:30:46 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2301974127' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:30:46 np0005542249 nova_compute[254900]: 2025-12-02 11:30:46.563 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "refresh_cache-a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:30:46 np0005542249 nova_compute[254900]: 2025-12-02 11:30:46.563 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquired lock "refresh_cache-a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:30:46 np0005542249 nova_compute[254900]: 2025-12-02 11:30:46.564 254904 DEBUG nova.network.neutron [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 06:30:46 np0005542249 nova_compute[254900]: 2025-12-02 11:30:46.564 254904 DEBUG nova.objects.instance [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a8aab2b3-e5a2-451d-b77a-9d977f1dd00f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.057 254904 DEBUG oslo_concurrency.lockutils [None req-9341684c-ae39-4c80-bdac-e746784dfd2e 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Acquiring lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.058 254904 DEBUG oslo_concurrency.lockutils [None req-9341684c-ae39-4c80-bdac-e746784dfd2e 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.058 254904 DEBUG oslo_concurrency.lockutils [None req-9341684c-ae39-4c80-bdac-e746784dfd2e 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Acquiring lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.058 254904 DEBUG oslo_concurrency.lockutils [None req-9341684c-ae39-4c80-bdac-e746784dfd2e 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.059 254904 DEBUG oslo_concurrency.lockutils [None req-9341684c-ae39-4c80-bdac-e746784dfd2e 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.060 254904 INFO nova.compute.manager [None req-9341684c-ae39-4c80-bdac-e746784dfd2e 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Terminating instance#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.061 254904 DEBUG nova.compute.manager [None req-9341684c-ae39-4c80-bdac-e746784dfd2e 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:30:47 np0005542249 kernel: tap1b9be7b0-ee (unregistering): left promiscuous mode
Dec  2 06:30:47 np0005542249 NetworkManager[48987]: <info>  [1764675047.1300] device (tap1b9be7b0-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:30:47 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:47Z|00227|binding|INFO|Releasing lport 1b9be7b0-eed1-4455-a1fa-5418c0d54261 from this chassis (sb_readonly=0)
Dec  2 06:30:47 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:47Z|00228|binding|INFO|Setting lport 1b9be7b0-eed1-4455-a1fa-5418c0d54261 down in Southbound
Dec  2 06:30:47 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:47Z|00229|binding|INFO|Removing iface tap1b9be7b0-ee ovn-installed in OVS
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.136 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.139 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:47 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:47.153 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:74:f9 10.100.0.8'], port_security=['fa:16:3e:c4:74:f9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a8aab2b3-e5a2-451d-b77a-9d977f1dd00f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9e99d38-205a-48b3-a3a5-ce9f2004f29f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bb74d6d8597c490e967d98a6a783175e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e1f14e34-c0e9-4193-8b35-e0db4664ab56 fe3876a3-1fb3-4ce2-9e8c-cda4930e5403', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c775ae36-b6c4-4114-a80c-240c856ae41b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=1b9be7b0-eed1-4455-a1fa-5418c0d54261) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:30:47 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:47.155 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 1b9be7b0-eed1-4455-a1fa-5418c0d54261 in datapath c9e99d38-205a-48b3-a3a5-ce9f2004f29f unbound from our chassis#033[00m
Dec  2 06:30:47 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:47.158 163757 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c9e99d38-205a-48b3-a3a5-ce9f2004f29f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 06:30:47 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:47.161 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[c9b327bc-0a34-4452-9cdd-1561e2dda2fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:47 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:47.162 163757 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c9e99d38-205a-48b3-a3a5-ce9f2004f29f namespace which is not needed anymore#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.174 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:47 np0005542249 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Deactivated successfully.
Dec  2 06:30:47 np0005542249 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Consumed 19.523s CPU time.
Dec  2 06:30:47 np0005542249 systemd-machined[216222]: Machine qemu-21-instance-00000015 terminated.
Dec  2 06:30:47 np0005542249 podman[290379]: 2025-12-02 11:30:47.26031405 +0000 UTC m=+0.091377905 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd)
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.282 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.292 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.301 254904 INFO nova.virt.libvirt.driver [-] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Instance destroyed successfully.#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.302 254904 DEBUG nova.objects.instance [None req-9341684c-ae39-4c80-bdac-e746784dfd2e 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lazy-loading 'resources' on Instance uuid a8aab2b3-e5a2-451d-b77a-9d977f1dd00f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.316 254904 DEBUG nova.virt.libvirt.vif [None req-9341684c-ae39-4c80-bdac-e746784dfd2e 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:29:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-126628184',display_name='tempest-SnapshotDataIntegrityTests-server-126628184',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-126628184',id=21,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA1dSphAQZ2jfKBzlGAu9edcO3UD9dt2xlesom22AZBB6h7sYaoLNyRUI6o7THwlIVSzyFUotzDDcp2T+yKMByLGwVw02W9JR0EYyH59E+9V8vLLDXvI1oTHk5o/kZ3q0g==',key_name='tempest-keypair-927487953',keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:29:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bb74d6d8597c490e967d98a6a783175e',ramdisk_id='',reservation_id='r-ip7p9r8g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SnapshotDataIntegrityTests-359161340',owner_user_name='tempest-SnapshotDataIntegrityTests-359161340-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:29:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4326297d589c4e5cafa95e1e95585b57',uuid=a8aab2b3-e5a2-451d-b77a-9d977f1dd00f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b9be7b0-eed1-4455-a1fa-5418c0d54261", "address": "fa:16:3e:c4:74:f9", "network": {"id": "c9e99d38-205a-48b3-a3a5-ce9f2004f29f", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1523779374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb74d6d8597c490e967d98a6a783175e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b9be7b0-ee", "ovs_interfaceid": "1b9be7b0-eed1-4455-a1fa-5418c0d54261", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.317 254904 DEBUG nova.network.os_vif_util [None req-9341684c-ae39-4c80-bdac-e746784dfd2e 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Converting VIF {"id": "1b9be7b0-eed1-4455-a1fa-5418c0d54261", "address": "fa:16:3e:c4:74:f9", "network": {"id": "c9e99d38-205a-48b3-a3a5-ce9f2004f29f", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1523779374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb74d6d8597c490e967d98a6a783175e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b9be7b0-ee", "ovs_interfaceid": "1b9be7b0-eed1-4455-a1fa-5418c0d54261", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.318 254904 DEBUG nova.network.os_vif_util [None req-9341684c-ae39-4c80-bdac-e746784dfd2e 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c4:74:f9,bridge_name='br-int',has_traffic_filtering=True,id=1b9be7b0-eed1-4455-a1fa-5418c0d54261,network=Network(c9e99d38-205a-48b3-a3a5-ce9f2004f29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b9be7b0-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.318 254904 DEBUG os_vif [None req-9341684c-ae39-4c80-bdac-e746784dfd2e 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:74:f9,bridge_name='br-int',has_traffic_filtering=True,id=1b9be7b0-eed1-4455-a1fa-5418c0d54261,network=Network(c9e99d38-205a-48b3-a3a5-ce9f2004f29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b9be7b0-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.321 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.321 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b9be7b0-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.348 254904 DEBUG nova.compute.manager [req-23e83864-8558-49ea-ad69-50184ae95121 req-9b3cfd4c-1d9f-4e99-9ade-58b3e81ecc80 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Received event network-vif-unplugged-1b9be7b0-eed1-4455-a1fa-5418c0d54261 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.349 254904 DEBUG oslo_concurrency.lockutils [req-23e83864-8558-49ea-ad69-50184ae95121 req-9b3cfd4c-1d9f-4e99-9ade-58b3e81ecc80 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.349 254904 DEBUG oslo_concurrency.lockutils [req-23e83864-8558-49ea-ad69-50184ae95121 req-9b3cfd4c-1d9f-4e99-9ade-58b3e81ecc80 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.349 254904 DEBUG oslo_concurrency.lockutils [req-23e83864-8558-49ea-ad69-50184ae95121 req-9b3cfd4c-1d9f-4e99-9ade-58b3e81ecc80 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.349 254904 DEBUG nova.compute.manager [req-23e83864-8558-49ea-ad69-50184ae95121 req-9b3cfd4c-1d9f-4e99-9ade-58b3e81ecc80 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] No waiting events found dispatching network-vif-unplugged-1b9be7b0-eed1-4455-a1fa-5418c0d54261 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.350 254904 DEBUG nova.compute.manager [req-23e83864-8558-49ea-ad69-50184ae95121 req-9b3cfd4c-1d9f-4e99-9ade-58b3e81ecc80 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Received event network-vif-unplugged-1b9be7b0-eed1-4455-a1fa-5418c0d54261 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.366 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:47 np0005542249 neutron-haproxy-ovnmeta-c9e99d38-205a-48b3-a3a5-ce9f2004f29f[287904]: [NOTICE]   (287908) : haproxy version is 2.8.14-c23fe91
Dec  2 06:30:47 np0005542249 neutron-haproxy-ovnmeta-c9e99d38-205a-48b3-a3a5-ce9f2004f29f[287904]: [NOTICE]   (287908) : path to executable is /usr/sbin/haproxy
Dec  2 06:30:47 np0005542249 neutron-haproxy-ovnmeta-c9e99d38-205a-48b3-a3a5-ce9f2004f29f[287904]: [WARNING]  (287908) : Exiting Master process...
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.369 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:47 np0005542249 neutron-haproxy-ovnmeta-c9e99d38-205a-48b3-a3a5-ce9f2004f29f[287904]: [ALERT]    (287908) : Current worker (287910) exited with code 143 (Terminated)
Dec  2 06:30:47 np0005542249 neutron-haproxy-ovnmeta-c9e99d38-205a-48b3-a3a5-ce9f2004f29f[287904]: [WARNING]  (287908) : All workers exited. Exiting... (0)
Dec  2 06:30:47 np0005542249 systemd[1]: libpod-d4533fd04311b1019f7c04ce5f4145d036e21ac76e9d61a862a9b666bbb9ee69.scope: Deactivated successfully.
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.374 254904 INFO os_vif [None req-9341684c-ae39-4c80-bdac-e746784dfd2e 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:74:f9,bridge_name='br-int',has_traffic_filtering=True,id=1b9be7b0-eed1-4455-a1fa-5418c0d54261,network=Network(c9e99d38-205a-48b3-a3a5-ce9f2004f29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b9be7b0-ee')#033[00m
Dec  2 06:30:47 np0005542249 podman[290419]: 2025-12-02 11:30:47.379763194 +0000 UTC m=+0.095517117 container died d4533fd04311b1019f7c04ce5f4145d036e21ac76e9d61a862a9b666bbb9ee69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9e99d38-205a-48b3-a3a5-ce9f2004f29f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 06:30:47 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d4533fd04311b1019f7c04ce5f4145d036e21ac76e9d61a862a9b666bbb9ee69-userdata-shm.mount: Deactivated successfully.
Dec  2 06:30:47 np0005542249 systemd[1]: var-lib-containers-storage-overlay-e3f2c1643f39088df85f8111a24d5894a2a780fc5ada4b8fd60c3997cb126675-merged.mount: Deactivated successfully.
Dec  2 06:30:47 np0005542249 podman[290419]: 2025-12-02 11:30:47.431736391 +0000 UTC m=+0.147490354 container cleanup d4533fd04311b1019f7c04ce5f4145d036e21ac76e9d61a862a9b666bbb9ee69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9e99d38-205a-48b3-a3a5-ce9f2004f29f, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  2 06:30:47 np0005542249 systemd[1]: libpod-conmon-d4533fd04311b1019f7c04ce5f4145d036e21ac76e9d61a862a9b666bbb9ee69.scope: Deactivated successfully.
Dec  2 06:30:47 np0005542249 podman[290472]: 2025-12-02 11:30:47.521420049 +0000 UTC m=+0.061963118 container remove d4533fd04311b1019f7c04ce5f4145d036e21ac76e9d61a862a9b666bbb9ee69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9e99d38-205a-48b3-a3a5-ce9f2004f29f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  2 06:30:47 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:47.529 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[5d68c09e-cc6f-4b8c-91aa-38d18feb40b3]: (4, ('Tue Dec  2 11:30:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c9e99d38-205a-48b3-a3a5-ce9f2004f29f (d4533fd04311b1019f7c04ce5f4145d036e21ac76e9d61a862a9b666bbb9ee69)\nd4533fd04311b1019f7c04ce5f4145d036e21ac76e9d61a862a9b666bbb9ee69\nTue Dec  2 11:30:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c9e99d38-205a-48b3-a3a5-ce9f2004f29f (d4533fd04311b1019f7c04ce5f4145d036e21ac76e9d61a862a9b666bbb9ee69)\nd4533fd04311b1019f7c04ce5f4145d036e21ac76e9d61a862a9b666bbb9ee69\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:47 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:47.532 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[0b680768-cbfe-41dd-b4f6-7b48b11e34aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:47 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:47.534 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9e99d38-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.537 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:47 np0005542249 kernel: tapc9e99d38-20: left promiscuous mode
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.566 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:47 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:47.573 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[14e43b5e-bb6f-4244-83c1-10b5e0152fb0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:47 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:47.591 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[f32abfed-7acc-4059-8632-8bdd7865a114]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:47 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:47.593 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[4d2c944e-bfb7-4a69-ba9f-a24b204c1c91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:47 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:47.624 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[af19a012-aa30-4c81-8e10-83e49f372859]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 515216, 'reachable_time': 44130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290488, 'error': None, 'target': 'ovnmeta-c9e99d38-205a-48b3-a3a5-ce9f2004f29f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:47 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:47.628 164036 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c9e99d38-205a-48b3-a3a5-ce9f2004f29f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 06:30:47 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:30:47.628 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[30e5cc78-d5c8-4244-98a8-8182ea273fb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:30:47 np0005542249 systemd[1]: run-netns-ovnmeta\x2dc9e99d38\x2d205a\x2d48b3\x2da3a5\x2dce9f2004f29f.mount: Deactivated successfully.
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.679 254904 DEBUG nova.network.neutron [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Updating instance_info_cache with network_info: [{"id": "1b9be7b0-eed1-4455-a1fa-5418c0d54261", "address": "fa:16:3e:c4:74:f9", "network": {"id": "c9e99d38-205a-48b3-a3a5-ce9f2004f29f", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1523779374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb74d6d8597c490e967d98a6a783175e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b9be7b0-ee", "ovs_interfaceid": "1b9be7b0-eed1-4455-a1fa-5418c0d54261", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.693 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Releasing lock "refresh_cache-a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.694 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.695 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.810 254904 INFO nova.virt.libvirt.driver [None req-9341684c-ae39-4c80-bdac-e746784dfd2e 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Deleting instance files /var/lib/nova/instances/a8aab2b3-e5a2-451d-b77a-9d977f1dd00f_del#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.812 254904 INFO nova.virt.libvirt.driver [None req-9341684c-ae39-4c80-bdac-e746784dfd2e 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Deletion of /var/lib/nova/instances/a8aab2b3-e5a2-451d-b77a-9d977f1dd00f_del complete#033[00m
Dec  2 06:30:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.868 254904 INFO nova.compute.manager [None req-9341684c-ae39-4c80-bdac-e746784dfd2e 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.869 254904 DEBUG oslo.service.loopingcall [None req-9341684c-ae39-4c80-bdac-e746784dfd2e 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.869 254904 DEBUG nova.compute.manager [-] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:30:47 np0005542249 nova_compute[254900]: 2025-12-02 11:30:47.869 254904 DEBUG nova.network.neutron [-] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:30:48 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1633: 321 pgs: 321 active+clean; 218 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 31 KiB/s wr, 280 op/s
Dec  2 06:30:48 np0005542249 nova_compute[254900]: 2025-12-02 11:30:48.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:30:48 np0005542249 nova_compute[254900]: 2025-12-02 11:30:48.707 254904 DEBUG nova.network.neutron [-] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:30:48 np0005542249 nova_compute[254900]: 2025-12-02 11:30:48.723 254904 INFO nova.compute.manager [-] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Took 0.85 seconds to deallocate network for instance.#033[00m
Dec  2 06:30:48 np0005542249 nova_compute[254900]: 2025-12-02 11:30:48.788 254904 DEBUG nova.compute.manager [req-30c29a68-39fd-4b28-83e7-e488f04dda11 req-f41db24e-d60c-47bd-b2eb-d801f0f83c8c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Received event network-vif-deleted-1b9be7b0-eed1-4455-a1fa-5418c0d54261 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:30:48 np0005542249 nova_compute[254900]: 2025-12-02 11:30:48.789 254904 DEBUG oslo_concurrency.lockutils [None req-9341684c-ae39-4c80-bdac-e746784dfd2e 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:48 np0005542249 nova_compute[254900]: 2025-12-02 11:30:48.790 254904 DEBUG oslo_concurrency.lockutils [None req-9341684c-ae39-4c80-bdac-e746784dfd2e 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:48 np0005542249 nova_compute[254900]: 2025-12-02 11:30:48.872 254904 DEBUG oslo_concurrency.processutils [None req-9341684c-ae39-4c80-bdac-e746784dfd2e 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:30:48 np0005542249 nova_compute[254900]: 2025-12-02 11:30:48.968 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:30:49 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1639087677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:30:49 np0005542249 nova_compute[254900]: 2025-12-02 11:30:49.327 254904 DEBUG oslo_concurrency.processutils [None req-9341684c-ae39-4c80-bdac-e746784dfd2e 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:30:49 np0005542249 nova_compute[254900]: 2025-12-02 11:30:49.336 254904 DEBUG nova.compute.provider_tree [None req-9341684c-ae39-4c80-bdac-e746784dfd2e 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:30:49 np0005542249 nova_compute[254900]: 2025-12-02 11:30:49.360 254904 DEBUG nova.scheduler.client.report [None req-9341684c-ae39-4c80-bdac-e746784dfd2e 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:30:49 np0005542249 nova_compute[254900]: 2025-12-02 11:30:49.381 254904 DEBUG oslo_concurrency.lockutils [None req-9341684c-ae39-4c80-bdac-e746784dfd2e 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:49 np0005542249 nova_compute[254900]: 2025-12-02 11:30:49.412 254904 INFO nova.scheduler.client.report [None req-9341684c-ae39-4c80-bdac-e746784dfd2e 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Deleted allocations for instance a8aab2b3-e5a2-451d-b77a-9d977f1dd00f#033[00m
Dec  2 06:30:49 np0005542249 nova_compute[254900]: 2025-12-02 11:30:49.461 254904 DEBUG nova.compute.manager [req-ab79227a-faab-4876-a41f-2ff70dd2d21b req-10dedbbc-f636-4350-8c64-af3d29119a46 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Received event network-vif-plugged-1b9be7b0-eed1-4455-a1fa-5418c0d54261 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:30:49 np0005542249 nova_compute[254900]: 2025-12-02 11:30:49.461 254904 DEBUG oslo_concurrency.lockutils [req-ab79227a-faab-4876-a41f-2ff70dd2d21b req-10dedbbc-f636-4350-8c64-af3d29119a46 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:49 np0005542249 nova_compute[254900]: 2025-12-02 11:30:49.462 254904 DEBUG oslo_concurrency.lockutils [req-ab79227a-faab-4876-a41f-2ff70dd2d21b req-10dedbbc-f636-4350-8c64-af3d29119a46 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:49 np0005542249 nova_compute[254900]: 2025-12-02 11:30:49.463 254904 DEBUG oslo_concurrency.lockutils [req-ab79227a-faab-4876-a41f-2ff70dd2d21b req-10dedbbc-f636-4350-8c64-af3d29119a46 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:49 np0005542249 nova_compute[254900]: 2025-12-02 11:30:49.463 254904 DEBUG nova.compute.manager [req-ab79227a-faab-4876-a41f-2ff70dd2d21b req-10dedbbc-f636-4350-8c64-af3d29119a46 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] No waiting events found dispatching network-vif-plugged-1b9be7b0-eed1-4455-a1fa-5418c0d54261 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:30:49 np0005542249 nova_compute[254900]: 2025-12-02 11:30:49.463 254904 WARNING nova.compute.manager [req-ab79227a-faab-4876-a41f-2ff70dd2d21b req-10dedbbc-f636-4350-8c64-af3d29119a46 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Received unexpected event network-vif-plugged-1b9be7b0-eed1-4455-a1fa-5418c0d54261 for instance with vm_state deleted and task_state None.#033[00m
Dec  2 06:30:49 np0005542249 nova_compute[254900]: 2025-12-02 11:30:49.478 254904 DEBUG oslo_concurrency.lockutils [None req-9341684c-ae39-4c80-bdac-e746784dfd2e 4326297d589c4e5cafa95e1e95585b57 bb74d6d8597c490e967d98a6a783175e - - default default] Lock "a8aab2b3-e5a2-451d-b77a-9d977f1dd00f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:49 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:49Z|00052|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.6
Dec  2 06:30:49 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:49Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:e1:5e:cc 10.100.0.6
Dec  2 06:30:50 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:50Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e1:5e:cc 10.100.0.6
Dec  2 06:30:50 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:50Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e1:5e:cc 10.100.0.6
Dec  2 06:30:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:30:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1125579213' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:30:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:30:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1125579213' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:30:50 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1634: 321 pgs: 321 active+clean; 193 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 897 KiB/s rd, 25 KiB/s wr, 208 op/s
Dec  2 06:30:51 np0005542249 nova_compute[254900]: 2025-12-02 11:30:51.378 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:30:51 np0005542249 nova_compute[254900]: 2025-12-02 11:30:51.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:30:51 np0005542249 nova_compute[254900]: 2025-12-02 11:30:51.413 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:51 np0005542249 nova_compute[254900]: 2025-12-02 11:30:51.413 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:51 np0005542249 nova_compute[254900]: 2025-12-02 11:30:51.414 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:51 np0005542249 nova_compute[254900]: 2025-12-02 11:30:51.414 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:30:51 np0005542249 nova_compute[254900]: 2025-12-02 11:30:51.414 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:30:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:30:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3886284214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:30:51 np0005542249 nova_compute[254900]: 2025-12-02 11:30:51.883 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:30:51 np0005542249 nova_compute[254900]: 2025-12-02 11:30:51.969 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:30:51 np0005542249 nova_compute[254900]: 2025-12-02 11:30:51.969 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:30:52 np0005542249 nova_compute[254900]: 2025-12-02 11:30:52.271 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:30:52 np0005542249 nova_compute[254900]: 2025-12-02 11:30:52.273 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4133MB free_disk=59.971920013427734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:30:52 np0005542249 nova_compute[254900]: 2025-12-02 11:30:52.273 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:30:52 np0005542249 nova_compute[254900]: 2025-12-02 11:30:52.274 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:30:52 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1635: 321 pgs: 321 active+clean; 167 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 803 KiB/s rd, 22 KiB/s wr, 170 op/s
Dec  2 06:30:52 np0005542249 nova_compute[254900]: 2025-12-02 11:30:52.367 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:52 np0005542249 nova_compute[254900]: 2025-12-02 11:30:52.498 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Instance 64fbd54d-f574-44e6-a788-53938d2219e8 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 06:30:52 np0005542249 nova_compute[254900]: 2025-12-02 11:30:52.498 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:30:52 np0005542249 nova_compute[254900]: 2025-12-02 11:30:52.498 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:30:52 np0005542249 nova_compute[254900]: 2025-12-02 11:30:52.534 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:30:52 np0005542249 nova_compute[254900]: 2025-12-02 11:30:52.822 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764675037.8208134, 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:30:52 np0005542249 nova_compute[254900]: 2025-12-02 11:30:52.824 254904 INFO nova.compute.manager [-] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] VM Stopped (Lifecycle Event)#033[00m
Dec  2 06:30:52 np0005542249 nova_compute[254900]: 2025-12-02 11:30:52.845 254904 DEBUG nova.compute.manager [None req-30c707d9-25a7-4711-9b9d-e6396bbb3b87 - - - - - -] [instance: 2b4b4b76-ca4f-438b-a3e5-c5d4b3583290] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:30:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:30:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e402 do_prune osdmap full prune enabled
Dec  2 06:30:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e403 e403: 3 total, 3 up, 3 in
Dec  2 06:30:52 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e403: 3 total, 3 up, 3 in
Dec  2 06:30:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:30:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/744453115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:30:53 np0005542249 nova_compute[254900]: 2025-12-02 11:30:53.028 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:30:53 np0005542249 nova_compute[254900]: 2025-12-02 11:30:53.036 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:30:53 np0005542249 nova_compute[254900]: 2025-12-02 11:30:53.067 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:30:53 np0005542249 nova_compute[254900]: 2025-12-02 11:30:53.100 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 06:30:53 np0005542249 nova_compute[254900]: 2025-12-02 11:30:53.100 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:30:53 np0005542249 nova_compute[254900]: 2025-12-02 11:30:53.972 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:54 np0005542249 nova_compute[254900]: 2025-12-02 11:30:54.102 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:30:54 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1637: 321 pgs: 321 active+clean; 169 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 35 KiB/s wr, 158 op/s
Dec  2 06:30:54 np0005542249 nova_compute[254900]: 2025-12-02 11:30:54.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:30:55 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:55Z|00230|binding|INFO|Releasing lport 1636ad30-406d-4138-823e-abbe7f4d87ac from this chassis (sb_readonly=0)
Dec  2 06:30:55 np0005542249 nova_compute[254900]: 2025-12-02 11:30:55.755 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:56 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1638: 321 pgs: 321 active+clean; 169 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 34 KiB/s wr, 154 op/s
Dec  2 06:30:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:30:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:30:56 np0005542249 nova_compute[254900]: 2025-12-02 11:30:56.383 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:30:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:30:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:30:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:30:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:30:56 np0005542249 podman[290557]: 2025-12-02 11:30:56.986131361 +0000 UTC m=+0.060294404 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  2 06:30:57 np0005542249 podman[290558]: 2025-12-02 11:30:57.066335522 +0000 UTC m=+0.136163077 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Dec  2 06:30:57 np0005542249 nova_compute[254900]: 2025-12-02 11:30:57.370 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:30:58 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1639: 321 pgs: 321 active+clean; 169 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 17 KiB/s wr, 33 op/s
Dec  2 06:30:58 np0005542249 nova_compute[254900]: 2025-12-02 11:30:58.975 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:30:59 np0005542249 ovn_controller[153849]: 2025-12-02T11:30:59Z|00231|binding|INFO|Releasing lport 1636ad30-406d-4138-823e-abbe7f4d87ac from this chassis (sb_readonly=0)
Dec  2 06:30:59 np0005542249 nova_compute[254900]: 2025-12-02 11:30:59.272 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:00 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1640: 321 pgs: 321 active+clean; 169 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 47 KiB/s wr, 15 op/s
Dec  2 06:31:02 np0005542249 nova_compute[254900]: 2025-12-02 11:31:02.300 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764675047.2986705, a8aab2b3-e5a2-451d-b77a-9d977f1dd00f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:31:02 np0005542249 nova_compute[254900]: 2025-12-02 11:31:02.301 254904 INFO nova.compute.manager [-] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] VM Stopped (Lifecycle Event)#033[00m
Dec  2 06:31:02 np0005542249 nova_compute[254900]: 2025-12-02 11:31:02.324 254904 DEBUG nova.compute.manager [None req-0bedc0e1-ebb8-4588-b5c0-33136a7dffaf - - - - - -] [instance: a8aab2b3-e5a2-451d-b77a-9d977f1dd00f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:31:02 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1641: 321 pgs: 321 active+clean; 169 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 47 KiB/s wr, 17 op/s
Dec  2 06:31:02 np0005542249 nova_compute[254900]: 2025-12-02 11:31:02.373 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:31:04 np0005542249 nova_compute[254900]: 2025-12-02 11:31:04.015 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:04 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1642: 321 pgs: 321 active+clean; 169 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 42 KiB/s wr, 12 op/s
Dec  2 06:31:06 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1643: 321 pgs: 321 active+clean; 169 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 26 KiB/s wr, 5 op/s
Dec  2 06:31:07 np0005542249 nova_compute[254900]: 2025-12-02 11:31:07.376 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:31:08 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1644: 321 pgs: 321 active+clean; 169 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 33 KiB/s wr, 8 op/s
Dec  2 06:31:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e403 do_prune osdmap full prune enabled
Dec  2 06:31:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e404 e404: 3 total, 3 up, 3 in
Dec  2 06:31:08 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e404: 3 total, 3 up, 3 in
Dec  2 06:31:09 np0005542249 nova_compute[254900]: 2025-12-02 11:31:09.018 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:10 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1646: 321 pgs: 321 active+clean; 169 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 112 KiB/s rd, 13 KiB/s wr, 11 op/s
Dec  2 06:31:12 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1647: 321 pgs: 321 active+clean; 169 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 254 KiB/s rd, 13 KiB/s wr, 10 op/s
Dec  2 06:31:12 np0005542249 nova_compute[254900]: 2025-12-02 11:31:12.379 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:31:12 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:31:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:31:12 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:31:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:31:13 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:31:13 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:31:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:31:13 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:31:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:31:13 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:31:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:31:13 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:31:13 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 5521e7f9-a140-4365-ae78-16534602f7f9 does not exist
Dec  2 06:31:13 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev c3ed3dbc-939f-4dcd-8f87-049e917da341 does not exist
Dec  2 06:31:13 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 53c88ddd-6c22-4441-8f09-f4ef7429402a does not exist
Dec  2 06:31:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:31:13 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:31:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:31:13 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:31:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:31:13 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:31:14 np0005542249 nova_compute[254900]: 2025-12-02 11:31:14.020 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:14 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1648: 321 pgs: 321 active+clean; 233 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 273 KiB/s rd, 6.4 MiB/s wr, 37 op/s
Dec  2 06:31:14 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:31:14 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:31:14 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:31:14 np0005542249 podman[290993]: 2025-12-02 11:31:14.571657436 +0000 UTC m=+0.079809682 container create d7f638bb8abd080cfbf31a54ac60315298ca17dd7a6fde849667bb4c071d3bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  2 06:31:14 np0005542249 systemd[1]: Started libpod-conmon-d7f638bb8abd080cfbf31a54ac60315298ca17dd7a6fde849667bb4c071d3bd6.scope.
Dec  2 06:31:14 np0005542249 podman[290993]: 2025-12-02 11:31:14.538457197 +0000 UTC m=+0.046609503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:31:14 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:31:14 np0005542249 podman[290993]: 2025-12-02 11:31:14.686149126 +0000 UTC m=+0.194301372 container init d7f638bb8abd080cfbf31a54ac60315298ca17dd7a6fde849667bb4c071d3bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:31:14 np0005542249 podman[290993]: 2025-12-02 11:31:14.701126471 +0000 UTC m=+0.209278687 container start d7f638bb8abd080cfbf31a54ac60315298ca17dd7a6fde849667bb4c071d3bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  2 06:31:14 np0005542249 podman[290993]: 2025-12-02 11:31:14.704880403 +0000 UTC m=+0.213032629 container attach d7f638bb8abd080cfbf31a54ac60315298ca17dd7a6fde849667bb4c071d3bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 06:31:14 np0005542249 nice_euclid[291009]: 167 167
Dec  2 06:31:14 np0005542249 systemd[1]: libpod-d7f638bb8abd080cfbf31a54ac60315298ca17dd7a6fde849667bb4c071d3bd6.scope: Deactivated successfully.
Dec  2 06:31:14 np0005542249 conmon[291009]: conmon d7f638bb8abd080cfbf3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d7f638bb8abd080cfbf31a54ac60315298ca17dd7a6fde849667bb4c071d3bd6.scope/container/memory.events
Dec  2 06:31:14 np0005542249 podman[290993]: 2025-12-02 11:31:14.710768902 +0000 UTC m=+0.218921118 container died d7f638bb8abd080cfbf31a54ac60315298ca17dd7a6fde849667bb4c071d3bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:31:14 np0005542249 systemd[1]: var-lib-containers-storage-overlay-b82c871976975b023678da965ea21c2df3b788a3d253678eda87784c728d17fb-merged.mount: Deactivated successfully.
Dec  2 06:31:14 np0005542249 podman[290993]: 2025-12-02 11:31:14.765600487 +0000 UTC m=+0.273752703 container remove d7f638bb8abd080cfbf31a54ac60315298ca17dd7a6fde849667bb4c071d3bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  2 06:31:14 np0005542249 systemd[1]: libpod-conmon-d7f638bb8abd080cfbf31a54ac60315298ca17dd7a6fde849667bb4c071d3bd6.scope: Deactivated successfully.
Dec  2 06:31:14 np0005542249 nova_compute[254900]: 2025-12-02 11:31:14.951 254904 DEBUG oslo_concurrency.lockutils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "300cc277-5780-4174-88ed-a942194a10b9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:31:14 np0005542249 nova_compute[254900]: 2025-12-02 11:31:14.952 254904 DEBUG oslo_concurrency.lockutils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "300cc277-5780-4174-88ed-a942194a10b9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:31:14 np0005542249 nova_compute[254900]: 2025-12-02 11:31:14.966 254904 DEBUG nova.compute.manager [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 06:31:15 np0005542249 podman[291034]: 2025-12-02 11:31:15.019586235 +0000 UTC m=+0.060397867 container create 278331ac414445fda648393dc2deff58b805d69b3d278f12ace151d6a7b8fa73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shaw, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  2 06:31:15 np0005542249 nova_compute[254900]: 2025-12-02 11:31:15.044 254904 DEBUG oslo_concurrency.lockutils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:31:15 np0005542249 nova_compute[254900]: 2025-12-02 11:31:15.045 254904 DEBUG oslo_concurrency.lockutils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:31:15 np0005542249 nova_compute[254900]: 2025-12-02 11:31:15.056 254904 DEBUG nova.virt.hardware [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 06:31:15 np0005542249 nova_compute[254900]: 2025-12-02 11:31:15.057 254904 INFO nova.compute.claims [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 06:31:15 np0005542249 systemd[1]: Started libpod-conmon-278331ac414445fda648393dc2deff58b805d69b3d278f12ace151d6a7b8fa73.scope.
Dec  2 06:31:15 np0005542249 podman[291034]: 2025-12-02 11:31:14.990076846 +0000 UTC m=+0.030888578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:31:15 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:31:15 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1660590b0b11e1a5239a41f7a41e378a564a45d9a65ad8869c59cdcdb49925/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:31:15 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1660590b0b11e1a5239a41f7a41e378a564a45d9a65ad8869c59cdcdb49925/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:31:15 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1660590b0b11e1a5239a41f7a41e378a564a45d9a65ad8869c59cdcdb49925/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:31:15 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1660590b0b11e1a5239a41f7a41e378a564a45d9a65ad8869c59cdcdb49925/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:31:15 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1660590b0b11e1a5239a41f7a41e378a564a45d9a65ad8869c59cdcdb49925/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:31:15 np0005542249 podman[291034]: 2025-12-02 11:31:15.12359654 +0000 UTC m=+0.164408212 container init 278331ac414445fda648393dc2deff58b805d69b3d278f12ace151d6a7b8fa73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shaw, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:31:15 np0005542249 podman[291034]: 2025-12-02 11:31:15.13022322 +0000 UTC m=+0.171034872 container start 278331ac414445fda648393dc2deff58b805d69b3d278f12ace151d6a7b8fa73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shaw, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  2 06:31:15 np0005542249 podman[291034]: 2025-12-02 11:31:15.133714034 +0000 UTC m=+0.174525716 container attach 278331ac414445fda648393dc2deff58b805d69b3d278f12ace151d6a7b8fa73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shaw, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:31:15 np0005542249 nova_compute[254900]: 2025-12-02 11:31:15.205 254904 DEBUG oslo_concurrency.processutils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:31:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:31:15 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/589054796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:31:15 np0005542249 nova_compute[254900]: 2025-12-02 11:31:15.719 254904 DEBUG oslo_concurrency.processutils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:31:15 np0005542249 nova_compute[254900]: 2025-12-02 11:31:15.731 254904 DEBUG nova.compute.provider_tree [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:31:15 np0005542249 nova_compute[254900]: 2025-12-02 11:31:15.750 254904 DEBUG nova.scheduler.client.report [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:31:15 np0005542249 nova_compute[254900]: 2025-12-02 11:31:15.775 254904 DEBUG oslo_concurrency.lockutils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:31:15 np0005542249 nova_compute[254900]: 2025-12-02 11:31:15.777 254904 DEBUG nova.compute.manager [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 06:31:15 np0005542249 nova_compute[254900]: 2025-12-02 11:31:15.847 254904 DEBUG nova.compute.manager [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 06:31:15 np0005542249 nova_compute[254900]: 2025-12-02 11:31:15.849 254904 DEBUG nova.network.neutron [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 06:31:15 np0005542249 nova_compute[254900]: 2025-12-02 11:31:15.876 254904 INFO nova.virt.libvirt.driver [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 06:31:15 np0005542249 nova_compute[254900]: 2025-12-02 11:31:15.894 254904 DEBUG nova.compute.manager [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 06:31:15 np0005542249 nova_compute[254900]: 2025-12-02 11:31:15.953 254904 INFO nova.virt.block_device [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Booting with volume 12023da5-5883-4d9a-868f-46e516f8d4bb at /dev/vda#033[00m
Dec  2 06:31:16 np0005542249 nova_compute[254900]: 2025-12-02 11:31:16.036 254904 DEBUG nova.policy [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6ccb73a613554d938221b4bf46d7ae83', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '625a6939c31646a4a83ea851774cf28c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 06:31:16 np0005542249 nova_compute[254900]: 2025-12-02 11:31:16.071 254904 DEBUG os_brick.utils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:31:16 np0005542249 nova_compute[254900]: 2025-12-02 11:31:16.072 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:31:16 np0005542249 nova_compute[254900]: 2025-12-02 11:31:16.089 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:31:16 np0005542249 nova_compute[254900]: 2025-12-02 11:31:16.089 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[dfd238b5-f1aa-4ded-ba4d-b75a45e7da5c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:16 np0005542249 nova_compute[254900]: 2025-12-02 11:31:16.091 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:31:16 np0005542249 nova_compute[254900]: 2025-12-02 11:31:16.102 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:31:16 np0005542249 nova_compute[254900]: 2025-12-02 11:31:16.103 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[562951b6-33dd-41bc-acef-f7b887fcbbc7]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:16 np0005542249 nova_compute[254900]: 2025-12-02 11:31:16.106 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:31:16 np0005542249 nova_compute[254900]: 2025-12-02 11:31:16.120 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:31:16 np0005542249 nova_compute[254900]: 2025-12-02 11:31:16.121 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[23d94ff2-57ac-4f96-813e-e63a9f564d06]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:16 np0005542249 nova_compute[254900]: 2025-12-02 11:31:16.122 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[823e2ebb-aacd-451d-9a2b-116f1f07a4fc]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:16 np0005542249 nova_compute[254900]: 2025-12-02 11:31:16.123 254904 DEBUG oslo_concurrency.processutils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:31:16 np0005542249 nova_compute[254900]: 2025-12-02 11:31:16.163 254904 DEBUG oslo_concurrency.processutils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "nvme version" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:31:16 np0005542249 nova_compute[254900]: 2025-12-02 11:31:16.171 254904 DEBUG os_brick.initiator.connectors.lightos [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:31:16 np0005542249 nova_compute[254900]: 2025-12-02 11:31:16.173 254904 DEBUG os_brick.initiator.connectors.lightos [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:31:16 np0005542249 nova_compute[254900]: 2025-12-02 11:31:16.174 254904 DEBUG os_brick.initiator.connectors.lightos [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:31:16 np0005542249 nova_compute[254900]: 2025-12-02 11:31:16.174 254904 DEBUG os_brick.utils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] <== get_connector_properties: return (103ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:31:16 np0005542249 nova_compute[254900]: 2025-12-02 11:31:16.175 254904 DEBUG nova.virt.block_device [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Updating existing volume attachment record: b77f2f0d-c2d5-41b8-a1e8-ea911376fb65 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:31:16 np0005542249 fervent_shaw[291050]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:31:16 np0005542249 fervent_shaw[291050]: --> relative data size: 1.0
Dec  2 06:31:16 np0005542249 fervent_shaw[291050]: --> All data devices are unavailable
Dec  2 06:31:16 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1649: 321 pgs: 321 active+clean; 233 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 273 KiB/s rd, 6.4 MiB/s wr, 37 op/s
Dec  2 06:31:16 np0005542249 systemd[1]: libpod-278331ac414445fda648393dc2deff58b805d69b3d278f12ace151d6a7b8fa73.scope: Deactivated successfully.
Dec  2 06:31:16 np0005542249 systemd[1]: libpod-278331ac414445fda648393dc2deff58b805d69b3d278f12ace151d6a7b8fa73.scope: Consumed 1.173s CPU time.
Dec  2 06:31:16 np0005542249 podman[291034]: 2025-12-02 11:31:16.362308001 +0000 UTC m=+1.403119673 container died 278331ac414445fda648393dc2deff58b805d69b3d278f12ace151d6a7b8fa73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shaw, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:31:16 np0005542249 systemd[1]: var-lib-containers-storage-overlay-7d1660590b0b11e1a5239a41f7a41e378a564a45d9a65ad8869c59cdcdb49925-merged.mount: Deactivated successfully.
Dec  2 06:31:16 np0005542249 podman[291034]: 2025-12-02 11:31:16.444900197 +0000 UTC m=+1.485711859 container remove 278331ac414445fda648393dc2deff58b805d69b3d278f12ace151d6a7b8fa73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shaw, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  2 06:31:16 np0005542249 systemd[1]: libpod-conmon-278331ac414445fda648393dc2deff58b805d69b3d278f12ace151d6a7b8fa73.scope: Deactivated successfully.
Dec  2 06:31:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:31:16 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/388806355' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:31:16 np0005542249 nova_compute[254900]: 2025-12-02 11:31:16.886 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:16.887 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:23:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:25:d0:a1:75:ed'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:31:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:16.888 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 06:31:16 np0005542249 nova_compute[254900]: 2025-12-02 11:31:16.984 254904 DEBUG nova.network.neutron [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Successfully created port: 8ac6e5bf-779d-409f-90e8-7ee2dbf72367 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 06:31:17 np0005542249 nova_compute[254900]: 2025-12-02 11:31:17.190 254904 DEBUG oslo_concurrency.lockutils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "8c463218-c639-4256-b580-f16aaa113a7f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:31:17 np0005542249 nova_compute[254900]: 2025-12-02 11:31:17.190 254904 DEBUG oslo_concurrency.lockutils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "8c463218-c639-4256-b580-f16aaa113a7f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:31:17 np0005542249 nova_compute[254900]: 2025-12-02 11:31:17.200 254904 DEBUG nova.compute.manager [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 06:31:17 np0005542249 nova_compute[254900]: 2025-12-02 11:31:17.201 254904 DEBUG nova.virt.libvirt.driver [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 06:31:17 np0005542249 nova_compute[254900]: 2025-12-02 11:31:17.202 254904 INFO nova.virt.libvirt.driver [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Creating image(s)#033[00m
Dec  2 06:31:17 np0005542249 nova_compute[254900]: 2025-12-02 11:31:17.202 254904 DEBUG nova.virt.libvirt.driver [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  2 06:31:17 np0005542249 nova_compute[254900]: 2025-12-02 11:31:17.202 254904 DEBUG nova.virt.libvirt.driver [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Ensure instance console log exists: /var/lib/nova/instances/300cc277-5780-4174-88ed-a942194a10b9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 06:31:17 np0005542249 nova_compute[254900]: 2025-12-02 11:31:17.202 254904 DEBUG oslo_concurrency.lockutils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:31:17 np0005542249 nova_compute[254900]: 2025-12-02 11:31:17.203 254904 DEBUG oslo_concurrency.lockutils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:31:17 np0005542249 nova_compute[254900]: 2025-12-02 11:31:17.203 254904 DEBUG oslo_concurrency.lockutils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:31:17 np0005542249 nova_compute[254900]: 2025-12-02 11:31:17.210 254904 DEBUG nova.compute.manager [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 06:31:17 np0005542249 nova_compute[254900]: 2025-12-02 11:31:17.283 254904 DEBUG oslo_concurrency.lockutils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:31:17 np0005542249 nova_compute[254900]: 2025-12-02 11:31:17.283 254904 DEBUG oslo_concurrency.lockutils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:31:17 np0005542249 nova_compute[254900]: 2025-12-02 11:31:17.292 254904 DEBUG nova.virt.hardware [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 06:31:17 np0005542249 nova_compute[254900]: 2025-12-02 11:31:17.292 254904 INFO nova.compute.claims [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 06:31:17 np0005542249 nova_compute[254900]: 2025-12-02 11:31:17.380 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:17 np0005542249 nova_compute[254900]: 2025-12-02 11:31:17.413 254904 DEBUG oslo_concurrency.processutils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:31:17 np0005542249 podman[291263]: 2025-12-02 11:31:17.438209192 +0000 UTC m=+0.061793944 container create 16a4301237d47311bad662cbaa23443edd22ba039d2819519361b83dc0656f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_borg, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:31:17 np0005542249 systemd[1]: Started libpod-conmon-16a4301237d47311bad662cbaa23443edd22ba039d2819519361b83dc0656f2a.scope.
Dec  2 06:31:17 np0005542249 podman[291263]: 2025-12-02 11:31:17.407437539 +0000 UTC m=+0.031022271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:31:17 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:31:17 np0005542249 podman[291263]: 2025-12-02 11:31:17.537241674 +0000 UTC m=+0.160826416 container init 16a4301237d47311bad662cbaa23443edd22ba039d2819519361b83dc0656f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_borg, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:31:17 np0005542249 podman[291263]: 2025-12-02 11:31:17.544605373 +0000 UTC m=+0.168190105 container start 16a4301237d47311bad662cbaa23443edd22ba039d2819519361b83dc0656f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:31:17 np0005542249 podman[291263]: 2025-12-02 11:31:17.548305273 +0000 UTC m=+0.171890005 container attach 16a4301237d47311bad662cbaa23443edd22ba039d2819519361b83dc0656f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  2 06:31:17 np0005542249 systemd[1]: libpod-16a4301237d47311bad662cbaa23443edd22ba039d2819519361b83dc0656f2a.scope: Deactivated successfully.
Dec  2 06:31:17 np0005542249 frosty_borg[291286]: 167 167
Dec  2 06:31:17 np0005542249 conmon[291286]: conmon 16a4301237d47311bad6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-16a4301237d47311bad662cbaa23443edd22ba039d2819519361b83dc0656f2a.scope/container/memory.events
Dec  2 06:31:17 np0005542249 podman[291263]: 2025-12-02 11:31:17.555252821 +0000 UTC m=+0.178837573 container died 16a4301237d47311bad662cbaa23443edd22ba039d2819519361b83dc0656f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_borg, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:31:17 np0005542249 podman[291278]: 2025-12-02 11:31:17.566944788 +0000 UTC m=+0.081711823 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2)
Dec  2 06:31:17 np0005542249 systemd[1]: var-lib-containers-storage-overlay-a58371f9c7b889ebac3764c6b681708e35125a1631d148516538d813ca0c559d-merged.mount: Deactivated successfully.
Dec  2 06:31:17 np0005542249 podman[291263]: 2025-12-02 11:31:17.594524855 +0000 UTC m=+0.218109567 container remove 16a4301237d47311bad662cbaa23443edd22ba039d2819519361b83dc0656f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  2 06:31:17 np0005542249 systemd[1]: libpod-conmon-16a4301237d47311bad662cbaa23443edd22ba039d2819519361b83dc0656f2a.scope: Deactivated successfully.
Dec  2 06:31:17 np0005542249 podman[291340]: 2025-12-02 11:31:17.797705056 +0000 UTC m=+0.069444322 container create 7621eed46cdbb588a54357fdf8b6993ae2b7e7d5ffef3e7c94aad4cf1585950d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chaplygin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  2 06:31:17 np0005542249 systemd[1]: Started libpod-conmon-7621eed46cdbb588a54357fdf8b6993ae2b7e7d5ffef3e7c94aad4cf1585950d.scope.
Dec  2 06:31:17 np0005542249 podman[291340]: 2025-12-02 11:31:17.76423181 +0000 UTC m=+0.035971126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:31:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:31:17 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:31:17 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a52fdc17a2a6463eb223f2095920629f337fa01e0da744f7c3f54d411ad6834/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:31:17 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a52fdc17a2a6463eb223f2095920629f337fa01e0da744f7c3f54d411ad6834/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:31:17 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a52fdc17a2a6463eb223f2095920629f337fa01e0da744f7c3f54d411ad6834/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:31:17 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a52fdc17a2a6463eb223f2095920629f337fa01e0da744f7c3f54d411ad6834/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:31:17 np0005542249 podman[291340]: 2025-12-02 11:31:17.901320712 +0000 UTC m=+0.173060018 container init 7621eed46cdbb588a54357fdf8b6993ae2b7e7d5ffef3e7c94aad4cf1585950d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chaplygin, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:31:17 np0005542249 podman[291340]: 2025-12-02 11:31:17.91418069 +0000 UTC m=+0.185919946 container start 7621eed46cdbb588a54357fdf8b6993ae2b7e7d5ffef3e7c94aad4cf1585950d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:31:17 np0005542249 podman[291340]: 2025-12-02 11:31:17.918084666 +0000 UTC m=+0.189823982 container attach 7621eed46cdbb588a54357fdf8b6993ae2b7e7d5ffef3e7c94aad4cf1585950d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:31:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:31:17 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1729046235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:31:17 np0005542249 nova_compute[254900]: 2025-12-02 11:31:17.948 254904 DEBUG oslo_concurrency.processutils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:31:17 np0005542249 nova_compute[254900]: 2025-12-02 11:31:17.959 254904 DEBUG nova.compute.provider_tree [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:31:17 np0005542249 nova_compute[254900]: 2025-12-02 11:31:17.978 254904 DEBUG nova.scheduler.client.report [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.015 254904 DEBUG oslo_concurrency.lockutils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.016 254904 DEBUG nova.compute.manager [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.067 254904 DEBUG nova.compute.manager [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.073 254904 DEBUG nova.network.neutron [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.096 254904 INFO nova.virt.libvirt.driver [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.113 254904 DEBUG nova.compute.manager [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.156 254904 INFO nova.virt.block_device [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Booting with volume aca02ad2-47f5-4d77-9df0-2c95a1cb88a2 at /dev/vda#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.274 254904 DEBUG os_brick.utils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.276 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.294 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.295 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[658b4738-d0db-4e53-8c9e-ec4f651ccf70]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.298 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.312 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.312 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[89eea5e5-c67d-4f14-8d85-3f2395b8a55e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.315 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.330 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.330 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[20bccb58-28a6-4bbd-90ef-6a7a27fb4591]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.336 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[eff26d68-d898-4ee3-91c1-3972a9b1f011]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.337 254904 DEBUG oslo_concurrency.processutils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:31:18 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1650: 321 pgs: 321 active+clean; 283 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 11 MiB/s wr, 81 op/s
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.382 254904 DEBUG oslo_concurrency.processutils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CMD "nvme version" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.387 254904 DEBUG os_brick.initiator.connectors.lightos [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.388 254904 DEBUG os_brick.initiator.connectors.lightos [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.388 254904 DEBUG os_brick.initiator.connectors.lightos [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.389 254904 DEBUG os_brick.utils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] <== get_connector_properties: return (113ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.390 254904 DEBUG nova.virt.block_device [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Updating existing volume attachment record: caf30f54-1de9-49c1-82e5-3ba6e127696f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.588 254904 DEBUG nova.network.neutron [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Successfully updated port: 8ac6e5bf-779d-409f-90e8-7ee2dbf72367 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.610 254904 DEBUG oslo_concurrency.lockutils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "refresh_cache-300cc277-5780-4174-88ed-a942194a10b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.611 254904 DEBUG oslo_concurrency.lockutils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquired lock "refresh_cache-300cc277-5780-4174-88ed-a942194a10b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.611 254904 DEBUG nova.network.neutron [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.640 254904 DEBUG nova.policy [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1caa62e7ee8b42be98bc34780a7197f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a893d0c223f746328e706d7491d73b20', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.697 254904 DEBUG nova.compute.manager [req-b4cf5925-1b3d-4924-859e-fd3ea415ce2f req-3ac14e8c-dcf0-46dd-8f73-f3e48792b4be 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Received event network-changed-8ac6e5bf-779d-409f-90e8-7ee2dbf72367 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.698 254904 DEBUG nova.compute.manager [req-b4cf5925-1b3d-4924-859e-fd3ea415ce2f req-3ac14e8c-dcf0-46dd-8f73-f3e48792b4be 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Refreshing instance network info cache due to event network-changed-8ac6e5bf-779d-409f-90e8-7ee2dbf72367. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:31:18 np0005542249 nova_compute[254900]: 2025-12-02 11:31:18.699 254904 DEBUG oslo_concurrency.lockutils [req-b4cf5925-1b3d-4924-859e-fd3ea415ce2f req-3ac14e8c-dcf0-46dd-8f73-f3e48792b4be 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-300cc277-5780-4174-88ed-a942194a10b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]: {
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:    "0": [
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:        {
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "devices": [
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "/dev/loop3"
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            ],
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "lv_name": "ceph_lv0",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "lv_size": "21470642176",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "name": "ceph_lv0",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "tags": {
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.cluster_name": "ceph",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.crush_device_class": "",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.encrypted": "0",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.osd_id": "0",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.type": "block",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.vdo": "0"
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            },
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "type": "block",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "vg_name": "ceph_vg0"
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:        }
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:    ],
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:    "1": [
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:        {
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "devices": [
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "/dev/loop4"
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            ],
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "lv_name": "ceph_lv1",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "lv_size": "21470642176",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "name": "ceph_lv1",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "tags": {
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.cluster_name": "ceph",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.crush_device_class": "",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.encrypted": "0",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.osd_id": "1",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.type": "block",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.vdo": "0"
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            },
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "type": "block",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "vg_name": "ceph_vg1"
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:        }
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:    ],
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:    "2": [
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:        {
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "devices": [
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "/dev/loop5"
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            ],
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "lv_name": "ceph_lv2",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "lv_size": "21470642176",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "name": "ceph_lv2",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "tags": {
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.cluster_name": "ceph",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.crush_device_class": "",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.encrypted": "0",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.osd_id": "2",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.type": "block",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:                "ceph.vdo": "0"
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            },
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "type": "block",
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:            "vg_name": "ceph_vg2"
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:        }
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]:    ]
Dec  2 06:31:18 np0005542249 hopeful_chaplygin[291357]: }
Dec  2 06:31:18 np0005542249 systemd[1]: libpod-7621eed46cdbb588a54357fdf8b6993ae2b7e7d5ffef3e7c94aad4cf1585950d.scope: Deactivated successfully.
Dec  2 06:31:18 np0005542249 podman[291340]: 2025-12-02 11:31:18.772194722 +0000 UTC m=+1.043934058 container died 7621eed46cdbb588a54357fdf8b6993ae2b7e7d5ffef3e7c94aad4cf1585950d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  2 06:31:18 np0005542249 systemd[1]: var-lib-containers-storage-overlay-7a52fdc17a2a6463eb223f2095920629f337fa01e0da744f7c3f54d411ad6834-merged.mount: Deactivated successfully.
Dec  2 06:31:18 np0005542249 podman[291340]: 2025-12-02 11:31:18.831176439 +0000 UTC m=+1.102915655 container remove 7621eed46cdbb588a54357fdf8b6993ae2b7e7d5ffef3e7c94aad4cf1585950d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_chaplygin, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:31:18 np0005542249 systemd[1]: libpod-conmon-7621eed46cdbb588a54357fdf8b6993ae2b7e7d5ffef3e7c94aad4cf1585950d.scope: Deactivated successfully.
Dec  2 06:31:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:31:18 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2809026937' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.022 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.048 254904 DEBUG nova.network.neutron [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.296 254904 DEBUG nova.compute.manager [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.298 254904 DEBUG nova.virt.libvirt.driver [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.298 254904 INFO nova.virt.libvirt.driver [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Creating image(s)#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.299 254904 DEBUG nova.virt.libvirt.driver [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.299 254904 DEBUG nova.virt.libvirt.driver [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Ensure instance console log exists: /var/lib/nova/instances/8c463218-c639-4256-b580-f16aaa113a7f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.300 254904 DEBUG oslo_concurrency.lockutils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.300 254904 DEBUG oslo_concurrency.lockutils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.300 254904 DEBUG oslo_concurrency.lockutils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.610 254904 DEBUG nova.network.neutron [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Successfully created port: 9e69b326-3b7f-4fee-8050-9ff725b75719 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.661 254904 DEBUG nova.network.neutron [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Updating instance_info_cache with network_info: [{"id": "8ac6e5bf-779d-409f-90e8-7ee2dbf72367", "address": "fa:16:3e:28:86:70", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ac6e5bf-77", "ovs_interfaceid": "8ac6e5bf-779d-409f-90e8-7ee2dbf72367", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.680 254904 DEBUG oslo_concurrency.lockutils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Releasing lock "refresh_cache-300cc277-5780-4174-88ed-a942194a10b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.680 254904 DEBUG nova.compute.manager [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Instance network_info: |[{"id": "8ac6e5bf-779d-409f-90e8-7ee2dbf72367", "address": "fa:16:3e:28:86:70", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ac6e5bf-77", "ovs_interfaceid": "8ac6e5bf-779d-409f-90e8-7ee2dbf72367", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.681 254904 DEBUG oslo_concurrency.lockutils [req-b4cf5925-1b3d-4924-859e-fd3ea415ce2f req-3ac14e8c-dcf0-46dd-8f73-f3e48792b4be 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-300cc277-5780-4174-88ed-a942194a10b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.681 254904 DEBUG nova.network.neutron [req-b4cf5925-1b3d-4924-859e-fd3ea415ce2f req-3ac14e8c-dcf0-46dd-8f73-f3e48792b4be 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Refreshing network info cache for port 8ac6e5bf-779d-409f-90e8-7ee2dbf72367 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.685 254904 DEBUG nova.virt.libvirt.driver [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Start _get_guest_xml network_info=[{"id": "8ac6e5bf-779d-409f-90e8-7ee2dbf72367", "address": "fa:16:3e:28:86:70", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ac6e5bf-77", "ovs_interfaceid": "8ac6e5bf-779d-409f-90e8-7ee2dbf72367", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-12023da5-5883-4d9a-868f-46e516f8d4bb', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '12023da5-5883-4d9a-868f-46e516f8d4bb', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '300cc277-5780-4174-88ed-a942194a10b9', 'attached_at': '', 'detached_at': '', 'volume_id': '12023da5-5883-4d9a-868f-46e516f8d4bb', 'serial': '12023da5-5883-4d9a-868f-46e516f8d4bb'}, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'attachment_id': 'b77f2f0d-c2d5-41b8-a1e8-ea911376fb65', 'delete_on_termination': False, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.692 254904 WARNING nova.virt.libvirt.driver [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.701 254904 DEBUG nova.virt.libvirt.host [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.702 254904 DEBUG nova.virt.libvirt.host [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.707 254904 DEBUG nova.virt.libvirt.host [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.708 254904 DEBUG nova.virt.libvirt.host [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.708 254904 DEBUG nova.virt.libvirt.driver [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.708 254904 DEBUG nova.virt.hardware [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.709 254904 DEBUG nova.virt.hardware [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.709 254904 DEBUG nova.virt.hardware [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.709 254904 DEBUG nova.virt.hardware [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.709 254904 DEBUG nova.virt.hardware [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.710 254904 DEBUG nova.virt.hardware [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.710 254904 DEBUG nova.virt.hardware [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.710 254904 DEBUG nova.virt.hardware [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.710 254904 DEBUG nova.virt.hardware [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.710 254904 DEBUG nova.virt.hardware [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.711 254904 DEBUG nova.virt.hardware [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.744 254904 DEBUG nova.storage.rbd_utils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] rbd image 300cc277-5780-4174-88ed-a942194a10b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:31:19 np0005542249 nova_compute[254900]: 2025-12-02 11:31:19.750 254904 DEBUG oslo_concurrency.processutils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:31:19 np0005542249 podman[291527]: 2025-12-02 11:31:19.816754555 +0000 UTC m=+0.068082684 container create 289ffd55fc39e05b7a8efd1756c7eb52cc3876eebca9adc59e9e9f944a0db803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:31:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:19.846 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:31:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:19.846 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:31:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:19.848 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:31:19 np0005542249 systemd[1]: Started libpod-conmon-289ffd55fc39e05b7a8efd1756c7eb52cc3876eebca9adc59e9e9f944a0db803.scope.
Dec  2 06:31:19 np0005542249 podman[291527]: 2025-12-02 11:31:19.791729418 +0000 UTC m=+0.043057587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:31:19 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:31:19 np0005542249 podman[291527]: 2025-12-02 11:31:19.912885459 +0000 UTC m=+0.164213598 container init 289ffd55fc39e05b7a8efd1756c7eb52cc3876eebca9adc59e9e9f944a0db803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:31:19 np0005542249 podman[291527]: 2025-12-02 11:31:19.925202662 +0000 UTC m=+0.176530781 container start 289ffd55fc39e05b7a8efd1756c7eb52cc3876eebca9adc59e9e9f944a0db803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:31:19 np0005542249 podman[291527]: 2025-12-02 11:31:19.928239074 +0000 UTC m=+0.179567213 container attach 289ffd55fc39e05b7a8efd1756c7eb52cc3876eebca9adc59e9e9f944a0db803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  2 06:31:19 np0005542249 sharp_nash[291561]: 167 167
Dec  2 06:31:19 np0005542249 systemd[1]: libpod-289ffd55fc39e05b7a8efd1756c7eb52cc3876eebca9adc59e9e9f944a0db803.scope: Deactivated successfully.
Dec  2 06:31:19 np0005542249 podman[291527]: 2025-12-02 11:31:19.934339709 +0000 UTC m=+0.185667828 container died 289ffd55fc39e05b7a8efd1756c7eb52cc3876eebca9adc59e9e9f944a0db803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_nash, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  2 06:31:19 np0005542249 systemd[1]: var-lib-containers-storage-overlay-1e56447bb04eff1c3ae956b10ec8998d6ad174d46b7d7f2bba8651180d404549-merged.mount: Deactivated successfully.
Dec  2 06:31:19 np0005542249 podman[291527]: 2025-12-02 11:31:19.980249912 +0000 UTC m=+0.231578041 container remove 289ffd55fc39e05b7a8efd1756c7eb52cc3876eebca9adc59e9e9f944a0db803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_nash, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:31:20 np0005542249 systemd[1]: libpod-conmon-289ffd55fc39e05b7a8efd1756c7eb52cc3876eebca9adc59e9e9f944a0db803.scope: Deactivated successfully.
Dec  2 06:31:20 np0005542249 podman[291603]: 2025-12-02 11:31:20.226574542 +0000 UTC m=+0.077882800 container create 39343ccbad66b9425f562da63546cc658f215cc5000270b194a852e99c270eb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_neumann, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:31:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:31:20 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2885947171' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.262 254904 DEBUG oslo_concurrency.processutils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:31:20 np0005542249 systemd[1]: Started libpod-conmon-39343ccbad66b9425f562da63546cc658f215cc5000270b194a852e99c270eb2.scope.
Dec  2 06:31:20 np0005542249 podman[291603]: 2025-12-02 11:31:20.195317145 +0000 UTC m=+0.046625463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.296 254904 DEBUG nova.virt.libvirt.vif [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:31:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-444799541',display_name='tempest-TestVolumeBootPattern-server-444799541',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-444799541',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDl9Chitcp+6ZZ9O/so1iQpbrg+ZOVWOrATMsWTbgaWcZg2lFiQK4KEUyaqp5+G/z2wPorJssN622GdMYPRLScxIeivbRrFeE5q310MfETTcDT4f8HB9OmcWcicW5ZF4QA==',key_name='tempest-TestVolumeBootPattern-765108891',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='625a6939c31646a4a83ea851774cf28c',ramdisk_id='',reservation_id='r-ex0tqxlf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1396850361',owner_user_name='tempest-TestVolumeBootPattern-1396850361-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:31:15Z,user_data=None,user_id='6ccb73a613554d938221b4bf46d7ae83',uuid=300cc277-5780-4174-88ed-a942194a10b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8ac6e5bf-779d-409f-90e8-7ee2dbf72367", "address": "fa:16:3e:28:86:70", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ac6e5bf-77", "ovs_interfaceid": "8ac6e5bf-779d-409f-90e8-7ee2dbf72367", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.297 254904 DEBUG nova.network.os_vif_util [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converting VIF {"id": "8ac6e5bf-779d-409f-90e8-7ee2dbf72367", "address": "fa:16:3e:28:86:70", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ac6e5bf-77", "ovs_interfaceid": "8ac6e5bf-779d-409f-90e8-7ee2dbf72367", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.297 254904 DEBUG nova.network.os_vif_util [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:86:70,bridge_name='br-int',has_traffic_filtering=True,id=8ac6e5bf-779d-409f-90e8-7ee2dbf72367,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ac6e5bf-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.299 254904 DEBUG nova.objects.instance [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lazy-loading 'pci_devices' on Instance uuid 300cc277-5780-4174-88ed-a942194a10b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:31:20 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:31:20 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f37885ba0cf39d36c307ce1652567a9c35e73b08d118448facce2dc80da7346/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:31:20 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f37885ba0cf39d36c307ce1652567a9c35e73b08d118448facce2dc80da7346/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:31:20 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f37885ba0cf39d36c307ce1652567a9c35e73b08d118448facce2dc80da7346/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:31:20 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f37885ba0cf39d36c307ce1652567a9c35e73b08d118448facce2dc80da7346/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.318 254904 DEBUG nova.virt.libvirt.driver [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:31:20 np0005542249 nova_compute[254900]:  <uuid>300cc277-5780-4174-88ed-a942194a10b9</uuid>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:  <name>instance-00000019</name>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <nova:name>tempest-TestVolumeBootPattern-server-444799541</nova:name>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:31:19</nova:creationTime>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:31:20 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:        <nova:user uuid="6ccb73a613554d938221b4bf46d7ae83">tempest-TestVolumeBootPattern-1396850361-project-member</nova:user>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:        <nova:project uuid="625a6939c31646a4a83ea851774cf28c">tempest-TestVolumeBootPattern-1396850361</nova:project>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:        <nova:port uuid="8ac6e5bf-779d-409f-90e8-7ee2dbf72367">
Dec  2 06:31:20 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <entry name="serial">300cc277-5780-4174-88ed-a942194a10b9</entry>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <entry name="uuid">300cc277-5780-4174-88ed-a942194a10b9</entry>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/300cc277-5780-4174-88ed-a942194a10b9_disk.config">
Dec  2 06:31:20 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:31:20 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="volumes/volume-12023da5-5883-4d9a-868f-46e516f8d4bb">
Dec  2 06:31:20 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:31:20 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <serial>12023da5-5883-4d9a-868f-46e516f8d4bb</serial>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:28:86:70"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <target dev="tap8ac6e5bf-77"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/300cc277-5780-4174-88ed-a942194a10b9/console.log" append="off"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:31:20 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:31:20 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:31:20 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:31:20 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.320 254904 DEBUG nova.compute.manager [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Preparing to wait for external event network-vif-plugged-8ac6e5bf-779d-409f-90e8-7ee2dbf72367 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.320 254904 DEBUG oslo_concurrency.lockutils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "300cc277-5780-4174-88ed-a942194a10b9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.321 254904 DEBUG oslo_concurrency.lockutils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "300cc277-5780-4174-88ed-a942194a10b9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.321 254904 DEBUG oslo_concurrency.lockutils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "300cc277-5780-4174-88ed-a942194a10b9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.322 254904 DEBUG nova.virt.libvirt.vif [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:31:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-444799541',display_name='tempest-TestVolumeBootPattern-server-444799541',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-444799541',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDl9Chitcp+6ZZ9O/so1iQpbrg+ZOVWOrATMsWTbgaWcZg2lFiQK4KEUyaqp5+G/z2wPorJssN622GdMYPRLScxIeivbRrFeE5q310MfETTcDT4f8HB9OmcWcicW5ZF4QA==',key_name='tempest-TestVolumeBootPattern-765108891',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='625a6939c31646a4a83ea851774cf28c',ramdisk_id='',reservation_id='r-ex0tqxlf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1396850361',owner_user_name='tempest-TestVolumeBootPattern-1396850361-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:31:15Z,user_data=None,user_id='6ccb73a613554d938221b4bf46d7ae83',uuid=300cc277-5780-4174-88ed-a942194a10b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8ac6e5bf-779d-409f-90e8-7ee2dbf72367", "address": "fa:16:3e:28:86:70", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ac6e5bf-77", "ovs_interfaceid": "8ac6e5bf-779d-409f-90e8-7ee2dbf72367", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.323 254904 DEBUG nova.network.os_vif_util [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converting VIF {"id": "8ac6e5bf-779d-409f-90e8-7ee2dbf72367", "address": "fa:16:3e:28:86:70", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ac6e5bf-77", "ovs_interfaceid": "8ac6e5bf-779d-409f-90e8-7ee2dbf72367", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.324 254904 DEBUG nova.network.os_vif_util [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:86:70,bridge_name='br-int',has_traffic_filtering=True,id=8ac6e5bf-779d-409f-90e8-7ee2dbf72367,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ac6e5bf-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.324 254904 DEBUG os_vif [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:86:70,bridge_name='br-int',has_traffic_filtering=True,id=8ac6e5bf-779d-409f-90e8-7ee2dbf72367,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ac6e5bf-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.325 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.326 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.327 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:31:20 np0005542249 podman[291603]: 2025-12-02 11:31:20.327928206 +0000 UTC m=+0.179236474 container init 39343ccbad66b9425f562da63546cc658f215cc5000270b194a852e99c270eb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.332 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.333 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ac6e5bf-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.334 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8ac6e5bf-77, col_values=(('external_ids', {'iface-id': '8ac6e5bf-779d-409f-90e8-7ee2dbf72367', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:86:70', 'vm-uuid': '300cc277-5780-4174-88ed-a942194a10b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:31:20 np0005542249 podman[291603]: 2025-12-02 11:31:20.336333554 +0000 UTC m=+0.187641772 container start 39343ccbad66b9425f562da63546cc658f215cc5000270b194a852e99c270eb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_neumann, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.336 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:20 np0005542249 NetworkManager[48987]: <info>  [1764675080.3382] manager: (tap8ac6e5bf-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/125)
Dec  2 06:31:20 np0005542249 podman[291603]: 2025-12-02 11:31:20.340365973 +0000 UTC m=+0.191674221 container attach 39343ccbad66b9425f562da63546cc658f215cc5000270b194a852e99c270eb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_neumann, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  2 06:31:20 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1651: 321 pgs: 321 active+clean; 283 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 195 KiB/s rd, 9.5 MiB/s wr, 68 op/s
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.340 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.345 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.346 254904 INFO os_vif [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:86:70,bridge_name='br-int',has_traffic_filtering=True,id=8ac6e5bf-779d-409f-90e8-7ee2dbf72367,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ac6e5bf-77')#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.411 254904 DEBUG nova.virt.libvirt.driver [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.412 254904 DEBUG nova.virt.libvirt.driver [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.412 254904 DEBUG nova.virt.libvirt.driver [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] No VIF found with MAC fa:16:3e:28:86:70, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.412 254904 INFO nova.virt.libvirt.driver [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Using config drive#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.446 254904 DEBUG nova.storage.rbd_utils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] rbd image 300cc277-5780-4174-88ed-a942194a10b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.797 254904 INFO nova.virt.libvirt.driver [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Creating config drive at /var/lib/nova/instances/300cc277-5780-4174-88ed-a942194a10b9/disk.config#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.806 254904 DEBUG oslo_concurrency.processutils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/300cc277-5780-4174-88ed-a942194a10b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7nih_w4t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.943 254904 DEBUG oslo_concurrency.processutils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/300cc277-5780-4174-88ed-a942194a10b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7nih_w4t" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.988 254904 DEBUG nova.storage.rbd_utils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] rbd image 300cc277-5780-4174-88ed-a942194a10b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:31:20 np0005542249 nova_compute[254900]: 2025-12-02 11:31:20.995 254904 DEBUG oslo_concurrency.processutils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/300cc277-5780-4174-88ed-a942194a10b9/disk.config 300cc277-5780-4174-88ed-a942194a10b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.161 254904 DEBUG nova.network.neutron [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Successfully updated port: 9e69b326-3b7f-4fee-8050-9ff725b75719 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.182 254904 DEBUG oslo_concurrency.lockutils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "refresh_cache-8c463218-c639-4256-b580-f16aaa113a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.182 254904 DEBUG oslo_concurrency.lockutils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquired lock "refresh_cache-8c463218-c639-4256-b580-f16aaa113a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.182 254904 DEBUG nova.network.neutron [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.189 254904 DEBUG oslo_concurrency.processutils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/300cc277-5780-4174-88ed-a942194a10b9/disk.config 300cc277-5780-4174-88ed-a942194a10b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.189 254904 INFO nova.virt.libvirt.driver [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Deleting local config drive /var/lib/nova/instances/300cc277-5780-4174-88ed-a942194a10b9/disk.config because it was imported into RBD.#033[00m
Dec  2 06:31:21 np0005542249 kernel: tap8ac6e5bf-77: entered promiscuous mode
Dec  2 06:31:21 np0005542249 NetworkManager[48987]: <info>  [1764675081.2557] manager: (tap8ac6e5bf-77): new Tun device (/org/freedesktop/NetworkManager/Devices/126)
Dec  2 06:31:21 np0005542249 ovn_controller[153849]: 2025-12-02T11:31:21Z|00232|binding|INFO|Claiming lport 8ac6e5bf-779d-409f-90e8-7ee2dbf72367 for this chassis.
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.256 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:21 np0005542249 ovn_controller[153849]: 2025-12-02T11:31:21Z|00233|binding|INFO|8ac6e5bf-779d-409f-90e8-7ee2dbf72367: Claiming fa:16:3e:28:86:70 10.100.0.9
Dec  2 06:31:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:21.266 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:86:70 10.100.0.9'], port_security=['fa:16:3e:28:86:70 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '300cc277-5780-4174-88ed-a942194a10b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '625a6939c31646a4a83ea851774cf28c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bf93629e-6336-4a9c-a41d-6ce19e6b6662', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cc823ef0-3e69-4062-a488-a82f483f10bb, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=8ac6e5bf-779d-409f-90e8-7ee2dbf72367) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:31:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:21.267 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 8ac6e5bf-779d-409f-90e8-7ee2dbf72367 in datapath acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 bound to our chassis#033[00m
Dec  2 06:31:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:21.268 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.283 254904 DEBUG nova.compute.manager [req-e12c8cbe-ccd7-4dc3-9c2e-874a7624dea9 req-fc47aab5-bbc4-494f-81b2-377562caca44 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Received event network-changed-9e69b326-3b7f-4fee-8050-9ff725b75719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.284 254904 DEBUG nova.compute.manager [req-e12c8cbe-ccd7-4dc3-9c2e-874a7624dea9 req-fc47aab5-bbc4-494f-81b2-377562caca44 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Refreshing instance network info cache due to event network-changed-9e69b326-3b7f-4fee-8050-9ff725b75719. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.284 254904 DEBUG oslo_concurrency.lockutils [req-e12c8cbe-ccd7-4dc3-9c2e-874a7624dea9 req-fc47aab5-bbc4-494f-81b2-377562caca44 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-8c463218-c639-4256-b580-f16aaa113a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:31:21 np0005542249 ovn_controller[153849]: 2025-12-02T11:31:21Z|00234|binding|INFO|Setting lport 8ac6e5bf-779d-409f-90e8-7ee2dbf72367 ovn-installed in OVS
Dec  2 06:31:21 np0005542249 ovn_controller[153849]: 2025-12-02T11:31:21Z|00235|binding|INFO|Setting lport 8ac6e5bf-779d-409f-90e8-7ee2dbf72367 up in Southbound
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.287 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:21 np0005542249 systemd-udevd[291715]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.290 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:21.292 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[34a16866-6087-46db-a6a7-07305541c10d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:21 np0005542249 systemd-machined[216222]: New machine qemu-25-instance-00000019.
Dec  2 06:31:21 np0005542249 NetworkManager[48987]: <info>  [1764675081.3132] device (tap8ac6e5bf-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:31:21 np0005542249 NetworkManager[48987]: <info>  [1764675081.3144] device (tap8ac6e5bf-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:31:21 np0005542249 systemd[1]: Started Virtual Machine qemu-25-instance-00000019.
Dec  2 06:31:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:21.335 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[721ecb8f-bf59-47cf-8292-2be013c3f12c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:21.340 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[a20470d1-3cde-4a3d-bb3e-c01e3766995d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.344 254904 DEBUG nova.network.neutron [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 06:31:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:21.382 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[8aea685d-46ad-4f80-a0eb-1197111561a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:21.407 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[9bc3d4da-6eb8-452e-a37d-9bc66895b238]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacfaa8ac-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:73:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522321, 'reachable_time': 34711, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291735, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:21.432 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[2f5559e4-4eac-4cc1-b396-0ef578040285]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapacfaa8ac-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522335, 'tstamp': 522335}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291739, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapacfaa8ac-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522340, 'tstamp': 522340}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291739, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:21.435 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacfaa8ac-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.438 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.439 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:21.440 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapacfaa8ac-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:31:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:21.440 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:31:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:21.441 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapacfaa8ac-00, col_values=(('external_ids', {'iface-id': '1636ad30-406d-4138-823e-abbe7f4d87ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:31:21 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:21.442 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:31:21 np0005542249 angry_neumann[291622]: {
Dec  2 06:31:21 np0005542249 angry_neumann[291622]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:31:21 np0005542249 angry_neumann[291622]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:31:21 np0005542249 angry_neumann[291622]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:31:21 np0005542249 angry_neumann[291622]:        "osd_id": 0,
Dec  2 06:31:21 np0005542249 angry_neumann[291622]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:31:21 np0005542249 angry_neumann[291622]:        "type": "bluestore"
Dec  2 06:31:21 np0005542249 angry_neumann[291622]:    },
Dec  2 06:31:21 np0005542249 angry_neumann[291622]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:31:21 np0005542249 angry_neumann[291622]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:31:21 np0005542249 angry_neumann[291622]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:31:21 np0005542249 angry_neumann[291622]:        "osd_id": 2,
Dec  2 06:31:21 np0005542249 angry_neumann[291622]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:31:21 np0005542249 angry_neumann[291622]:        "type": "bluestore"
Dec  2 06:31:21 np0005542249 angry_neumann[291622]:    },
Dec  2 06:31:21 np0005542249 angry_neumann[291622]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:31:21 np0005542249 angry_neumann[291622]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:31:21 np0005542249 angry_neumann[291622]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:31:21 np0005542249 angry_neumann[291622]:        "osd_id": 1,
Dec  2 06:31:21 np0005542249 angry_neumann[291622]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:31:21 np0005542249 angry_neumann[291622]:        "type": "bluestore"
Dec  2 06:31:21 np0005542249 angry_neumann[291622]:    }
Dec  2 06:31:21 np0005542249 angry_neumann[291622]: }
Dec  2 06:31:21 np0005542249 systemd[1]: libpod-39343ccbad66b9425f562da63546cc658f215cc5000270b194a852e99c270eb2.scope: Deactivated successfully.
Dec  2 06:31:21 np0005542249 systemd[1]: libpod-39343ccbad66b9425f562da63546cc658f215cc5000270b194a852e99c270eb2.scope: Consumed 1.152s CPU time.
Dec  2 06:31:21 np0005542249 conmon[291622]: conmon 39343ccbad66b9425f56 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-39343ccbad66b9425f562da63546cc658f215cc5000270b194a852e99c270eb2.scope/container/memory.events
Dec  2 06:31:21 np0005542249 podman[291603]: 2025-12-02 11:31:21.517065054 +0000 UTC m=+1.368373302 container died 39343ccbad66b9425f562da63546cc658f215cc5000270b194a852e99c270eb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_neumann, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  2 06:31:21 np0005542249 systemd[1]: var-lib-containers-storage-overlay-4f37885ba0cf39d36c307ce1652567a9c35e73b08d118448facce2dc80da7346-merged.mount: Deactivated successfully.
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.565 254904 DEBUG nova.compute.manager [req-5c1e850b-c174-41b0-9a11-4fe7ff909f60 req-d2a88a96-5a9e-4816-989f-92bb9449f1a0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Received event network-vif-plugged-8ac6e5bf-779d-409f-90e8-7ee2dbf72367 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.565 254904 DEBUG oslo_concurrency.lockutils [req-5c1e850b-c174-41b0-9a11-4fe7ff909f60 req-d2a88a96-5a9e-4816-989f-92bb9449f1a0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "300cc277-5780-4174-88ed-a942194a10b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.565 254904 DEBUG oslo_concurrency.lockutils [req-5c1e850b-c174-41b0-9a11-4fe7ff909f60 req-d2a88a96-5a9e-4816-989f-92bb9449f1a0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "300cc277-5780-4174-88ed-a942194a10b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.566 254904 DEBUG oslo_concurrency.lockutils [req-5c1e850b-c174-41b0-9a11-4fe7ff909f60 req-d2a88a96-5a9e-4816-989f-92bb9449f1a0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "300cc277-5780-4174-88ed-a942194a10b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.566 254904 DEBUG nova.compute.manager [req-5c1e850b-c174-41b0-9a11-4fe7ff909f60 req-d2a88a96-5a9e-4816-989f-92bb9449f1a0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Processing event network-vif-plugged-8ac6e5bf-779d-409f-90e8-7ee2dbf72367 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 06:31:21 np0005542249 podman[291603]: 2025-12-02 11:31:21.582152527 +0000 UTC m=+1.433460745 container remove 39343ccbad66b9425f562da63546cc658f215cc5000270b194a852e99c270eb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_neumann, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:31:21 np0005542249 systemd[1]: libpod-conmon-39343ccbad66b9425f562da63546cc658f215cc5000270b194a852e99c270eb2.scope: Deactivated successfully.
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.624 254904 DEBUG nova.network.neutron [req-b4cf5925-1b3d-4924-859e-fd3ea415ce2f req-3ac14e8c-dcf0-46dd-8f73-f3e48792b4be 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Updated VIF entry in instance network info cache for port 8ac6e5bf-779d-409f-90e8-7ee2dbf72367. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.624 254904 DEBUG nova.network.neutron [req-b4cf5925-1b3d-4924-859e-fd3ea415ce2f req-3ac14e8c-dcf0-46dd-8f73-f3e48792b4be 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Updating instance_info_cache with network_info: [{"id": "8ac6e5bf-779d-409f-90e8-7ee2dbf72367", "address": "fa:16:3e:28:86:70", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ac6e5bf-77", "ovs_interfaceid": "8ac6e5bf-779d-409f-90e8-7ee2dbf72367", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:31:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:31:21 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:31:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.643 254904 DEBUG oslo_concurrency.lockutils [req-b4cf5925-1b3d-4924-859e-fd3ea415ce2f req-3ac14e8c-dcf0-46dd-8f73-f3e48792b4be 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-300cc277-5780-4174-88ed-a942194a10b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:31:21 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:31:21 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev e5d3d201-8b64-4546-82eb-396c55d7314b does not exist
Dec  2 06:31:21 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev a805bb2a-1d35-4750-9b85-33cf132096ea does not exist
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.892 254904 DEBUG nova.compute.manager [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.893 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764675081.8922617, 300cc277-5780-4174-88ed-a942194a10b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.894 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 300cc277-5780-4174-88ed-a942194a10b9] VM Started (Lifecycle Event)#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.899 254904 DEBUG nova.virt.libvirt.driver [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.904 254904 INFO nova.virt.libvirt.driver [-] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Instance spawned successfully.#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.904 254904 DEBUG nova.virt.libvirt.driver [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.930 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.934 254904 DEBUG nova.virt.libvirt.driver [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.934 254904 DEBUG nova.virt.libvirt.driver [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.935 254904 DEBUG nova.virt.libvirt.driver [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.936 254904 DEBUG nova.virt.libvirt.driver [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.936 254904 DEBUG nova.virt.libvirt.driver [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.937 254904 DEBUG nova.virt.libvirt.driver [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.943 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.986 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 300cc277-5780-4174-88ed-a942194a10b9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.986 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764675081.8962522, 300cc277-5780-4174-88ed-a942194a10b9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:31:21 np0005542249 nova_compute[254900]: 2025-12-02 11:31:21.986 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 300cc277-5780-4174-88ed-a942194a10b9] VM Paused (Lifecycle Event)#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.010 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.013 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764675081.8990219, 300cc277-5780-4174-88ed-a942194a10b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.014 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 300cc277-5780-4174-88ed-a942194a10b9] VM Resumed (Lifecycle Event)#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.019 254904 INFO nova.compute.manager [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Took 4.82 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.020 254904 DEBUG nova.compute.manager [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.032 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.035 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.061 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 300cc277-5780-4174-88ed-a942194a10b9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.081 254904 INFO nova.compute.manager [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Took 7.07 seconds to build instance.#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.100 254904 DEBUG oslo_concurrency.lockutils [None req-9b8e0ac1-334c-463f-a9ad-589bda053c1b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "300cc277-5780-4174-88ed-a942194a10b9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:31:22 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1652: 321 pgs: 321 active+clean; 283 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 164 KiB/s rd, 9.4 MiB/s wr, 67 op/s
Dec  2 06:31:22 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:31:22 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.845 254904 DEBUG nova.network.neutron [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Updating instance_info_cache with network_info: [{"id": "9e69b326-3b7f-4fee-8050-9ff725b75719", "address": "fa:16:3e:12:b1:85", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e69b326-3b", "ovs_interfaceid": "9e69b326-3b7f-4fee-8050-9ff725b75719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.865 254904 DEBUG oslo_concurrency.lockutils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Releasing lock "refresh_cache-8c463218-c639-4256-b580-f16aaa113a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.865 254904 DEBUG nova.compute.manager [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Instance network_info: |[{"id": "9e69b326-3b7f-4fee-8050-9ff725b75719", "address": "fa:16:3e:12:b1:85", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e69b326-3b", "ovs_interfaceid": "9e69b326-3b7f-4fee-8050-9ff725b75719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.866 254904 DEBUG oslo_concurrency.lockutils [req-e12c8cbe-ccd7-4dc3-9c2e-874a7624dea9 req-fc47aab5-bbc4-494f-81b2-377562caca44 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-8c463218-c639-4256-b580-f16aaa113a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.866 254904 DEBUG nova.network.neutron [req-e12c8cbe-ccd7-4dc3-9c2e-874a7624dea9 req-fc47aab5-bbc4-494f-81b2-377562caca44 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Refreshing network info cache for port 9e69b326-3b7f-4fee-8050-9ff725b75719 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.871 254904 DEBUG nova.virt.libvirt.driver [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Start _get_guest_xml network_info=[{"id": "9e69b326-3b7f-4fee-8050-9ff725b75719", "address": "fa:16:3e:12:b1:85", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e69b326-3b", "ovs_interfaceid": "9e69b326-3b7f-4fee-8050-9ff725b75719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-aca02ad2-47f5-4d77-9df0-2c95a1cb88a2', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'aca02ad2-47f5-4d77-9df0-2c95a1cb88a2', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '8c463218-c639-4256-b580-f16aaa113a7f', 'attached_at': '', 'detached_at': '', 'volume_id': 'aca02ad2-47f5-4d77-9df0-2c95a1cb88a2', 'serial': 'aca02ad2-47f5-4d77-9df0-2c95a1cb88a2'}, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'attachment_id': 'caf30f54-1de9-49c1-82e5-3ba6e127696f', 'delete_on_termination': False, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:31:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.877 254904 WARNING nova.virt.libvirt.driver [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.883 254904 DEBUG nova.virt.libvirt.host [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.884 254904 DEBUG nova.virt.libvirt.host [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.894 254904 DEBUG nova.virt.libvirt.host [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.895 254904 DEBUG nova.virt.libvirt.host [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.897 254904 DEBUG nova.virt.libvirt.driver [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.898 254904 DEBUG nova.virt.hardware [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.899 254904 DEBUG nova.virt.hardware [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.899 254904 DEBUG nova.virt.hardware [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.899 254904 DEBUG nova.virt.hardware [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.900 254904 DEBUG nova.virt.hardware [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.900 254904 DEBUG nova.virt.hardware [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.900 254904 DEBUG nova.virt.hardware [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.901 254904 DEBUG nova.virt.hardware [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.901 254904 DEBUG nova.virt.hardware [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.901 254904 DEBUG nova.virt.hardware [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.902 254904 DEBUG nova.virt.hardware [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.935 254904 DEBUG nova.storage.rbd_utils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] rbd image 8c463218-c639-4256-b580-f16aaa113a7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:31:22 np0005542249 nova_compute[254900]: 2025-12-02 11:31:22.941 254904 DEBUG oslo_concurrency.processutils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:31:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:31:23 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1970211112' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.489 254904 DEBUG oslo_concurrency.processutils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.593 254904 DEBUG os_brick.encryptors [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Using volume encryption metadata '{'encryption_key_id': '5278e4ec-0fd9-4356-8fed-71e06c8cc2e6', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-aca02ad2-47f5-4d77-9df0-2c95a1cb88a2', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'aca02ad2-47f5-4d77-9df0-2c95a1cb88a2', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '8c463218-c639-4256-b580-f16aaa113a7f', 'attached_at': '', 'detached_at': '', 'volume_id': 'aca02ad2-47f5-4d77-9df0-2c95a1cb88a2', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.596 254904 DEBUG barbicanclient.client [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.612 254904 DEBUG barbicanclient.v1.secrets [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/5278e4ec-0fd9-4356-8fed-71e06c8cc2e6 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.613 254904 INFO barbicanclient.base [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/5278e4ec-0fd9-4356-8fed-71e06c8cc2e6#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.638 254904 DEBUG nova.compute.manager [req-c8d5022c-41e5-49de-b189-832dbca79bd7 req-b26f28ca-c968-4574-b895-27dd304408d8 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Received event network-vif-plugged-8ac6e5bf-779d-409f-90e8-7ee2dbf72367 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.639 254904 DEBUG oslo_concurrency.lockutils [req-c8d5022c-41e5-49de-b189-832dbca79bd7 req-b26f28ca-c968-4574-b895-27dd304408d8 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "300cc277-5780-4174-88ed-a942194a10b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.639 254904 DEBUG oslo_concurrency.lockutils [req-c8d5022c-41e5-49de-b189-832dbca79bd7 req-b26f28ca-c968-4574-b895-27dd304408d8 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "300cc277-5780-4174-88ed-a942194a10b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.640 254904 DEBUG oslo_concurrency.lockutils [req-c8d5022c-41e5-49de-b189-832dbca79bd7 req-b26f28ca-c968-4574-b895-27dd304408d8 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "300cc277-5780-4174-88ed-a942194a10b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.640 254904 DEBUG nova.compute.manager [req-c8d5022c-41e5-49de-b189-832dbca79bd7 req-b26f28ca-c968-4574-b895-27dd304408d8 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] No waiting events found dispatching network-vif-plugged-8ac6e5bf-779d-409f-90e8-7ee2dbf72367 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.640 254904 WARNING nova.compute.manager [req-c8d5022c-41e5-49de-b189-832dbca79bd7 req-b26f28ca-c968-4574-b895-27dd304408d8 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Received unexpected event network-vif-plugged-8ac6e5bf-779d-409f-90e8-7ee2dbf72367 for instance with vm_state active and task_state None.#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.644 254904 DEBUG barbicanclient.client [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.645 254904 INFO barbicanclient.base [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/5278e4ec-0fd9-4356-8fed-71e06c8cc2e6#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.666 254904 DEBUG barbicanclient.client [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.667 254904 INFO barbicanclient.base [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/5278e4ec-0fd9-4356-8fed-71e06c8cc2e6#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.686 254904 DEBUG barbicanclient.client [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.687 254904 INFO barbicanclient.base [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/5278e4ec-0fd9-4356-8fed-71e06c8cc2e6#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.706 254904 DEBUG barbicanclient.client [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.707 254904 INFO barbicanclient.base [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/5278e4ec-0fd9-4356-8fed-71e06c8cc2e6#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.733 254904 DEBUG barbicanclient.client [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.734 254904 INFO barbicanclient.base [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/5278e4ec-0fd9-4356-8fed-71e06c8cc2e6#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.849 254904 DEBUG barbicanclient.client [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.850 254904 INFO barbicanclient.base [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/5278e4ec-0fd9-4356-8fed-71e06c8cc2e6#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.876 254904 DEBUG barbicanclient.client [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.877 254904 INFO barbicanclient.base [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/5278e4ec-0fd9-4356-8fed-71e06c8cc2e6#033[00m
Dec  2 06:31:23 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:23.890 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.899 254904 DEBUG barbicanclient.client [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.899 254904 INFO barbicanclient.base [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/5278e4ec-0fd9-4356-8fed-71e06c8cc2e6#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.916 254904 DEBUG barbicanclient.client [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.916 254904 INFO barbicanclient.base [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/5278e4ec-0fd9-4356-8fed-71e06c8cc2e6#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.954 254904 DEBUG barbicanclient.client [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.954 254904 INFO barbicanclient.base [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/5278e4ec-0fd9-4356-8fed-71e06c8cc2e6#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.986 254904 DEBUG barbicanclient.client [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:31:23 np0005542249 nova_compute[254900]: 2025-12-02 11:31:23.986 254904 INFO barbicanclient.base [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/5278e4ec-0fd9-4356-8fed-71e06c8cc2e6#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.010 254904 DEBUG barbicanclient.client [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.010 254904 INFO barbicanclient.base [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/5278e4ec-0fd9-4356-8fed-71e06c8cc2e6#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.026 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.032 254904 DEBUG barbicanclient.client [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.033 254904 INFO barbicanclient.base [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/5278e4ec-0fd9-4356-8fed-71e06c8cc2e6#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.055 254904 DEBUG barbicanclient.client [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.056 254904 INFO barbicanclient.base [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/5278e4ec-0fd9-4356-8fed-71e06c8cc2e6#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.080 254904 DEBUG barbicanclient.client [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.081 254904 DEBUG nova.virt.libvirt.host [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Secret XML: <secret ephemeral="no" private="no">
Dec  2 06:31:24 np0005542249 nova_compute[254900]:  <usage type="volume">
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <volume>aca02ad2-47f5-4d77-9df0-2c95a1cb88a2</volume>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:  </usage>
Dec  2 06:31:24 np0005542249 nova_compute[254900]: </secret>
Dec  2 06:31:24 np0005542249 nova_compute[254900]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.126 254904 DEBUG nova.virt.libvirt.vif [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:31:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-797466503',display_name='tempest-TransferEncryptedVolumeTest-server-797466503',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-797466503',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFgf6Glrr4mOxvEUPVgKAzbMgkblDirrH2khctJvQZz94LiWsbLQ82mzb2Y9Rt3SfEin9Hpb/echGrZ3sf80MCARzFat1FEJ1OH4HmTSACHUm+YJhtZATdTFOe7LkwhRFQ==',key_name='tempest-TransferEncryptedVolumeTest-1310282930',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a893d0c223f746328e706d7491d73b20',ramdisk_id='',reservation_id='r-jc3gpsea',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1499588457',owner_user_name='tempest-TransferEncryptedVolumeTest-1499588457-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:31:18Z,user_data=None,user_id='1caa62e7ee8b42be98bc34780a7197f9',uuid=8c463218-c639-4256-b580-f16aaa113a7f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e69b326-3b7f-4fee-8050-9ff725b75719", "address": "fa:16:3e:12:b1:85", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e69b326-3b", "ovs_interfaceid": "9e69b326-3b7f-4fee-8050-9ff725b75719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.128 254904 DEBUG nova.network.os_vif_util [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Converting VIF {"id": "9e69b326-3b7f-4fee-8050-9ff725b75719", "address": "fa:16:3e:12:b1:85", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e69b326-3b", "ovs_interfaceid": "9e69b326-3b7f-4fee-8050-9ff725b75719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.129 254904 DEBUG nova.network.os_vif_util [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:b1:85,bridge_name='br-int',has_traffic_filtering=True,id=9e69b326-3b7f-4fee-8050-9ff725b75719,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e69b326-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.132 254904 DEBUG nova.objects.instance [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8c463218-c639-4256-b580-f16aaa113a7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.157 254904 DEBUG nova.virt.libvirt.driver [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:31:24 np0005542249 nova_compute[254900]:  <uuid>8c463218-c639-4256-b580-f16aaa113a7f</uuid>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:  <name>instance-0000001a</name>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-797466503</nova:name>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:31:22</nova:creationTime>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:31:24 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:        <nova:user uuid="1caa62e7ee8b42be98bc34780a7197f9">tempest-TransferEncryptedVolumeTest-1499588457-project-member</nova:user>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:        <nova:project uuid="a893d0c223f746328e706d7491d73b20">tempest-TransferEncryptedVolumeTest-1499588457</nova:project>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:        <nova:port uuid="9e69b326-3b7f-4fee-8050-9ff725b75719">
Dec  2 06:31:24 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <entry name="serial">8c463218-c639-4256-b580-f16aaa113a7f</entry>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <entry name="uuid">8c463218-c639-4256-b580-f16aaa113a7f</entry>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/8c463218-c639-4256-b580-f16aaa113a7f_disk.config">
Dec  2 06:31:24 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:31:24 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="volumes/volume-aca02ad2-47f5-4d77-9df0-2c95a1cb88a2">
Dec  2 06:31:24 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:31:24 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <serial>aca02ad2-47f5-4d77-9df0-2c95a1cb88a2</serial>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <encryption format="luks">
Dec  2 06:31:24 np0005542249 nova_compute[254900]:        <secret type="passphrase" uuid="ec522058-2405-4576-a701-790740e6c389"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      </encryption>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:12:b1:85"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <target dev="tap9e69b326-3b"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/8c463218-c639-4256-b580-f16aaa113a7f/console.log" append="off"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:31:24 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:31:24 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:31:24 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:31:24 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.158 254904 DEBUG nova.compute.manager [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Preparing to wait for external event network-vif-plugged-9e69b326-3b7f-4fee-8050-9ff725b75719 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.159 254904 DEBUG oslo_concurrency.lockutils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "8c463218-c639-4256-b580-f16aaa113a7f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.159 254904 DEBUG oslo_concurrency.lockutils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "8c463218-c639-4256-b580-f16aaa113a7f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.160 254904 DEBUG oslo_concurrency.lockutils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "8c463218-c639-4256-b580-f16aaa113a7f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.161 254904 DEBUG nova.virt.libvirt.vif [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:31:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-797466503',display_name='tempest-TransferEncryptedVolumeTest-server-797466503',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-797466503',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFgf6Glrr4mOxvEUPVgKAzbMgkblDirrH2khctJvQZz94LiWsbLQ82mzb2Y9Rt3SfEin9Hpb/echGrZ3sf80MCARzFat1FEJ1OH4HmTSACHUm+YJhtZATdTFOe7LkwhRFQ==',key_name='tempest-TransferEncryptedVolumeTest-1310282930',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a893d0c223f746328e706d7491d73b20',ramdisk_id='',reservation_id='r-jc3gpsea',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1499588457',owner_user_name='tempest-TransferEncryptedVolumeTest-1499588457-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:31:18Z,user_data=None,user_id='1caa62e7ee8b42be98bc34780a7197f9',uuid=8c463218-c639-4256-b580-f16aaa113a7f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e69b326-3b7f-4fee-8050-9ff725b75719", "address": "fa:16:3e:12:b1:85", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e69b326-3b", "ovs_interfaceid": "9e69b326-3b7f-4fee-8050-9ff725b75719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.162 254904 DEBUG nova.network.os_vif_util [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Converting VIF {"id": "9e69b326-3b7f-4fee-8050-9ff725b75719", "address": "fa:16:3e:12:b1:85", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e69b326-3b", "ovs_interfaceid": "9e69b326-3b7f-4fee-8050-9ff725b75719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.163 254904 DEBUG nova.network.os_vif_util [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:b1:85,bridge_name='br-int',has_traffic_filtering=True,id=9e69b326-3b7f-4fee-8050-9ff725b75719,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e69b326-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.164 254904 DEBUG os_vif [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:b1:85,bridge_name='br-int',has_traffic_filtering=True,id=9e69b326-3b7f-4fee-8050-9ff725b75719,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e69b326-3b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.165 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.166 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.166 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.172 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.173 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e69b326-3b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.173 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9e69b326-3b, col_values=(('external_ids', {'iface-id': '9e69b326-3b7f-4fee-8050-9ff725b75719', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:12:b1:85', 'vm-uuid': '8c463218-c639-4256-b580-f16aaa113a7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.175 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:24 np0005542249 NetworkManager[48987]: <info>  [1764675084.1765] manager: (tap9e69b326-3b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.179 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.182 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.184 254904 INFO os_vif [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:b1:85,bridge_name='br-int',has_traffic_filtering=True,id=9e69b326-3b7f-4fee-8050-9ff725b75719,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e69b326-3b')#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.266 254904 DEBUG nova.network.neutron [req-e12c8cbe-ccd7-4dc3-9c2e-874a7624dea9 req-fc47aab5-bbc4-494f-81b2-377562caca44 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Updated VIF entry in instance network info cache for port 9e69b326-3b7f-4fee-8050-9ff725b75719. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.267 254904 DEBUG nova.network.neutron [req-e12c8cbe-ccd7-4dc3-9c2e-874a7624dea9 req-fc47aab5-bbc4-494f-81b2-377562caca44 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Updating instance_info_cache with network_info: [{"id": "9e69b326-3b7f-4fee-8050-9ff725b75719", "address": "fa:16:3e:12:b1:85", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e69b326-3b", "ovs_interfaceid": "9e69b326-3b7f-4fee-8050-9ff725b75719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.275 254904 DEBUG nova.virt.libvirt.driver [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.275 254904 DEBUG nova.virt.libvirt.driver [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.276 254904 DEBUG nova.virt.libvirt.driver [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] No VIF found with MAC fa:16:3e:12:b1:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.277 254904 INFO nova.virt.libvirt.driver [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Using config drive#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.316 254904 DEBUG nova.storage.rbd_utils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] rbd image 8c463218-c639-4256-b580-f16aaa113a7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.324 254904 DEBUG oslo_concurrency.lockutils [req-e12c8cbe-ccd7-4dc3-9c2e-874a7624dea9 req-fc47aab5-bbc4-494f-81b2-377562caca44 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-8c463218-c639-4256-b580-f16aaa113a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:31:24 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1653: 321 pgs: 321 active+clean; 283 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 738 KiB/s rd, 9.4 MiB/s wr, 97 op/s
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.715 254904 INFO nova.virt.libvirt.driver [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Creating config drive at /var/lib/nova/instances/8c463218-c639-4256-b580-f16aaa113a7f/disk.config#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.723 254904 DEBUG oslo_concurrency.processutils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8c463218-c639-4256-b580-f16aaa113a7f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpae0jeqxs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.874 254904 DEBUG oslo_concurrency.processutils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8c463218-c639-4256-b580-f16aaa113a7f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpae0jeqxs" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.924 254904 DEBUG nova.storage.rbd_utils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] rbd image 8c463218-c639-4256-b580-f16aaa113a7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:31:24 np0005542249 nova_compute[254900]: 2025-12-02 11:31:24.931 254904 DEBUG oslo_concurrency.processutils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8c463218-c639-4256-b580-f16aaa113a7f/disk.config 8c463218-c639-4256-b580-f16aaa113a7f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:31:25 np0005542249 nova_compute[254900]: 2025-12-02 11:31:25.152 254904 DEBUG oslo_concurrency.processutils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8c463218-c639-4256-b580-f16aaa113a7f/disk.config 8c463218-c639-4256-b580-f16aaa113a7f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:31:25 np0005542249 nova_compute[254900]: 2025-12-02 11:31:25.154 254904 INFO nova.virt.libvirt.driver [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Deleting local config drive /var/lib/nova/instances/8c463218-c639-4256-b580-f16aaa113a7f/disk.config because it was imported into RBD.#033[00m
Dec  2 06:31:25 np0005542249 kernel: tap9e69b326-3b: entered promiscuous mode
Dec  2 06:31:25 np0005542249 nova_compute[254900]: 2025-12-02 11:31:25.235 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:25 np0005542249 ovn_controller[153849]: 2025-12-02T11:31:25Z|00236|binding|INFO|Claiming lport 9e69b326-3b7f-4fee-8050-9ff725b75719 for this chassis.
Dec  2 06:31:25 np0005542249 ovn_controller[153849]: 2025-12-02T11:31:25Z|00237|binding|INFO|9e69b326-3b7f-4fee-8050-9ff725b75719: Claiming fa:16:3e:12:b1:85 10.100.0.6
Dec  2 06:31:25 np0005542249 NetworkManager[48987]: <info>  [1764675085.2375] manager: (tap9e69b326-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/128)
Dec  2 06:31:25 np0005542249 nova_compute[254900]: 2025-12-02 11:31:25.254 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:25 np0005542249 ovn_controller[153849]: 2025-12-02T11:31:25Z|00238|binding|INFO|Setting lport 9e69b326-3b7f-4fee-8050-9ff725b75719 ovn-installed in OVS
Dec  2 06:31:25 np0005542249 nova_compute[254900]: 2025-12-02 11:31:25.257 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:25 np0005542249 ovn_controller[153849]: 2025-12-02T11:31:25Z|00239|binding|INFO|Setting lport 9e69b326-3b7f-4fee-8050-9ff725b75719 up in Southbound
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.267 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:b1:85 10.100.0.6'], port_security=['fa:16:3e:12:b1:85 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8c463218-c639-4256-b580-f16aaa113a7f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a893d0c223f746328e706d7491d73b20', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3d9ac6f0-d677-4372-91b9-169b8de31d1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9a246c4-d9fe-402e-8fa6-6099b55c4866, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=9e69b326-3b7f-4fee-8050-9ff725b75719) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.268 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 9e69b326-3b7f-4fee-8050-9ff725b75719 in datapath 4f9f73cb-9730-4829-ae15-1f03b97e60f8 bound to our chassis#033[00m
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.271 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4f9f73cb-9730-4829-ae15-1f03b97e60f8#033[00m
Dec  2 06:31:25 np0005542249 systemd-udevd[291961]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.288 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[1aab1b36-9e54-4df1-bafe-b4d70f2209db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.289 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4f9f73cb-91 in ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 06:31:25 np0005542249 NetworkManager[48987]: <info>  [1764675085.2908] device (tap9e69b326-3b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:31:25 np0005542249 NetworkManager[48987]: <info>  [1764675085.2918] device (tap9e69b326-3b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.295 262398 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4f9f73cb-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.295 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[6bcbfc24-1019-4fa6-be65-22003a57072f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.296 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f71b57-4eef-4208-a946-931c8181a92f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:25 np0005542249 systemd-machined[216222]: New machine qemu-26-instance-0000001a.
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.316 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[5ae58e02-5d85-40a6-9dba-0d49ef14e61e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:25 np0005542249 systemd[1]: Started Virtual Machine qemu-26-instance-0000001a.
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.342 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[a559f9dd-39b5-48ba-b704-b55a56146363]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.394 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[d67bc716-a3cd-456f-a4e1-42e5e29bd0cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.404 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1f4859-9c83-47e4-b6c7-2e0ade788711]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:25 np0005542249 systemd-udevd[291965]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:31:25 np0005542249 NetworkManager[48987]: <info>  [1764675085.4122] manager: (tap4f9f73cb-90): new Veth device (/org/freedesktop/NetworkManager/Devices/129)
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.461 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[ba193346-2486-441a-b002-042b96a375ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.467 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[d90f8cb7-7334-4489-a685-8d2119af0e27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:25 np0005542249 NetworkManager[48987]: <info>  [1764675085.4970] device (tap4f9f73cb-90): carrier: link connected
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.507 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[13b59bdd-8104-4535-9acb-e7dfda2108ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.536 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[147926bb-e4b3-44fc-815b-fb217ba25b48]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f9f73cb-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:ed:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527832, 'reachable_time': 22992, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291995, 'error': None, 'target': 'ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.559 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[63543f06-f3f2-4a16-ac82-3cb86051f828]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:edbf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527832, 'tstamp': 527832}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291996, 'error': None, 'target': 'ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.587 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[e8b2bd4a-0d29-4c33-a63f-12cd41762183]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f9f73cb-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:ed:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527832, 'reachable_time': 22992, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291997, 'error': None, 'target': 'ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.649 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[891820b1-d1d0-43de-949e-26dba891e363]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.750 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[b3d17a78-604a-4ca2-9f80-48f4a6e0e7ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.752 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f9f73cb-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.753 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.754 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f9f73cb-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:31:25 np0005542249 nova_compute[254900]: 2025-12-02 11:31:25.756 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:25 np0005542249 NetworkManager[48987]: <info>  [1764675085.7578] manager: (tap4f9f73cb-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/130)
Dec  2 06:31:25 np0005542249 kernel: tap4f9f73cb-90: entered promiscuous mode
Dec  2 06:31:25 np0005542249 nova_compute[254900]: 2025-12-02 11:31:25.761 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.763 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4f9f73cb-90, col_values=(('external_ids', {'iface-id': '244504fe-2e21-493b-8e56-0db40be1f53e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:31:25 np0005542249 nova_compute[254900]: 2025-12-02 11:31:25.765 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:25 np0005542249 ovn_controller[153849]: 2025-12-02T11:31:25Z|00240|binding|INFO|Releasing lport 244504fe-2e21-493b-8e56-0db40be1f53e from this chassis (sb_readonly=0)
Dec  2 06:31:25 np0005542249 nova_compute[254900]: 2025-12-02 11:31:25.789 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.791 163757 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4f9f73cb-9730-4829-ae15-1f03b97e60f8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4f9f73cb-9730-4829-ae15-1f03b97e60f8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.793 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[34836547-dde4-4882-9ca8-1fedea7dc938]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.794 163757 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: global
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]:    log         /dev/log local0 debug
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]:    log-tag     haproxy-metadata-proxy-4f9f73cb-9730-4829-ae15-1f03b97e60f8
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]:    user        root
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]:    group       root
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]:    maxconn     1024
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]:    pidfile     /var/lib/neutron/external/pids/4f9f73cb-9730-4829-ae15-1f03b97e60f8.pid.haproxy
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]:    daemon
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: defaults
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]:    log global
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]:    mode http
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]:    option httplog
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]:    option dontlognull
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]:    option http-server-close
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]:    option forwardfor
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]:    retries                 3
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]:    timeout http-request    30s
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]:    timeout connect         30s
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]:    timeout client          32s
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]:    timeout server          32s
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]:    timeout http-keep-alive 30s
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: listen listener
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]:    bind 169.254.169.254:80
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]:    http-request add-header X-OVN-Network-ID 4f9f73cb-9730-4829-ae15-1f03b97e60f8
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 06:31:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:25.798 163757 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'env', 'PROCESS_TAG=haproxy-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4f9f73cb-9730-4829-ae15-1f03b97e60f8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 06:31:25 np0005542249 nova_compute[254900]: 2025-12-02 11:31:25.900 254904 DEBUG nova.compute.manager [req-d974f3a6-1cdf-40b3-98d1-1fec2eb1b196 req-e244315d-20db-4f24-bd2a-cf9a0988ffd5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Received event network-vif-plugged-9e69b326-3b7f-4fee-8050-9ff725b75719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:31:25 np0005542249 nova_compute[254900]: 2025-12-02 11:31:25.901 254904 DEBUG oslo_concurrency.lockutils [req-d974f3a6-1cdf-40b3-98d1-1fec2eb1b196 req-e244315d-20db-4f24-bd2a-cf9a0988ffd5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "8c463218-c639-4256-b580-f16aaa113a7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:31:25 np0005542249 nova_compute[254900]: 2025-12-02 11:31:25.902 254904 DEBUG oslo_concurrency.lockutils [req-d974f3a6-1cdf-40b3-98d1-1fec2eb1b196 req-e244315d-20db-4f24-bd2a-cf9a0988ffd5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "8c463218-c639-4256-b580-f16aaa113a7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:31:25 np0005542249 nova_compute[254900]: 2025-12-02 11:31:25.902 254904 DEBUG oslo_concurrency.lockutils [req-d974f3a6-1cdf-40b3-98d1-1fec2eb1b196 req-e244315d-20db-4f24-bd2a-cf9a0988ffd5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "8c463218-c639-4256-b580-f16aaa113a7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:31:25 np0005542249 nova_compute[254900]: 2025-12-02 11:31:25.902 254904 DEBUG nova.compute.manager [req-d974f3a6-1cdf-40b3-98d1-1fec2eb1b196 req-e244315d-20db-4f24-bd2a-cf9a0988ffd5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Processing event network-vif-plugged-9e69b326-3b7f-4fee-8050-9ff725b75719 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 06:31:26 np0005542249 podman[292064]: 2025-12-02 11:31:26.283488513 +0000 UTC m=+0.070734157 container create 32c31c223920c6c63979a668588a0caa2ec8fb379ccea27c7638d34e53185106 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:31:26 np0005542249 systemd[1]: Started libpod-conmon-32c31c223920c6c63979a668588a0caa2ec8fb379ccea27c7638d34e53185106.scope.
Dec  2 06:31:26 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1654: 321 pgs: 321 active+clean; 283 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 723 KiB/s rd, 4.0 MiB/s wr, 75 op/s
Dec  2 06:31:26 np0005542249 podman[292064]: 2025-12-02 11:31:26.249741869 +0000 UTC m=+0.036987543 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:31:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:31:26
Dec  2 06:31:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:31:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:31:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'default.rgw.meta', 'backups', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', '.mgr']
Dec  2 06:31:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:31:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:31:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:31:26 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:31:26 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/729cc689fad9db3e1a3a2896b8e9f50dd5ca6c26df7d3b19b177adc37b87f239/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 06:31:26 np0005542249 podman[292064]: 2025-12-02 11:31:26.402235557 +0000 UTC m=+0.189481221 container init 32c31c223920c6c63979a668588a0caa2ec8fb379ccea27c7638d34e53185106 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec  2 06:31:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:31:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:31:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:31:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:31:26 np0005542249 podman[292064]: 2025-12-02 11:31:26.411902749 +0000 UTC m=+0.199148393 container start 32c31c223920c6c63979a668588a0caa2ec8fb379ccea27c7638d34e53185106 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  2 06:31:26 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[292080]: [NOTICE]   (292084) : New worker (292086) forked
Dec  2 06:31:26 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[292080]: [NOTICE]   (292084) : Loading success.
Dec  2 06:31:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:31:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:31:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:31:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:31:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:31:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:31:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:31:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:31:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:31:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:31:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:31:27 np0005542249 nova_compute[254900]: 2025-12-02 11:31:27.974 254904 DEBUG nova.compute.manager [req-1a72309b-0065-4f45-aff5-b71acd2252ac req-e92c79df-dd7c-4af3-a0ec-0b3256e24ca8 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Received event network-vif-plugged-9e69b326-3b7f-4fee-8050-9ff725b75719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:31:27 np0005542249 nova_compute[254900]: 2025-12-02 11:31:27.975 254904 DEBUG oslo_concurrency.lockutils [req-1a72309b-0065-4f45-aff5-b71acd2252ac req-e92c79df-dd7c-4af3-a0ec-0b3256e24ca8 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "8c463218-c639-4256-b580-f16aaa113a7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:31:27 np0005542249 nova_compute[254900]: 2025-12-02 11:31:27.976 254904 DEBUG oslo_concurrency.lockutils [req-1a72309b-0065-4f45-aff5-b71acd2252ac req-e92c79df-dd7c-4af3-a0ec-0b3256e24ca8 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "8c463218-c639-4256-b580-f16aaa113a7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:31:27 np0005542249 nova_compute[254900]: 2025-12-02 11:31:27.976 254904 DEBUG oslo_concurrency.lockutils [req-1a72309b-0065-4f45-aff5-b71acd2252ac req-e92c79df-dd7c-4af3-a0ec-0b3256e24ca8 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "8c463218-c639-4256-b580-f16aaa113a7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:31:27 np0005542249 nova_compute[254900]: 2025-12-02 11:31:27.976 254904 DEBUG nova.compute.manager [req-1a72309b-0065-4f45-aff5-b71acd2252ac req-e92c79df-dd7c-4af3-a0ec-0b3256e24ca8 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] No waiting events found dispatching network-vif-plugged-9e69b326-3b7f-4fee-8050-9ff725b75719 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:31:27 np0005542249 nova_compute[254900]: 2025-12-02 11:31:27.976 254904 WARNING nova.compute.manager [req-1a72309b-0065-4f45-aff5-b71acd2252ac req-e92c79df-dd7c-4af3-a0ec-0b3256e24ca8 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Received unexpected event network-vif-plugged-9e69b326-3b7f-4fee-8050-9ff725b75719 for instance with vm_state building and task_state spawning.#033[00m
Dec  2 06:31:27 np0005542249 nova_compute[254900]: 2025-12-02 11:31:27.998 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764675087.9979198, 8c463218-c639-4256-b580-f16aaa113a7f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:31:27 np0005542249 nova_compute[254900]: 2025-12-02 11:31:27.999 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] VM Started (Lifecycle Event)#033[00m
Dec  2 06:31:28 np0005542249 podman[292099]: 2025-12-02 11:31:27.999826035 +0000 UTC m=+0.081067306 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  2 06:31:28 np0005542249 nova_compute[254900]: 2025-12-02 11:31:28.003 254904 DEBUG nova.compute.manager [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 06:31:28 np0005542249 nova_compute[254900]: 2025-12-02 11:31:28.009 254904 DEBUG nova.virt.libvirt.driver [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 06:31:28 np0005542249 nova_compute[254900]: 2025-12-02 11:31:28.013 254904 INFO nova.virt.libvirt.driver [-] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Instance spawned successfully.#033[00m
Dec  2 06:31:28 np0005542249 nova_compute[254900]: 2025-12-02 11:31:28.014 254904 DEBUG nova.virt.libvirt.driver [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 06:31:28 np0005542249 nova_compute[254900]: 2025-12-02 11:31:28.016 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:31:28 np0005542249 nova_compute[254900]: 2025-12-02 11:31:28.020 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:31:28 np0005542249 nova_compute[254900]: 2025-12-02 11:31:28.031 254904 DEBUG nova.virt.libvirt.driver [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:31:28 np0005542249 nova_compute[254900]: 2025-12-02 11:31:28.032 254904 DEBUG nova.virt.libvirt.driver [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:31:28 np0005542249 nova_compute[254900]: 2025-12-02 11:31:28.032 254904 DEBUG nova.virt.libvirt.driver [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:31:28 np0005542249 nova_compute[254900]: 2025-12-02 11:31:28.033 254904 DEBUG nova.virt.libvirt.driver [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:31:28 np0005542249 nova_compute[254900]: 2025-12-02 11:31:28.033 254904 DEBUG nova.virt.libvirt.driver [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:31:28 np0005542249 nova_compute[254900]: 2025-12-02 11:31:28.033 254904 DEBUG nova.virt.libvirt.driver [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:31:28 np0005542249 nova_compute[254900]: 2025-12-02 11:31:28.037 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:31:28 np0005542249 nova_compute[254900]: 2025-12-02 11:31:28.037 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764675088.0012655, 8c463218-c639-4256-b580-f16aaa113a7f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:31:28 np0005542249 nova_compute[254900]: 2025-12-02 11:31:28.037 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] VM Paused (Lifecycle Event)#033[00m
Dec  2 06:31:28 np0005542249 podman[292101]: 2025-12-02 11:31:28.050438556 +0000 UTC m=+0.118258204 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  2 06:31:28 np0005542249 nova_compute[254900]: 2025-12-02 11:31:28.060 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:31:28 np0005542249 nova_compute[254900]: 2025-12-02 11:31:28.064 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764675088.0071661, 8c463218-c639-4256-b580-f16aaa113a7f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:31:28 np0005542249 nova_compute[254900]: 2025-12-02 11:31:28.064 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] VM Resumed (Lifecycle Event)#033[00m
Dec  2 06:31:28 np0005542249 nova_compute[254900]: 2025-12-02 11:31:28.082 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:31:28 np0005542249 nova_compute[254900]: 2025-12-02 11:31:28.086 254904 INFO nova.compute.manager [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Took 8.79 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 06:31:28 np0005542249 nova_compute[254900]: 2025-12-02 11:31:28.087 254904 DEBUG nova.compute.manager [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:31:28 np0005542249 nova_compute[254900]: 2025-12-02 11:31:28.089 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:31:28 np0005542249 nova_compute[254900]: 2025-12-02 11:31:28.114 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:31:28 np0005542249 nova_compute[254900]: 2025-12-02 11:31:28.143 254904 INFO nova.compute.manager [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Took 10.88 seconds to build instance.#033[00m
Dec  2 06:31:28 np0005542249 nova_compute[254900]: 2025-12-02 11:31:28.160 254904 DEBUG oslo_concurrency.lockutils [None req-f5b12ddb-09d4-4076-b23b-c1b05990be8f 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "8c463218-c639-4256-b580-f16aaa113a7f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:31:28 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1655: 321 pgs: 321 active+clean; 283 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.0 MiB/s wr, 121 op/s
Dec  2 06:31:29 np0005542249 nova_compute[254900]: 2025-12-02 11:31:29.029 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:29 np0005542249 nova_compute[254900]: 2025-12-02 11:31:29.176 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:29 np0005542249 nova_compute[254900]: 2025-12-02 11:31:29.881 254904 DEBUG nova.compute.manager [req-4f0bfe39-1720-4a85-bfac-d6c246626b00 req-85df43e7-cb6a-4d41-985c-d0a23e3b38a0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Received event network-changed-8ac6e5bf-779d-409f-90e8-7ee2dbf72367 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:31:29 np0005542249 nova_compute[254900]: 2025-12-02 11:31:29.881 254904 DEBUG nova.compute.manager [req-4f0bfe39-1720-4a85-bfac-d6c246626b00 req-85df43e7-cb6a-4d41-985c-d0a23e3b38a0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Refreshing instance network info cache due to event network-changed-8ac6e5bf-779d-409f-90e8-7ee2dbf72367. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:31:29 np0005542249 nova_compute[254900]: 2025-12-02 11:31:29.881 254904 DEBUG oslo_concurrency.lockutils [req-4f0bfe39-1720-4a85-bfac-d6c246626b00 req-85df43e7-cb6a-4d41-985c-d0a23e3b38a0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-300cc277-5780-4174-88ed-a942194a10b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:31:29 np0005542249 nova_compute[254900]: 2025-12-02 11:31:29.882 254904 DEBUG oslo_concurrency.lockutils [req-4f0bfe39-1720-4a85-bfac-d6c246626b00 req-85df43e7-cb6a-4d41-985c-d0a23e3b38a0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-300cc277-5780-4174-88ed-a942194a10b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:31:29 np0005542249 nova_compute[254900]: 2025-12-02 11:31:29.882 254904 DEBUG nova.network.neutron [req-4f0bfe39-1720-4a85-bfac-d6c246626b00 req-85df43e7-cb6a-4d41-985c-d0a23e3b38a0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Refreshing network info cache for port 8ac6e5bf-779d-409f-90e8-7ee2dbf72367 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:31:30 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1656: 321 pgs: 321 active+clean; 283 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 25 KiB/s wr, 89 op/s
Dec  2 06:31:32 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1657: 321 pgs: 321 active+clean; 283 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 25 KiB/s wr, 101 op/s
Dec  2 06:31:32 np0005542249 nova_compute[254900]: 2025-12-02 11:31:32.636 254904 DEBUG nova.network.neutron [req-4f0bfe39-1720-4a85-bfac-d6c246626b00 req-85df43e7-cb6a-4d41-985c-d0a23e3b38a0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Updated VIF entry in instance network info cache for port 8ac6e5bf-779d-409f-90e8-7ee2dbf72367. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:31:32 np0005542249 nova_compute[254900]: 2025-12-02 11:31:32.637 254904 DEBUG nova.network.neutron [req-4f0bfe39-1720-4a85-bfac-d6c246626b00 req-85df43e7-cb6a-4d41-985c-d0a23e3b38a0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Updating instance_info_cache with network_info: [{"id": "8ac6e5bf-779d-409f-90e8-7ee2dbf72367", "address": "fa:16:3e:28:86:70", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ac6e5bf-77", "ovs_interfaceid": "8ac6e5bf-779d-409f-90e8-7ee2dbf72367", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:31:32 np0005542249 nova_compute[254900]: 2025-12-02 11:31:32.666 254904 DEBUG oslo_concurrency.lockutils [req-4f0bfe39-1720-4a85-bfac-d6c246626b00 req-85df43e7-cb6a-4d41-985c-d0a23e3b38a0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-300cc277-5780-4174-88ed-a942194a10b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:31:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:31:34 np0005542249 nova_compute[254900]: 2025-12-02 11:31:34.144 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:34 np0005542249 nova_compute[254900]: 2025-12-02 11:31:34.179 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:34 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1658: 321 pgs: 321 active+clean; 284 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 28 KiB/s wr, 151 op/s
Dec  2 06:31:34 np0005542249 nova_compute[254900]: 2025-12-02 11:31:34.373 254904 DEBUG nova.compute.manager [req-b02c5de5-4e21-4a8a-a044-808669a60729 req-9f1b7074-5ff3-492c-afee-cff4504224cc 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Received event network-changed-9e69b326-3b7f-4fee-8050-9ff725b75719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:31:34 np0005542249 nova_compute[254900]: 2025-12-02 11:31:34.373 254904 DEBUG nova.compute.manager [req-b02c5de5-4e21-4a8a-a044-808669a60729 req-9f1b7074-5ff3-492c-afee-cff4504224cc 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Refreshing instance network info cache due to event network-changed-9e69b326-3b7f-4fee-8050-9ff725b75719. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:31:34 np0005542249 nova_compute[254900]: 2025-12-02 11:31:34.374 254904 DEBUG oslo_concurrency.lockutils [req-b02c5de5-4e21-4a8a-a044-808669a60729 req-9f1b7074-5ff3-492c-afee-cff4504224cc 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-8c463218-c639-4256-b580-f16aaa113a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:31:34 np0005542249 nova_compute[254900]: 2025-12-02 11:31:34.374 254904 DEBUG oslo_concurrency.lockutils [req-b02c5de5-4e21-4a8a-a044-808669a60729 req-9f1b7074-5ff3-492c-afee-cff4504224cc 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-8c463218-c639-4256-b580-f16aaa113a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:31:34 np0005542249 nova_compute[254900]: 2025-12-02 11:31:34.374 254904 DEBUG nova.network.neutron [req-b02c5de5-4e21-4a8a-a044-808669a60729 req-9f1b7074-5ff3-492c-afee-cff4504224cc 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Refreshing network info cache for port 9e69b326-3b7f-4fee-8050-9ff725b75719 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:31:35 np0005542249 nova_compute[254900]: 2025-12-02 11:31:35.579 254904 DEBUG nova.network.neutron [req-b02c5de5-4e21-4a8a-a044-808669a60729 req-9f1b7074-5ff3-492c-afee-cff4504224cc 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Updated VIF entry in instance network info cache for port 9e69b326-3b7f-4fee-8050-9ff725b75719. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:31:35 np0005542249 nova_compute[254900]: 2025-12-02 11:31:35.580 254904 DEBUG nova.network.neutron [req-b02c5de5-4e21-4a8a-a044-808669a60729 req-9f1b7074-5ff3-492c-afee-cff4504224cc 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Updating instance_info_cache with network_info: [{"id": "9e69b326-3b7f-4fee-8050-9ff725b75719", "address": "fa:16:3e:12:b1:85", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e69b326-3b", "ovs_interfaceid": "9e69b326-3b7f-4fee-8050-9ff725b75719", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:31:35 np0005542249 nova_compute[254900]: 2025-12-02 11:31:35.611 254904 DEBUG oslo_concurrency.lockutils [req-b02c5de5-4e21-4a8a-a044-808669a60729 req-9f1b7074-5ff3-492c-afee-cff4504224cc 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-8c463218-c639-4256-b580-f16aaa113a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:31:35 np0005542249 ovn_controller[153849]: 2025-12-02T11:31:35Z|00056|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.6 does not match offer 10.100.0.9
Dec  2 06:31:35 np0005542249 ovn_controller[153849]: 2025-12-02T11:31:35Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:28:86:70 10.100.0.9
Dec  2 06:31:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:31:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:31:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:31:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:31:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 7.312931399361854e-06 of space, bias 1.0, pg target 0.002193879419808556 quantized to 32 (current 32)
Dec  2 06:31:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:31:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029841211458943805 of space, bias 1.0, pg target 0.8952363437683142 quantized to 32 (current 32)
Dec  2 06:31:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:31:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  2 06:31:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:31:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Dec  2 06:31:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:31:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:31:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:31:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:31:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:31:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:31:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:31:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:31:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:31:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:31:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:31:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:31:36 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1659: 321 pgs: 321 active+clean; 284 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 16 KiB/s wr, 117 op/s
Dec  2 06:31:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:31:38 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1660: 321 pgs: 321 active+clean; 293 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 252 KiB/s wr, 154 op/s
Dec  2 06:31:39 np0005542249 nova_compute[254900]: 2025-12-02 11:31:39.173 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:39 np0005542249 nova_compute[254900]: 2025-12-02 11:31:39.180 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:39 np0005542249 ovn_controller[153849]: 2025-12-02T11:31:39Z|00058|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.6 does not match offer 10.100.0.9
Dec  2 06:31:39 np0005542249 ovn_controller[153849]: 2025-12-02T11:31:39Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:28:86:70 10.100.0.9
Dec  2 06:31:40 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1661: 321 pgs: 321 active+clean; 297 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 512 KiB/s wr, 119 op/s
Dec  2 06:31:40 np0005542249 ovn_controller[153849]: 2025-12-02T11:31:40Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:28:86:70 10.100.0.9
Dec  2 06:31:40 np0005542249 ovn_controller[153849]: 2025-12-02T11:31:40Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:28:86:70 10.100.0.9
Dec  2 06:31:42 np0005542249 ovn_controller[153849]: 2025-12-02T11:31:42Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:12:b1:85 10.100.0.6
Dec  2 06:31:42 np0005542249 ovn_controller[153849]: 2025-12-02T11:31:42Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:12:b1:85 10.100.0.6
Dec  2 06:31:42 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1662: 321 pgs: 321 active+clean; 301 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 505 KiB/s wr, 120 op/s
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:42.884281) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675102884340, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2022, "num_deletes": 256, "total_data_size": 3081189, "memory_usage": 3124712, "flush_reason": "Manual Compaction"}
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675102901172, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 1871091, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31811, "largest_seqno": 33832, "table_properties": {"data_size": 1864235, "index_size": 3610, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 18313, "raw_average_key_size": 21, "raw_value_size": 1848882, "raw_average_value_size": 2144, "num_data_blocks": 164, "num_entries": 862, "num_filter_entries": 862, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764674917, "oldest_key_time": 1764674917, "file_creation_time": 1764675102, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 16949 microseconds, and 9558 cpu microseconds.
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:42.901232) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 1871091 bytes OK
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:42.901262) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:42.902971) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:42.902997) EVENT_LOG_v1 {"time_micros": 1764675102902989, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:42.903051) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 3072563, prev total WAL file size 3072563, number of live WAL files 2.
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:42.904636) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303031' seq:72057594037927935, type:22 .. '6D6772737461740031323533' seq:0, type:0; will stop at (end)
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(1827KB)], [65(10MB)]
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675102904688, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 12515016, "oldest_snapshot_seqno": -1}
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6529 keys, 10340358 bytes, temperature: kUnknown
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675102985844, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 10340358, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10292196, "index_size": 30706, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 163027, "raw_average_key_size": 24, "raw_value_size": 10170572, "raw_average_value_size": 1557, "num_data_blocks": 1243, "num_entries": 6529, "num_filter_entries": 6529, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672515, "oldest_key_time": 0, "file_creation_time": 1764675102, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:42.986299) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 10340358 bytes
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:42.988225) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.0 rd, 127.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 10.2 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(12.2) write-amplify(5.5) OK, records in: 6967, records dropped: 438 output_compression: NoCompression
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:42.988252) EVENT_LOG_v1 {"time_micros": 1764675102988239, "job": 36, "event": "compaction_finished", "compaction_time_micros": 81287, "compaction_time_cpu_micros": 45979, "output_level": 6, "num_output_files": 1, "total_output_size": 10340358, "num_input_records": 6967, "num_output_records": 6529, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675102988838, "job": 36, "event": "table_file_deletion", "file_number": 67}
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675102991401, "job": 36, "event": "table_file_deletion", "file_number": 65}
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:42.904530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:42.991569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:42.991580) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:42.991583) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:42.991587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:31:42 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:42.991590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:31:44 np0005542249 nova_compute[254900]: 2025-12-02 11:31:44.175 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:44 np0005542249 nova_compute[254900]: 2025-12-02 11:31:44.183 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:44 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1663: 321 pgs: 321 active+clean; 345 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 5.0 MiB/s wr, 162 op/s
Dec  2 06:31:45 np0005542249 nova_compute[254900]: 2025-12-02 11:31:45.383 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:31:45 np0005542249 nova_compute[254900]: 2025-12-02 11:31:45.384 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 06:31:46 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1664: 321 pgs: 321 active+clean; 345 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.0 MiB/s wr, 110 op/s
Dec  2 06:31:46 np0005542249 nova_compute[254900]: 2025-12-02 11:31:46.383 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:31:46 np0005542249 nova_compute[254900]: 2025-12-02 11:31:46.384 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 06:31:47 np0005542249 nova_compute[254900]: 2025-12-02 11:31:47.490 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "refresh_cache-64fbd54d-f574-44e6-a788-53938d2219e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:31:47 np0005542249 nova_compute[254900]: 2025-12-02 11:31:47.491 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquired lock "refresh_cache-64fbd54d-f574-44e6-a788-53938d2219e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:31:47 np0005542249 nova_compute[254900]: 2025-12-02 11:31:47.491 254904 DEBUG nova.network.neutron [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  2 06:31:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:31:48 np0005542249 podman[292151]: 2025-12-02 11:31:48.03208751 +0000 UTC m=+0.103770861 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 06:31:48 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1665: 321 pgs: 321 active+clean; 370 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 6.4 MiB/s wr, 128 op/s
Dec  2 06:31:49 np0005542249 nova_compute[254900]: 2025-12-02 11:31:49.178 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:49 np0005542249 nova_compute[254900]: 2025-12-02 11:31:49.185 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:31:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2013093953' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:31:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:31:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2013093953' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:31:50 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1666: 321 pgs: 321 active+clean; 370 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 836 KiB/s rd, 6.1 MiB/s wr, 91 op/s
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.478 254904 DEBUG oslo_concurrency.lockutils [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "8c463218-c639-4256-b580-f16aaa113a7f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.479 254904 DEBUG oslo_concurrency.lockutils [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "8c463218-c639-4256-b580-f16aaa113a7f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.480 254904 DEBUG oslo_concurrency.lockutils [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "8c463218-c639-4256-b580-f16aaa113a7f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.480 254904 DEBUG oslo_concurrency.lockutils [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "8c463218-c639-4256-b580-f16aaa113a7f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.481 254904 DEBUG oslo_concurrency.lockutils [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "8c463218-c639-4256-b580-f16aaa113a7f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.483 254904 INFO nova.compute.manager [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Terminating instance#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.485 254904 DEBUG nova.compute.manager [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:31:50 np0005542249 kernel: tap9e69b326-3b (unregistering): left promiscuous mode
Dec  2 06:31:50 np0005542249 NetworkManager[48987]: <info>  [1764675110.5580] device (tap9e69b326-3b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:31:50 np0005542249 ovn_controller[153849]: 2025-12-02T11:31:50Z|00241|binding|INFO|Releasing lport 9e69b326-3b7f-4fee-8050-9ff725b75719 from this chassis (sb_readonly=0)
Dec  2 06:31:50 np0005542249 ovn_controller[153849]: 2025-12-02T11:31:50Z|00242|binding|INFO|Setting lport 9e69b326-3b7f-4fee-8050-9ff725b75719 down in Southbound
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.605 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:50 np0005542249 ovn_controller[153849]: 2025-12-02T11:31:50Z|00243|binding|INFO|Removing iface tap9e69b326-3b ovn-installed in OVS
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.613 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:50 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:50.621 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:b1:85 10.100.0.6'], port_security=['fa:16:3e:12:b1:85 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8c463218-c639-4256-b580-f16aaa113a7f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a893d0c223f746328e706d7491d73b20', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3d9ac6f0-d677-4372-91b9-169b8de31d1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.218'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9a246c4-d9fe-402e-8fa6-6099b55c4866, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=9e69b326-3b7f-4fee-8050-9ff725b75719) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:31:50 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:50.625 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 9e69b326-3b7f-4fee-8050-9ff725b75719 in datapath 4f9f73cb-9730-4829-ae15-1f03b97e60f8 unbound from our chassis#033[00m
Dec  2 06:31:50 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:50.628 163757 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4f9f73cb-9730-4829-ae15-1f03b97e60f8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 06:31:50 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:50.630 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf63710-ad61-4835-a9a3-f4be5e7eab83]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:50 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:50.631 163757 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8 namespace which is not needed anymore#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.641 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:50 np0005542249 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Dec  2 06:31:50 np0005542249 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000001a.scope: Consumed 17.002s CPU time.
Dec  2 06:31:50 np0005542249 systemd-machined[216222]: Machine qemu-26-instance-0000001a terminated.
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.726 254904 INFO nova.virt.libvirt.driver [-] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Instance destroyed successfully.#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.727 254904 DEBUG nova.objects.instance [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lazy-loading 'resources' on Instance uuid 8c463218-c639-4256-b580-f16aaa113a7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.744 254904 DEBUG nova.virt.libvirt.vif [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:31:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-797466503',display_name='tempest-TransferEncryptedVolumeTest-server-797466503',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-797466503',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFgf6Glrr4mOxvEUPVgKAzbMgkblDirrH2khctJvQZz94LiWsbLQ82mzb2Y9Rt3SfEin9Hpb/echGrZ3sf80MCARzFat1FEJ1OH4HmTSACHUm+YJhtZATdTFOe7LkwhRFQ==',key_name='tempest-TransferEncryptedVolumeTest-1310282930',keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:31:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a893d0c223f746328e706d7491d73b20',ramdisk_id='',reservation_id='r-jc3gpsea',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1499588457',owner_user_name='tempest-TransferEncryptedVolumeTest-1499588457-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:31:28Z,user_data=None,user_id='1caa62e7ee8b42be98bc34780a7197f9',uuid=8c463218-c639-4256-b580-f16aaa113a7f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9e69b326-3b7f-4fee-8050-9ff725b75719", "address": "fa:16:3e:12:b1:85", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e69b326-3b", "ovs_interfaceid": "9e69b326-3b7f-4fee-8050-9ff725b75719", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.745 254904 DEBUG nova.network.os_vif_util [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Converting VIF {"id": "9e69b326-3b7f-4fee-8050-9ff725b75719", "address": "fa:16:3e:12:b1:85", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e69b326-3b", "ovs_interfaceid": "9e69b326-3b7f-4fee-8050-9ff725b75719", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.746 254904 DEBUG nova.network.os_vif_util [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:12:b1:85,bridge_name='br-int',has_traffic_filtering=True,id=9e69b326-3b7f-4fee-8050-9ff725b75719,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e69b326-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.746 254904 DEBUG os_vif [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:b1:85,bridge_name='br-int',has_traffic_filtering=True,id=9e69b326-3b7f-4fee-8050-9ff725b75719,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e69b326-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.748 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.749 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e69b326-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.751 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.754 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.761 254904 INFO os_vif [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:b1:85,bridge_name='br-int',has_traffic_filtering=True,id=9e69b326-3b7f-4fee-8050-9ff725b75719,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e69b326-3b')#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.769 254904 DEBUG nova.network.neutron [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Updating instance_info_cache with network_info: [{"id": "1c8dc20d-b649-4107-a95c-d427a6c8f59a", "address": "fa:16:3e:e1:5e:cc", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c8dc20d-b6", "ovs_interfaceid": "1c8dc20d-b649-4107-a95c-d427a6c8f59a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.796 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Releasing lock "refresh_cache-64fbd54d-f574-44e6-a788-53938d2219e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.796 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.797 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.797 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:31:50 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[292080]: [NOTICE]   (292084) : haproxy version is 2.8.14-c23fe91
Dec  2 06:31:50 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[292080]: [NOTICE]   (292084) : path to executable is /usr/sbin/haproxy
Dec  2 06:31:50 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[292080]: [WARNING]  (292084) : Exiting Master process...
Dec  2 06:31:50 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[292080]: [ALERT]    (292084) : Current worker (292086) exited with code 143 (Terminated)
Dec  2 06:31:50 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[292080]: [WARNING]  (292084) : All workers exited. Exiting... (0)
Dec  2 06:31:50 np0005542249 systemd[1]: libpod-32c31c223920c6c63979a668588a0caa2ec8fb379ccea27c7638d34e53185106.scope: Deactivated successfully.
Dec  2 06:31:50 np0005542249 podman[292206]: 2025-12-02 11:31:50.83675223 +0000 UTC m=+0.062588526 container died 32c31c223920c6c63979a668588a0caa2ec8fb379ccea27c7638d34e53185106 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 06:31:50 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-32c31c223920c6c63979a668588a0caa2ec8fb379ccea27c7638d34e53185106-userdata-shm.mount: Deactivated successfully.
Dec  2 06:31:50 np0005542249 systemd[1]: var-lib-containers-storage-overlay-729cc689fad9db3e1a3a2896b8e9f50dd5ca6c26df7d3b19b177adc37b87f239-merged.mount: Deactivated successfully.
Dec  2 06:31:50 np0005542249 podman[292206]: 2025-12-02 11:31:50.886922459 +0000 UTC m=+0.112758725 container cleanup 32c31c223920c6c63979a668588a0caa2ec8fb379ccea27c7638d34e53185106 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 06:31:50 np0005542249 systemd[1]: libpod-conmon-32c31c223920c6c63979a668588a0caa2ec8fb379ccea27c7638d34e53185106.scope: Deactivated successfully.
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.955 254904 DEBUG nova.compute.manager [req-92dd4325-d3c2-46af-abfc-75970e5cf509 req-12d256b5-19b0-4003-927e-d8ffd0975f91 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Received event network-vif-unplugged-9e69b326-3b7f-4fee-8050-9ff725b75719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.956 254904 DEBUG oslo_concurrency.lockutils [req-92dd4325-d3c2-46af-abfc-75970e5cf509 req-12d256b5-19b0-4003-927e-d8ffd0975f91 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "8c463218-c639-4256-b580-f16aaa113a7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.956 254904 DEBUG oslo_concurrency.lockutils [req-92dd4325-d3c2-46af-abfc-75970e5cf509 req-12d256b5-19b0-4003-927e-d8ffd0975f91 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "8c463218-c639-4256-b580-f16aaa113a7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.957 254904 DEBUG oslo_concurrency.lockutils [req-92dd4325-d3c2-46af-abfc-75970e5cf509 req-12d256b5-19b0-4003-927e-d8ffd0975f91 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "8c463218-c639-4256-b580-f16aaa113a7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.957 254904 DEBUG nova.compute.manager [req-92dd4325-d3c2-46af-abfc-75970e5cf509 req-12d256b5-19b0-4003-927e-d8ffd0975f91 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] No waiting events found dispatching network-vif-unplugged-9e69b326-3b7f-4fee-8050-9ff725b75719 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.957 254904 DEBUG nova.compute.manager [req-92dd4325-d3c2-46af-abfc-75970e5cf509 req-12d256b5-19b0-4003-927e-d8ffd0975f91 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Received event network-vif-unplugged-9e69b326-3b7f-4fee-8050-9ff725b75719 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 06:31:50 np0005542249 podman[292253]: 2025-12-02 11:31:50.968069556 +0000 UTC m=+0.055512414 container remove 32c31c223920c6c63979a668588a0caa2ec8fb379ccea27c7638d34e53185106 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.978 254904 INFO nova.virt.libvirt.driver [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Deleting instance files /var/lib/nova/instances/8c463218-c639-4256-b580-f16aaa113a7f_del#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.979 254904 INFO nova.virt.libvirt.driver [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Deletion of /var/lib/nova/instances/8c463218-c639-4256-b580-f16aaa113a7f_del complete#033[00m
Dec  2 06:31:50 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:50.978 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[b0813cfa-cda5-4ea1-9b53-9f87fd7a49cc]: (4, ('Tue Dec  2 11:31:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8 (32c31c223920c6c63979a668588a0caa2ec8fb379ccea27c7638d34e53185106)\n32c31c223920c6c63979a668588a0caa2ec8fb379ccea27c7638d34e53185106\nTue Dec  2 11:31:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8 (32c31c223920c6c63979a668588a0caa2ec8fb379ccea27c7638d34e53185106)\n32c31c223920c6c63979a668588a0caa2ec8fb379ccea27c7638d34e53185106\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:50 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:50.981 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[26373882-1853-44d4-9c8f-a0633e46744d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:50 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:50.982 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f9f73cb-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:31:50 np0005542249 nova_compute[254900]: 2025-12-02 11:31:50.984 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:50 np0005542249 kernel: tap4f9f73cb-90: left promiscuous mode
Dec  2 06:31:51 np0005542249 nova_compute[254900]: 2025-12-02 11:31:51.004 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:51 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:51.008 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[d11d1f91-14b8-4e00-bcdb-69c363febcd2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:51 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:51.027 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[5fddbde8-3c6c-4006-a367-476b10addee2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:51 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:51.029 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[ce12c9aa-9a5f-4fec-bf2e-0fb99ffcb397]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:51 np0005542249 nova_compute[254900]: 2025-12-02 11:31:51.039 254904 INFO nova.compute.manager [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Took 0.55 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:31:51 np0005542249 nova_compute[254900]: 2025-12-02 11:31:51.040 254904 DEBUG oslo.service.loopingcall [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:31:51 np0005542249 nova_compute[254900]: 2025-12-02 11:31:51.040 254904 DEBUG nova.compute.manager [-] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:31:51 np0005542249 nova_compute[254900]: 2025-12-02 11:31:51.040 254904 DEBUG nova.network.neutron [-] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:31:51 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:51.049 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[f6c74e66-493c-417f-aa47-aadab2f7e981]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527820, 'reachable_time': 42122, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292272, 'error': None, 'target': 'ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:51 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:51.052 164036 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 06:31:51 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:31:51.053 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[1125a392-2f9c-4d9e-af43-efc5eae98fbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:31:51 np0005542249 systemd[1]: run-netns-ovnmeta\x2d4f9f73cb\x2d9730\x2d4829\x2dae15\x2d1f03b97e60f8.mount: Deactivated successfully.
Dec  2 06:31:52 np0005542249 nova_compute[254900]: 2025-12-02 11:31:52.157 254904 DEBUG nova.network.neutron [-] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:31:52 np0005542249 nova_compute[254900]: 2025-12-02 11:31:52.176 254904 INFO nova.compute.manager [-] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Took 1.14 seconds to deallocate network for instance.#033[00m
Dec  2 06:31:52 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1667: 321 pgs: 321 active+clean; 370 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 612 KiB/s rd, 5.9 MiB/s wr, 84 op/s
Dec  2 06:31:52 np0005542249 nova_compute[254900]: 2025-12-02 11:31:52.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:31:52 np0005542249 nova_compute[254900]: 2025-12-02 11:31:52.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:31:52 np0005542249 nova_compute[254900]: 2025-12-02 11:31:52.389 254904 INFO nova.compute.manager [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Took 0.21 seconds to detach 1 volumes for instance.#033[00m
Dec  2 06:31:52 np0005542249 nova_compute[254900]: 2025-12-02 11:31:52.404 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:31:52 np0005542249 nova_compute[254900]: 2025-12-02 11:31:52.444 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:31:52 np0005542249 nova_compute[254900]: 2025-12-02 11:31:52.444 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:31:52 np0005542249 nova_compute[254900]: 2025-12-02 11:31:52.445 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:31:52 np0005542249 nova_compute[254900]: 2025-12-02 11:31:52.445 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:31:52 np0005542249 nova_compute[254900]: 2025-12-02 11:31:52.445 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:31:52 np0005542249 nova_compute[254900]: 2025-12-02 11:31:52.476 254904 DEBUG oslo_concurrency.lockutils [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:31:52 np0005542249 nova_compute[254900]: 2025-12-02 11:31:52.477 254904 DEBUG oslo_concurrency.lockutils [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:31:52 np0005542249 nova_compute[254900]: 2025-12-02 11:31:52.525 254904 DEBUG nova.scheduler.client.report [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Refreshing inventories for resource provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  2 06:31:52 np0005542249 nova_compute[254900]: 2025-12-02 11:31:52.549 254904 DEBUG nova.scheduler.client.report [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Updating ProviderTree inventory for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  2 06:31:52 np0005542249 nova_compute[254900]: 2025-12-02 11:31:52.549 254904 DEBUG nova.compute.provider_tree [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Updating inventory in ProviderTree for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  2 06:31:52 np0005542249 nova_compute[254900]: 2025-12-02 11:31:52.562 254904 DEBUG nova.scheduler.client.report [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Refreshing aggregate associations for resource provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  2 06:31:52 np0005542249 nova_compute[254900]: 2025-12-02 11:31:52.594 254904 DEBUG nova.scheduler.client.report [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Refreshing trait associations for resource provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062, traits: HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SVM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_MMX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,HW_CPU_X86_CLMUL,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE41,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  2 06:31:52 np0005542249 nova_compute[254900]: 2025-12-02 11:31:52.701 254904 DEBUG oslo_concurrency.processutils [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:31:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:31:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:31:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1577315086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:31:52 np0005542249 nova_compute[254900]: 2025-12-02 11:31:52.931 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.043 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.043 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.049 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.049 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.072 254904 DEBUG nova.compute.manager [req-b4c69e7c-cd8e-4ded-b3ef-7733beccba06 req-3ffc8abc-8269-45cd-9044-572a46c9c9e6 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Received event network-vif-plugged-9e69b326-3b7f-4fee-8050-9ff725b75719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.073 254904 DEBUG oslo_concurrency.lockutils [req-b4c69e7c-cd8e-4ded-b3ef-7733beccba06 req-3ffc8abc-8269-45cd-9044-572a46c9c9e6 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "8c463218-c639-4256-b580-f16aaa113a7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.073 254904 DEBUG oslo_concurrency.lockutils [req-b4c69e7c-cd8e-4ded-b3ef-7733beccba06 req-3ffc8abc-8269-45cd-9044-572a46c9c9e6 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "8c463218-c639-4256-b580-f16aaa113a7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.074 254904 DEBUG oslo_concurrency.lockutils [req-b4c69e7c-cd8e-4ded-b3ef-7733beccba06 req-3ffc8abc-8269-45cd-9044-572a46c9c9e6 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "8c463218-c639-4256-b580-f16aaa113a7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.074 254904 DEBUG nova.compute.manager [req-b4c69e7c-cd8e-4ded-b3ef-7733beccba06 req-3ffc8abc-8269-45cd-9044-572a46c9c9e6 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] No waiting events found dispatching network-vif-plugged-9e69b326-3b7f-4fee-8050-9ff725b75719 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.074 254904 WARNING nova.compute.manager [req-b4c69e7c-cd8e-4ded-b3ef-7733beccba06 req-3ffc8abc-8269-45cd-9044-572a46c9c9e6 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Received unexpected event network-vif-plugged-9e69b326-3b7f-4fee-8050-9ff725b75719 for instance with vm_state deleted and task_state None.#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.074 254904 DEBUG nova.compute.manager [req-b4c69e7c-cd8e-4ded-b3ef-7733beccba06 req-3ffc8abc-8269-45cd-9044-572a46c9c9e6 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Received event network-vif-deleted-9e69b326-3b7f-4fee-8050-9ff725b75719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:31:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:31:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3825694847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.249 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.251 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3943MB free_disk=59.98784637451172GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.251 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.254 254904 DEBUG oslo_concurrency.processutils [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.258 254904 DEBUG nova.compute.provider_tree [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.288 254904 DEBUG nova.scheduler.client.report [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.315 254904 DEBUG oslo_concurrency.lockutils [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.319 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.361 254904 INFO nova.scheduler.client.report [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Deleted allocations for instance 8c463218-c639-4256-b580-f16aaa113a7f#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.401 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Instance 64fbd54d-f574-44e6-a788-53938d2219e8 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.401 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Instance 300cc277-5780-4174-88ed-a942194a10b9 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.401 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.401 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.436 254904 DEBUG oslo_concurrency.lockutils [None req-6e1bb9b8-011e-41ea-9ab3-8a4af7fe628d 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "8c463218-c639-4256-b580-f16aaa113a7f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.957s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.466 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:31:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:31:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3908697627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.951 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.960 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:31:53 np0005542249 nova_compute[254900]: 2025-12-02 11:31:53.983 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:31:54 np0005542249 nova_compute[254900]: 2025-12-02 11:31:54.012 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 06:31:54 np0005542249 nova_compute[254900]: 2025-12-02 11:31:54.012 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:31:54 np0005542249 nova_compute[254900]: 2025-12-02 11:31:54.266 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:54 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1668: 321 pgs: 321 active+clean; 373 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 592 KiB/s rd, 5.9 MiB/s wr, 90 op/s
Dec  2 06:31:54 np0005542249 nova_compute[254900]: 2025-12-02 11:31:54.991 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:31:55 np0005542249 nova_compute[254900]: 2025-12-02 11:31:55.753 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:31:56 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1669: 321 pgs: 321 active+clean; 373 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec  2 06:31:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:31:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:31:56 np0005542249 nova_compute[254900]: 2025-12-02 11:31:56.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:31:56 np0005542249 nova_compute[254900]: 2025-12-02 11:31:56.383 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:31:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:31:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:31:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:31:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:57.162471) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675117162531, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 366, "num_deletes": 251, "total_data_size": 190906, "memory_usage": 197960, "flush_reason": "Manual Compaction"}
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675117176669, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 189039, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33833, "largest_seqno": 34198, "table_properties": {"data_size": 186813, "index_size": 390, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5646, "raw_average_key_size": 18, "raw_value_size": 182369, "raw_average_value_size": 601, "num_data_blocks": 18, "num_entries": 303, "num_filter_entries": 303, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764675103, "oldest_key_time": 1764675103, "file_creation_time": 1764675117, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 14308 microseconds, and 2080 cpu microseconds.
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:57.176769) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 189039 bytes OK
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:57.176815) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:57.184085) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:57.184159) EVENT_LOG_v1 {"time_micros": 1764675117184143, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:57.184199) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 188477, prev total WAL file size 216122, number of live WAL files 2.
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:57.185083) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(184KB)], [68(10098KB)]
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675117185205, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10529397, "oldest_snapshot_seqno": -1}
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6322 keys, 8817647 bytes, temperature: kUnknown
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675117259979, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8817647, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8772619, "index_size": 28150, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15813, "raw_key_size": 159430, "raw_average_key_size": 25, "raw_value_size": 8656239, "raw_average_value_size": 1369, "num_data_blocks": 1126, "num_entries": 6322, "num_filter_entries": 6322, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672515, "oldest_key_time": 0, "file_creation_time": 1764675117, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:57.260464) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8817647 bytes
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:57.262441) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.5 rd, 117.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.9 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(102.3) write-amplify(46.6) OK, records in: 6832, records dropped: 510 output_compression: NoCompression
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:57.262476) EVENT_LOG_v1 {"time_micros": 1764675117262460, "job": 38, "event": "compaction_finished", "compaction_time_micros": 74950, "compaction_time_cpu_micros": 47007, "output_level": 6, "num_output_files": 1, "total_output_size": 8817647, "num_input_records": 6832, "num_output_records": 6322, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675117262744, "job": 38, "event": "table_file_deletion", "file_number": 70}
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675117266300, "job": 38, "event": "table_file_deletion", "file_number": 68}
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:57.184830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:57.266519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:57.266530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:57.266534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:57.266539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:31:57.266544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:31:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:31:58 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1670: 321 pgs: 321 active+clean; 373 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec  2 06:31:59 np0005542249 podman[292340]: 2025-12-02 11:31:59.033325257 +0000 UTC m=+0.099002672 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  2 06:31:59 np0005542249 podman[292341]: 2025-12-02 11:31:59.089535049 +0000 UTC m=+0.155035589 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  2 06:31:59 np0005542249 nova_compute[254900]: 2025-12-02 11:31:59.313 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:00 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1671: 321 pgs: 321 active+clean; 373 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 62 KiB/s wr, 19 op/s
Dec  2 06:32:00 np0005542249 nova_compute[254900]: 2025-12-02 11:32:00.756 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:01 np0005542249 nova_compute[254900]: 2025-12-02 11:32:01.027 254904 DEBUG oslo_concurrency.lockutils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "c5811f91-4c9d-4c71-81dd-47e6a637fc29" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:32:01 np0005542249 nova_compute[254900]: 2025-12-02 11:32:01.028 254904 DEBUG oslo_concurrency.lockutils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "c5811f91-4c9d-4c71-81dd-47e6a637fc29" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:32:01 np0005542249 nova_compute[254900]: 2025-12-02 11:32:01.054 254904 DEBUG nova.compute.manager [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 06:32:01 np0005542249 nova_compute[254900]: 2025-12-02 11:32:01.181 254904 DEBUG oslo_concurrency.lockutils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:32:01 np0005542249 nova_compute[254900]: 2025-12-02 11:32:01.182 254904 DEBUG oslo_concurrency.lockutils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:32:01 np0005542249 nova_compute[254900]: 2025-12-02 11:32:01.194 254904 DEBUG nova.virt.hardware [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 06:32:01 np0005542249 nova_compute[254900]: 2025-12-02 11:32:01.195 254904 INFO nova.compute.claims [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 06:32:01 np0005542249 nova_compute[254900]: 2025-12-02 11:32:01.353 254904 DEBUG oslo_concurrency.processutils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:32:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:32:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2299603833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:32:01 np0005542249 nova_compute[254900]: 2025-12-02 11:32:01.869 254904 DEBUG oslo_concurrency.processutils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:32:01 np0005542249 nova_compute[254900]: 2025-12-02 11:32:01.876 254904 DEBUG nova.compute.provider_tree [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:32:01 np0005542249 nova_compute[254900]: 2025-12-02 11:32:01.891 254904 DEBUG nova.scheduler.client.report [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:32:01 np0005542249 nova_compute[254900]: 2025-12-02 11:32:01.911 254904 DEBUG oslo_concurrency.lockutils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:32:01 np0005542249 nova_compute[254900]: 2025-12-02 11:32:01.912 254904 DEBUG nova.compute.manager [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 06:32:01 np0005542249 nova_compute[254900]: 2025-12-02 11:32:01.956 254904 DEBUG nova.compute.manager [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 06:32:01 np0005542249 nova_compute[254900]: 2025-12-02 11:32:01.957 254904 DEBUG nova.network.neutron [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 06:32:01 np0005542249 nova_compute[254900]: 2025-12-02 11:32:01.977 254904 INFO nova.virt.libvirt.driver [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 06:32:02 np0005542249 nova_compute[254900]: 2025-12-02 11:32:02.015 254904 DEBUG nova.compute.manager [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 06:32:02 np0005542249 nova_compute[254900]: 2025-12-02 11:32:02.055 254904 INFO nova.virt.block_device [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Booting with volume aca02ad2-47f5-4d77-9df0-2c95a1cb88a2 at /dev/vda#033[00m
Dec  2 06:32:02 np0005542249 nova_compute[254900]: 2025-12-02 11:32:02.183 254904 DEBUG os_brick.utils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:32:02 np0005542249 nova_compute[254900]: 2025-12-02 11:32:02.185 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:32:02 np0005542249 nova_compute[254900]: 2025-12-02 11:32:02.206 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:32:02 np0005542249 nova_compute[254900]: 2025-12-02 11:32:02.206 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[52ed7514-68a9-4104-a4ad-01bbeab18c88]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:32:02 np0005542249 nova_compute[254900]: 2025-12-02 11:32:02.210 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:32:02 np0005542249 nova_compute[254900]: 2025-12-02 11:32:02.228 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:32:02 np0005542249 nova_compute[254900]: 2025-12-02 11:32:02.228 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[31c49737-425e-4ff5-b069-844950185928]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:32:02 np0005542249 nova_compute[254900]: 2025-12-02 11:32:02.231 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:32:02 np0005542249 nova_compute[254900]: 2025-12-02 11:32:02.247 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:32:02 np0005542249 nova_compute[254900]: 2025-12-02 11:32:02.248 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[e6609eb7-98a0-426c-9d39-0a23b28da418]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:32:02 np0005542249 nova_compute[254900]: 2025-12-02 11:32:02.251 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[71a10907-29c3-4186-822a-79299b9fd26f]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:32:02 np0005542249 nova_compute[254900]: 2025-12-02 11:32:02.251 254904 DEBUG oslo_concurrency.processutils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:32:02 np0005542249 nova_compute[254900]: 2025-12-02 11:32:02.291 254904 DEBUG oslo_concurrency.processutils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CMD "nvme version" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:32:02 np0005542249 nova_compute[254900]: 2025-12-02 11:32:02.295 254904 DEBUG os_brick.initiator.connectors.lightos [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:32:02 np0005542249 nova_compute[254900]: 2025-12-02 11:32:02.296 254904 DEBUG os_brick.initiator.connectors.lightos [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:32:02 np0005542249 nova_compute[254900]: 2025-12-02 11:32:02.296 254904 DEBUG os_brick.initiator.connectors.lightos [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:32:02 np0005542249 nova_compute[254900]: 2025-12-02 11:32:02.297 254904 DEBUG os_brick.utils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] <== get_connector_properties: return (112ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:32:02 np0005542249 nova_compute[254900]: 2025-12-02 11:32:02.298 254904 DEBUG nova.virt.block_device [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Updating existing volume attachment record: bee461ba-3123-4e3f-b977-5539529f66ad _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:32:02 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1672: 321 pgs: 321 active+clean; 373 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 169 KiB/s rd, 62 KiB/s wr, 23 op/s
Dec  2 06:32:02 np0005542249 nova_compute[254900]: 2025-12-02 11:32:02.742 254904 DEBUG nova.policy [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1caa62e7ee8b42be98bc34780a7197f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a893d0c223f746328e706d7491d73b20', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 06:32:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:32:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:32:02 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2451947970' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.275 254904 DEBUG nova.compute.manager [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.278 254904 DEBUG nova.virt.libvirt.driver [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.279 254904 INFO nova.virt.libvirt.driver [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Creating image(s)#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.280 254904 DEBUG nova.virt.libvirt.driver [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.281 254904 DEBUG nova.virt.libvirt.driver [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Ensure instance console log exists: /var/lib/nova/instances/c5811f91-4c9d-4c71-81dd-47e6a637fc29/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.281 254904 DEBUG oslo_concurrency.lockutils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.282 254904 DEBUG oslo_concurrency.lockutils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.282 254904 DEBUG oslo_concurrency.lockutils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.333 254904 DEBUG nova.compute.manager [req-45ba0d51-7f1f-4cda-b503-dbd6ee71dd93 req-122ba4eb-ccc1-4dd3-bb4f-48b26e09ee37 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Received event network-changed-8ac6e5bf-779d-409f-90e8-7ee2dbf72367 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.334 254904 DEBUG nova.compute.manager [req-45ba0d51-7f1f-4cda-b503-dbd6ee71dd93 req-122ba4eb-ccc1-4dd3-bb4f-48b26e09ee37 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Refreshing instance network info cache due to event network-changed-8ac6e5bf-779d-409f-90e8-7ee2dbf72367. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.334 254904 DEBUG oslo_concurrency.lockutils [req-45ba0d51-7f1f-4cda-b503-dbd6ee71dd93 req-122ba4eb-ccc1-4dd3-bb4f-48b26e09ee37 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-300cc277-5780-4174-88ed-a942194a10b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.334 254904 DEBUG oslo_concurrency.lockutils [req-45ba0d51-7f1f-4cda-b503-dbd6ee71dd93 req-122ba4eb-ccc1-4dd3-bb4f-48b26e09ee37 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-300cc277-5780-4174-88ed-a942194a10b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.335 254904 DEBUG nova.network.neutron [req-45ba0d51-7f1f-4cda-b503-dbd6ee71dd93 req-122ba4eb-ccc1-4dd3-bb4f-48b26e09ee37 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Refreshing network info cache for port 8ac6e5bf-779d-409f-90e8-7ee2dbf72367 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.487 254904 DEBUG oslo_concurrency.lockutils [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "300cc277-5780-4174-88ed-a942194a10b9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.487 254904 DEBUG oslo_concurrency.lockutils [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "300cc277-5780-4174-88ed-a942194a10b9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.488 254904 DEBUG oslo_concurrency.lockutils [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "300cc277-5780-4174-88ed-a942194a10b9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.488 254904 DEBUG oslo_concurrency.lockutils [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "300cc277-5780-4174-88ed-a942194a10b9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.489 254904 DEBUG oslo_concurrency.lockutils [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "300cc277-5780-4174-88ed-a942194a10b9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.491 254904 INFO nova.compute.manager [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Terminating instance#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.493 254904 DEBUG nova.compute.manager [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:32:03 np0005542249 kernel: tap8ac6e5bf-77 (unregistering): left promiscuous mode
Dec  2 06:32:03 np0005542249 NetworkManager[48987]: <info>  [1764675123.5596] device (tap8ac6e5bf-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:32:03 np0005542249 ovn_controller[153849]: 2025-12-02T11:32:03Z|00244|binding|INFO|Releasing lport 8ac6e5bf-779d-409f-90e8-7ee2dbf72367 from this chassis (sb_readonly=0)
Dec  2 06:32:03 np0005542249 ovn_controller[153849]: 2025-12-02T11:32:03Z|00245|binding|INFO|Setting lport 8ac6e5bf-779d-409f-90e8-7ee2dbf72367 down in Southbound
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.563 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:03 np0005542249 ovn_controller[153849]: 2025-12-02T11:32:03Z|00246|binding|INFO|Removing iface tap8ac6e5bf-77 ovn-installed in OVS
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.567 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:03 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:03.575 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:86:70 10.100.0.9'], port_security=['fa:16:3e:28:86:70 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '300cc277-5780-4174-88ed-a942194a10b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '625a6939c31646a4a83ea851774cf28c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bf93629e-6336-4a9c-a41d-6ce19e6b6662', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cc823ef0-3e69-4062-a488-a82f483f10bb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=8ac6e5bf-779d-409f-90e8-7ee2dbf72367) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:32:03 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:03.576 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 8ac6e5bf-779d-409f-90e8-7ee2dbf72367 in datapath acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 unbound from our chassis#033[00m
Dec  2 06:32:03 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:03.578 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.588 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:03 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:03.605 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[e496d609-6485-4469-b45c-54dd9d20c24c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:32:03 np0005542249 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000019.scope: Deactivated successfully.
Dec  2 06:32:03 np0005542249 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000019.scope: Consumed 15.471s CPU time.
Dec  2 06:32:03 np0005542249 systemd-machined[216222]: Machine qemu-25-instance-00000019 terminated.
Dec  2 06:32:03 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:03.650 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[b37dc0f8-a64e-4018-8ebb-7fd5555df767]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:32:03 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:03.655 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[d993498d-12ea-42a7-8a93-f64dad2e7396]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.668 254904 DEBUG nova.network.neutron [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Successfully created port: 4df50564-1d9f-49d0-a86a-a8166845d7bd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 06:32:03 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:03.696 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[70c369ba-3d52-4a95-a7a6-c59fadc58b35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.719 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:03 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:03.727 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[5f876f3f-4fd8-40e5-b997-83c75fcc89e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacfaa8ac-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:73:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522321, 'reachable_time': 34711, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292429, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.732 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.742 254904 INFO nova.virt.libvirt.driver [-] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Instance destroyed successfully.#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.742 254904 DEBUG nova.objects.instance [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lazy-loading 'resources' on Instance uuid 300cc277-5780-4174-88ed-a942194a10b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:32:03 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:03.755 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[dc2470f3-b777-4a4c-a4f8-f12776b6c1df]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapacfaa8ac-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522335, 'tstamp': 522335}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292436, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapacfaa8ac-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522340, 'tstamp': 522340}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292436, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:32:03 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:03.757 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacfaa8ac-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.759 254904 DEBUG nova.virt.libvirt.vif [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:31:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-444799541',display_name='tempest-TestVolumeBootPattern-server-444799541',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-444799541',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDl9Chitcp+6ZZ9O/so1iQpbrg+ZOVWOrATMsWTbgaWcZg2lFiQK4KEUyaqp5+G/z2wPorJssN622GdMYPRLScxIeivbRrFeE5q310MfETTcDT4f8HB9OmcWcicW5ZF4QA==',key_name='tempest-TestVolumeBootPattern-765108891',keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:31:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='625a6939c31646a4a83ea851774cf28c',ramdisk_id='',reservation_id='r-ex0tqxlf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1396850361',owner_user_name='tempest-TestVolumeBootPattern-1396850361-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:31:22Z,user_data=None,user_id='6ccb73a613554d938221b4bf46d7ae83',uuid=300cc277-5780-4174-88ed-a942194a10b9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8ac6e5bf-779d-409f-90e8-7ee2dbf72367", "address": "fa:16:3e:28:86:70", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ac6e5bf-77", "ovs_interfaceid": "8ac6e5bf-779d-409f-90e8-7ee2dbf72367", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.760 254904 DEBUG nova.network.os_vif_util [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converting VIF {"id": "8ac6e5bf-779d-409f-90e8-7ee2dbf72367", "address": "fa:16:3e:28:86:70", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ac6e5bf-77", "ovs_interfaceid": "8ac6e5bf-779d-409f-90e8-7ee2dbf72367", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.761 254904 DEBUG nova.network.os_vif_util [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:28:86:70,bridge_name='br-int',has_traffic_filtering=True,id=8ac6e5bf-779d-409f-90e8-7ee2dbf72367,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ac6e5bf-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.762 254904 DEBUG os_vif [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:86:70,bridge_name='br-int',has_traffic_filtering=True,id=8ac6e5bf-779d-409f-90e8-7ee2dbf72367,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ac6e5bf-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.764 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:03 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:03.765 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapacfaa8ac-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:32:03 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:03.765 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.765 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ac6e5bf-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:32:03 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:03.766 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapacfaa8ac-00, col_values=(('external_ids', {'iface-id': '1636ad30-406d-4138-823e-abbe7f4d87ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:32:03 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:03.766 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.767 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.769 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:03 np0005542249 nova_compute[254900]: 2025-12-02 11:32:03.773 254904 INFO os_vif [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:86:70,bridge_name='br-int',has_traffic_filtering=True,id=8ac6e5bf-779d-409f-90e8-7ee2dbf72367,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ac6e5bf-77')#033[00m
Dec  2 06:32:04 np0005542249 nova_compute[254900]: 2025-12-02 11:32:04.105 254904 INFO nova.virt.libvirt.driver [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Deleting instance files /var/lib/nova/instances/300cc277-5780-4174-88ed-a942194a10b9_del#033[00m
Dec  2 06:32:04 np0005542249 nova_compute[254900]: 2025-12-02 11:32:04.106 254904 INFO nova.virt.libvirt.driver [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Deletion of /var/lib/nova/instances/300cc277-5780-4174-88ed-a942194a10b9_del complete#033[00m
Dec  2 06:32:04 np0005542249 nova_compute[254900]: 2025-12-02 11:32:04.176 254904 INFO nova.compute.manager [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:32:04 np0005542249 nova_compute[254900]: 2025-12-02 11:32:04.177 254904 DEBUG oslo.service.loopingcall [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:32:04 np0005542249 nova_compute[254900]: 2025-12-02 11:32:04.178 254904 DEBUG nova.compute.manager [-] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:32:04 np0005542249 nova_compute[254900]: 2025-12-02 11:32:04.178 254904 DEBUG nova.network.neutron [-] [instance: 300cc277-5780-4174-88ed-a942194a10b9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:32:04 np0005542249 nova_compute[254900]: 2025-12-02 11:32:04.188 254904 DEBUG nova.compute.manager [req-9e759045-d5f2-451b-810d-bef459f2f9ff req-3f898f07-d8ec-491c-957c-bff57cdec33c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Received event network-vif-unplugged-8ac6e5bf-779d-409f-90e8-7ee2dbf72367 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:32:04 np0005542249 nova_compute[254900]: 2025-12-02 11:32:04.190 254904 DEBUG oslo_concurrency.lockutils [req-9e759045-d5f2-451b-810d-bef459f2f9ff req-3f898f07-d8ec-491c-957c-bff57cdec33c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "300cc277-5780-4174-88ed-a942194a10b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:32:04 np0005542249 nova_compute[254900]: 2025-12-02 11:32:04.191 254904 DEBUG oslo_concurrency.lockutils [req-9e759045-d5f2-451b-810d-bef459f2f9ff req-3f898f07-d8ec-491c-957c-bff57cdec33c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "300cc277-5780-4174-88ed-a942194a10b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:32:04 np0005542249 nova_compute[254900]: 2025-12-02 11:32:04.191 254904 DEBUG oslo_concurrency.lockutils [req-9e759045-d5f2-451b-810d-bef459f2f9ff req-3f898f07-d8ec-491c-957c-bff57cdec33c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "300cc277-5780-4174-88ed-a942194a10b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:32:04 np0005542249 nova_compute[254900]: 2025-12-02 11:32:04.192 254904 DEBUG nova.compute.manager [req-9e759045-d5f2-451b-810d-bef459f2f9ff req-3f898f07-d8ec-491c-957c-bff57cdec33c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] No waiting events found dispatching network-vif-unplugged-8ac6e5bf-779d-409f-90e8-7ee2dbf72367 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:32:04 np0005542249 nova_compute[254900]: 2025-12-02 11:32:04.192 254904 DEBUG nova.compute.manager [req-9e759045-d5f2-451b-810d-bef459f2f9ff req-3f898f07-d8ec-491c-957c-bff57cdec33c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Received event network-vif-unplugged-8ac6e5bf-779d-409f-90e8-7ee2dbf72367 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 06:32:04 np0005542249 nova_compute[254900]: 2025-12-02 11:32:04.316 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:04 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1673: 321 pgs: 321 active+clean; 373 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 283 KiB/s rd, 61 KiB/s wr, 62 op/s
Dec  2 06:32:04 np0005542249 nova_compute[254900]: 2025-12-02 11:32:04.389 254904 DEBUG nova.network.neutron [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Successfully updated port: 4df50564-1d9f-49d0-a86a-a8166845d7bd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 06:32:04 np0005542249 nova_compute[254900]: 2025-12-02 11:32:04.407 254904 DEBUG oslo_concurrency.lockutils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "refresh_cache-c5811f91-4c9d-4c71-81dd-47e6a637fc29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:32:04 np0005542249 nova_compute[254900]: 2025-12-02 11:32:04.408 254904 DEBUG oslo_concurrency.lockutils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquired lock "refresh_cache-c5811f91-4c9d-4c71-81dd-47e6a637fc29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:32:04 np0005542249 nova_compute[254900]: 2025-12-02 11:32:04.408 254904 DEBUG nova.network.neutron [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 06:32:04 np0005542249 nova_compute[254900]: 2025-12-02 11:32:04.603 254904 DEBUG nova.network.neutron [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 06:32:04 np0005542249 nova_compute[254900]: 2025-12-02 11:32:04.776 254904 DEBUG nova.network.neutron [-] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:32:04 np0005542249 nova_compute[254900]: 2025-12-02 11:32:04.794 254904 INFO nova.compute.manager [-] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Took 0.62 seconds to deallocate network for instance.#033[00m
Dec  2 06:32:04 np0005542249 nova_compute[254900]: 2025-12-02 11:32:04.810 254904 DEBUG nova.network.neutron [req-45ba0d51-7f1f-4cda-b503-dbd6ee71dd93 req-122ba4eb-ccc1-4dd3-bb4f-48b26e09ee37 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Updated VIF entry in instance network info cache for port 8ac6e5bf-779d-409f-90e8-7ee2dbf72367. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:32:04 np0005542249 nova_compute[254900]: 2025-12-02 11:32:04.811 254904 DEBUG nova.network.neutron [req-45ba0d51-7f1f-4cda-b503-dbd6ee71dd93 req-122ba4eb-ccc1-4dd3-bb4f-48b26e09ee37 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Updating instance_info_cache with network_info: [{"id": "8ac6e5bf-779d-409f-90e8-7ee2dbf72367", "address": "fa:16:3e:28:86:70", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ac6e5bf-77", "ovs_interfaceid": "8ac6e5bf-779d-409f-90e8-7ee2dbf72367", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:32:04 np0005542249 nova_compute[254900]: 2025-12-02 11:32:04.826 254904 DEBUG oslo_concurrency.lockutils [req-45ba0d51-7f1f-4cda-b503-dbd6ee71dd93 req-122ba4eb-ccc1-4dd3-bb4f-48b26e09ee37 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-300cc277-5780-4174-88ed-a942194a10b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:32:04 np0005542249 nova_compute[254900]: 2025-12-02 11:32:04.951 254904 INFO nova.compute.manager [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Took 0.16 seconds to detach 1 volumes for instance.#033[00m
Dec  2 06:32:04 np0005542249 nova_compute[254900]: 2025-12-02 11:32:04.993 254904 DEBUG oslo_concurrency.lockutils [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:32:04 np0005542249 nova_compute[254900]: 2025-12-02 11:32:04.993 254904 DEBUG oslo_concurrency.lockutils [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.086 254904 DEBUG oslo_concurrency.processutils [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.447 254904 DEBUG nova.compute.manager [req-4810af5a-1684-44e2-abfb-9d1656e22057 req-446d6f66-75df-4245-b309-89a91108a41a 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Received event network-changed-4df50564-1d9f-49d0-a86a-a8166845d7bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.447 254904 DEBUG nova.compute.manager [req-4810af5a-1684-44e2-abfb-9d1656e22057 req-446d6f66-75df-4245-b309-89a91108a41a 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Refreshing instance network info cache due to event network-changed-4df50564-1d9f-49d0-a86a-a8166845d7bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.448 254904 DEBUG oslo_concurrency.lockutils [req-4810af5a-1684-44e2-abfb-9d1656e22057 req-446d6f66-75df-4245-b309-89a91108a41a 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-c5811f91-4c9d-4c71-81dd-47e6a637fc29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:32:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:32:05 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2483035480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.667 254904 DEBUG oslo_concurrency.processutils [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.679 254904 DEBUG nova.compute.provider_tree [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.701 254904 DEBUG nova.scheduler.client.report [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.723 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764675110.7215762, 8c463218-c639-4256-b580-f16aaa113a7f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.724 254904 INFO nova.compute.manager [-] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] VM Stopped (Lifecycle Event)#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.726 254904 DEBUG oslo_concurrency.lockutils [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.748 254904 INFO nova.scheduler.client.report [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Deleted allocations for instance 300cc277-5780-4174-88ed-a942194a10b9#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.751 254904 DEBUG nova.compute.manager [None req-6264b89e-1ad6-4d1d-b99a-7ecb4554d4e2 - - - - - -] [instance: 8c463218-c639-4256-b580-f16aaa113a7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.810 254904 DEBUG oslo_concurrency.lockutils [None req-43ae3a98-7780-465b-8177-fbbaaae4ac36 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "300cc277-5780-4174-88ed-a942194a10b9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.947 254904 DEBUG nova.network.neutron [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Updating instance_info_cache with network_info: [{"id": "4df50564-1d9f-49d0-a86a-a8166845d7bd", "address": "fa:16:3e:07:41:77", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4df50564-1d", "ovs_interfaceid": "4df50564-1d9f-49d0-a86a-a8166845d7bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.962 254904 DEBUG oslo_concurrency.lockutils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Releasing lock "refresh_cache-c5811f91-4c9d-4c71-81dd-47e6a637fc29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.963 254904 DEBUG nova.compute.manager [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Instance network_info: |[{"id": "4df50564-1d9f-49d0-a86a-a8166845d7bd", "address": "fa:16:3e:07:41:77", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4df50564-1d", "ovs_interfaceid": "4df50564-1d9f-49d0-a86a-a8166845d7bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.963 254904 DEBUG oslo_concurrency.lockutils [req-4810af5a-1684-44e2-abfb-9d1656e22057 req-446d6f66-75df-4245-b309-89a91108a41a 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-c5811f91-4c9d-4c71-81dd-47e6a637fc29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.963 254904 DEBUG nova.network.neutron [req-4810af5a-1684-44e2-abfb-9d1656e22057 req-446d6f66-75df-4245-b309-89a91108a41a 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Refreshing network info cache for port 4df50564-1d9f-49d0-a86a-a8166845d7bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.966 254904 DEBUG nova.virt.libvirt.driver [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Start _get_guest_xml network_info=[{"id": "4df50564-1d9f-49d0-a86a-a8166845d7bd", "address": "fa:16:3e:07:41:77", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4df50564-1d", "ovs_interfaceid": "4df50564-1d9f-49d0-a86a-a8166845d7bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-aca02ad2-47f5-4d77-9df0-2c95a1cb88a2', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'aca02ad2-47f5-4d77-9df0-2c95a1cb88a2', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'c5811f91-4c9d-4c71-81dd-47e6a637fc29', 'attached_at': '', 'detached_at': '', 'volume_id': 'aca02ad2-47f5-4d77-9df0-2c95a1cb88a2', 'serial': 'aca02ad2-47f5-4d77-9df0-2c95a1cb88a2'}, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'attachment_id': 'bee461ba-3123-4e3f-b977-5539529f66ad', 'delete_on_termination': False, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.973 254904 WARNING nova.virt.libvirt.driver [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.980 254904 DEBUG nova.virt.libvirt.host [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.980 254904 DEBUG nova.virt.libvirt.host [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.987 254904 DEBUG nova.virt.libvirt.host [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.987 254904 DEBUG nova.virt.libvirt.host [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.987 254904 DEBUG nova.virt.libvirt.driver [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.988 254904 DEBUG nova.virt.hardware [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.988 254904 DEBUG nova.virt.hardware [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.988 254904 DEBUG nova.virt.hardware [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.988 254904 DEBUG nova.virt.hardware [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.988 254904 DEBUG nova.virt.hardware [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.988 254904 DEBUG nova.virt.hardware [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.989 254904 DEBUG nova.virt.hardware [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.989 254904 DEBUG nova.virt.hardware [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.989 254904 DEBUG nova.virt.hardware [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.989 254904 DEBUG nova.virt.hardware [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:32:05 np0005542249 nova_compute[254900]: 2025-12-02 11:32:05.989 254904 DEBUG nova.virt.hardware [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.019 254904 DEBUG nova.storage.rbd_utils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] rbd image c5811f91-4c9d-4c71-81dd-47e6a637fc29_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.026 254904 DEBUG oslo_concurrency.processutils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.269 254904 DEBUG nova.compute.manager [req-d724be0f-1a4f-4f00-80bd-9d603c6381ff req-03492051-46f1-4e25-abc0-d18ddba9b31c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Received event network-vif-plugged-8ac6e5bf-779d-409f-90e8-7ee2dbf72367 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.270 254904 DEBUG oslo_concurrency.lockutils [req-d724be0f-1a4f-4f00-80bd-9d603c6381ff req-03492051-46f1-4e25-abc0-d18ddba9b31c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "300cc277-5780-4174-88ed-a942194a10b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.270 254904 DEBUG oslo_concurrency.lockutils [req-d724be0f-1a4f-4f00-80bd-9d603c6381ff req-03492051-46f1-4e25-abc0-d18ddba9b31c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "300cc277-5780-4174-88ed-a942194a10b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.271 254904 DEBUG oslo_concurrency.lockutils [req-d724be0f-1a4f-4f00-80bd-9d603c6381ff req-03492051-46f1-4e25-abc0-d18ddba9b31c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "300cc277-5780-4174-88ed-a942194a10b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.271 254904 DEBUG nova.compute.manager [req-d724be0f-1a4f-4f00-80bd-9d603c6381ff req-03492051-46f1-4e25-abc0-d18ddba9b31c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] No waiting events found dispatching network-vif-plugged-8ac6e5bf-779d-409f-90e8-7ee2dbf72367 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.271 254904 WARNING nova.compute.manager [req-d724be0f-1a4f-4f00-80bd-9d603c6381ff req-03492051-46f1-4e25-abc0-d18ddba9b31c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Received unexpected event network-vif-plugged-8ac6e5bf-779d-409f-90e8-7ee2dbf72367 for instance with vm_state deleted and task_state None.#033[00m
Dec  2 06:32:06 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1674: 321 pgs: 321 active+clean; 373 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 236 KiB/s rd, 0 B/s wr, 48 op/s
Dec  2 06:32:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:32:06 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/858828628' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.504 254904 DEBUG oslo_concurrency.processutils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.701 254904 DEBUG os_brick.encryptors [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Using volume encryption metadata '{'encryption_key_id': '49cd9ff2-94ba-42d1-a202-f78c8419b7da', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-aca02ad2-47f5-4d77-9df0-2c95a1cb88a2', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'aca02ad2-47f5-4d77-9df0-2c95a1cb88a2', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'c5811f91-4c9d-4c71-81dd-47e6a637fc29', 'attached_at': '', 'detached_at': '', 'volume_id': 'aca02ad2-47f5-4d77-9df0-2c95a1cb88a2', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.705 254904 DEBUG barbicanclient.client [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.724 254904 DEBUG barbicanclient.v1.secrets [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/49cd9ff2-94ba-42d1-a202-f78c8419b7da get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.725 254904 INFO barbicanclient.base [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/49cd9ff2-94ba-42d1-a202-f78c8419b7da#033[00m
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.757 254904 DEBUG barbicanclient.client [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.758 254904 INFO barbicanclient.base [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/49cd9ff2-94ba-42d1-a202-f78c8419b7da#033[00m
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.777 254904 DEBUG barbicanclient.client [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.777 254904 INFO barbicanclient.base [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/49cd9ff2-94ba-42d1-a202-f78c8419b7da#033[00m
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.802 254904 DEBUG barbicanclient.client [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.803 254904 INFO barbicanclient.base [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/49cd9ff2-94ba-42d1-a202-f78c8419b7da#033[00m
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.827 254904 DEBUG barbicanclient.client [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.828 254904 INFO barbicanclient.base [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/49cd9ff2-94ba-42d1-a202-f78c8419b7da#033[00m
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.875 254904 DEBUG barbicanclient.client [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.875 254904 INFO barbicanclient.base [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/49cd9ff2-94ba-42d1-a202-f78c8419b7da#033[00m
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.901 254904 DEBUG barbicanclient.client [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.902 254904 INFO barbicanclient.base [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/49cd9ff2-94ba-42d1-a202-f78c8419b7da#033[00m
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.944 254904 DEBUG barbicanclient.client [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:32:06 np0005542249 nova_compute[254900]: 2025-12-02 11:32:06.945 254904 INFO barbicanclient.base [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/49cd9ff2-94ba-42d1-a202-f78c8419b7da#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.000 254904 DEBUG barbicanclient.client [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.000 254904 INFO barbicanclient.base [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/49cd9ff2-94ba-42d1-a202-f78c8419b7da#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.030 254904 DEBUG barbicanclient.client [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.031 254904 INFO barbicanclient.base [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/49cd9ff2-94ba-42d1-a202-f78c8419b7da#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.058 254904 DEBUG barbicanclient.client [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.059 254904 INFO barbicanclient.base [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/49cd9ff2-94ba-42d1-a202-f78c8419b7da#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.083 254904 DEBUG barbicanclient.client [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.084 254904 INFO barbicanclient.base [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/49cd9ff2-94ba-42d1-a202-f78c8419b7da#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.114 254904 DEBUG barbicanclient.client [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.115 254904 INFO barbicanclient.base [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/49cd9ff2-94ba-42d1-a202-f78c8419b7da#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.145 254904 DEBUG barbicanclient.client [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.146 254904 INFO barbicanclient.base [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/49cd9ff2-94ba-42d1-a202-f78c8419b7da#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.168 254904 DEBUG barbicanclient.client [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.168 254904 INFO barbicanclient.base [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Calculated Secrets uuid ref: secrets/49cd9ff2-94ba-42d1-a202-f78c8419b7da#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.197 254904 DEBUG barbicanclient.client [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.198 254904 DEBUG nova.virt.libvirt.host [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Secret XML: <secret ephemeral="no" private="no">
Dec  2 06:32:07 np0005542249 nova_compute[254900]:  <usage type="volume">
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <volume>aca02ad2-47f5-4d77-9df0-2c95a1cb88a2</volume>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:  </usage>
Dec  2 06:32:07 np0005542249 nova_compute[254900]: </secret>
Dec  2 06:32:07 np0005542249 nova_compute[254900]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.240 254904 DEBUG nova.virt.libvirt.vif [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:32:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1199496089',display_name='tempest-TransferEncryptedVolumeTest-server-1199496089',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1199496089',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFgf6Glrr4mOxvEUPVgKAzbMgkblDirrH2khctJvQZz94LiWsbLQ82mzb2Y9Rt3SfEin9Hpb/echGrZ3sf80MCARzFat1FEJ1OH4HmTSACHUm+YJhtZATdTFOe7LkwhRFQ==',key_name='tempest-TransferEncryptedVolumeTest-1310282930',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a893d0c223f746328e706d7491d73b20',ramdisk_id='',reservation_id='r-om6k07uf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1499588457',owner_user_name='tempest-TransferEncryptedVolumeTest-1499588457-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:32:02Z,user_data=None,user_id='1caa62e7ee8b42be98bc34780a7197f9',uuid=c5811f91-4c9d-4c71-81dd-47e6a637fc29,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4df50564-1d9f-49d0-a86a-a8166845d7bd", "address": "fa:16:3e:07:41:77", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4df50564-1d", "ovs_interfaceid": "4df50564-1d9f-49d0-a86a-a8166845d7bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.241 254904 DEBUG nova.network.os_vif_util [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Converting VIF {"id": "4df50564-1d9f-49d0-a86a-a8166845d7bd", "address": "fa:16:3e:07:41:77", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4df50564-1d", "ovs_interfaceid": "4df50564-1d9f-49d0-a86a-a8166845d7bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.242 254904 DEBUG nova.network.os_vif_util [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:41:77,bridge_name='br-int',has_traffic_filtering=True,id=4df50564-1d9f-49d0-a86a-a8166845d7bd,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4df50564-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.245 254904 DEBUG nova.objects.instance [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lazy-loading 'pci_devices' on Instance uuid c5811f91-4c9d-4c71-81dd-47e6a637fc29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.263 254904 DEBUG nova.virt.libvirt.driver [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:32:07 np0005542249 nova_compute[254900]:  <uuid>c5811f91-4c9d-4c71-81dd-47e6a637fc29</uuid>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:  <name>instance-0000001b</name>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-1199496089</nova:name>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:32:05</nova:creationTime>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:32:07 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:        <nova:user uuid="1caa62e7ee8b42be98bc34780a7197f9">tempest-TransferEncryptedVolumeTest-1499588457-project-member</nova:user>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:        <nova:project uuid="a893d0c223f746328e706d7491d73b20">tempest-TransferEncryptedVolumeTest-1499588457</nova:project>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:        <nova:port uuid="4df50564-1d9f-49d0-a86a-a8166845d7bd">
Dec  2 06:32:07 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <entry name="serial">c5811f91-4c9d-4c71-81dd-47e6a637fc29</entry>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <entry name="uuid">c5811f91-4c9d-4c71-81dd-47e6a637fc29</entry>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/c5811f91-4c9d-4c71-81dd-47e6a637fc29_disk.config">
Dec  2 06:32:07 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:32:07 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="volumes/volume-aca02ad2-47f5-4d77-9df0-2c95a1cb88a2">
Dec  2 06:32:07 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:32:07 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <serial>aca02ad2-47f5-4d77-9df0-2c95a1cb88a2</serial>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <encryption format="luks">
Dec  2 06:32:07 np0005542249 nova_compute[254900]:        <secret type="passphrase" uuid="b61d34bd-83f9-4fb4-883e-93ad00a28e78"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      </encryption>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:07:41:77"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <target dev="tap4df50564-1d"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/c5811f91-4c9d-4c71-81dd-47e6a637fc29/console.log" append="off"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:32:07 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:32:07 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:32:07 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:32:07 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.264 254904 DEBUG nova.compute.manager [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Preparing to wait for external event network-vif-plugged-4df50564-1d9f-49d0-a86a-a8166845d7bd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.264 254904 DEBUG oslo_concurrency.lockutils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "c5811f91-4c9d-4c71-81dd-47e6a637fc29-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.264 254904 DEBUG oslo_concurrency.lockutils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "c5811f91-4c9d-4c71-81dd-47e6a637fc29-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.265 254904 DEBUG oslo_concurrency.lockutils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "c5811f91-4c9d-4c71-81dd-47e6a637fc29-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.266 254904 DEBUG nova.virt.libvirt.vif [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:32:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1199496089',display_name='tempest-TransferEncryptedVolumeTest-server-1199496089',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1199496089',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFgf6Glrr4mOxvEUPVgKAzbMgkblDirrH2khctJvQZz94LiWsbLQ82mzb2Y9Rt3SfEin9Hpb/echGrZ3sf80MCARzFat1FEJ1OH4HmTSACHUm+YJhtZATdTFOe7LkwhRFQ==',key_name='tempest-TransferEncryptedVolumeTest-1310282930',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a893d0c223f746328e706d7491d73b20',ramdisk_id='',reservation_id='r-om6k07uf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1499588457',owner_user_name='tempest-TransferEncryptedVolumeTest-1499588457-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:32:02Z,user_data=None,user_id='1caa62e7ee8b42be98bc34780a7197f9',uuid=c5811f91-4c9d-4c71-81dd-47e6a637fc29,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4df50564-1d9f-49d0-a86a-a8166845d7bd", "address": "fa:16:3e:07:41:77", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4df50564-1d", "ovs_interfaceid": "4df50564-1d9f-49d0-a86a-a8166845d7bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.266 254904 DEBUG nova.network.os_vif_util [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Converting VIF {"id": "4df50564-1d9f-49d0-a86a-a8166845d7bd", "address": "fa:16:3e:07:41:77", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4df50564-1d", "ovs_interfaceid": "4df50564-1d9f-49d0-a86a-a8166845d7bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.267 254904 DEBUG nova.network.os_vif_util [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:41:77,bridge_name='br-int',has_traffic_filtering=True,id=4df50564-1d9f-49d0-a86a-a8166845d7bd,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4df50564-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.268 254904 DEBUG os_vif [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:41:77,bridge_name='br-int',has_traffic_filtering=True,id=4df50564-1d9f-49d0-a86a-a8166845d7bd,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4df50564-1d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.269 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.270 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.270 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.274 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.275 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4df50564-1d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.276 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4df50564-1d, col_values=(('external_ids', {'iface-id': '4df50564-1d9f-49d0-a86a-a8166845d7bd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:41:77', 'vm-uuid': 'c5811f91-4c9d-4c71-81dd-47e6a637fc29'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.278 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:07 np0005542249 NetworkManager[48987]: <info>  [1764675127.2812] manager: (tap4df50564-1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.281 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.286 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.288 254904 INFO os_vif [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:41:77,bridge_name='br-int',has_traffic_filtering=True,id=4df50564-1d9f-49d0-a86a-a8166845d7bd,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4df50564-1d')#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.364 254904 DEBUG nova.virt.libvirt.driver [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.365 254904 DEBUG nova.virt.libvirt.driver [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.365 254904 DEBUG nova.virt.libvirt.driver [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] No VIF found with MAC fa:16:3e:07:41:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.366 254904 INFO nova.virt.libvirt.driver [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Using config drive#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.401 254904 DEBUG nova.storage.rbd_utils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] rbd image c5811f91-4c9d-4c71-81dd-47e6a637fc29_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:32:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:32:07 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3360823482' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:32:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:32:07 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3360823482' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.747 254904 INFO nova.virt.libvirt.driver [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Creating config drive at /var/lib/nova/instances/c5811f91-4c9d-4c71-81dd-47e6a637fc29/disk.config#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.757 254904 DEBUG oslo_concurrency.processutils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c5811f91-4c9d-4c71-81dd-47e6a637fc29/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc94i0b31 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.797 254904 DEBUG nova.network.neutron [req-4810af5a-1684-44e2-abfb-9d1656e22057 req-446d6f66-75df-4245-b309-89a91108a41a 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Updated VIF entry in instance network info cache for port 4df50564-1d9f-49d0-a86a-a8166845d7bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.798 254904 DEBUG nova.network.neutron [req-4810af5a-1684-44e2-abfb-9d1656e22057 req-446d6f66-75df-4245-b309-89a91108a41a 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Updating instance_info_cache with network_info: [{"id": "4df50564-1d9f-49d0-a86a-a8166845d7bd", "address": "fa:16:3e:07:41:77", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4df50564-1d", "ovs_interfaceid": "4df50564-1d9f-49d0-a86a-a8166845d7bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.821 254904 DEBUG oslo_concurrency.lockutils [req-4810af5a-1684-44e2-abfb-9d1656e22057 req-446d6f66-75df-4245-b309-89a91108a41a 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-c5811f91-4c9d-4c71-81dd-47e6a637fc29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.821 254904 DEBUG nova.compute.manager [req-4810af5a-1684-44e2-abfb-9d1656e22057 req-446d6f66-75df-4245-b309-89a91108a41a 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Received event network-vif-deleted-8ac6e5bf-779d-409f-90e8-7ee2dbf72367 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.822 254904 INFO nova.compute.manager [req-4810af5a-1684-44e2-abfb-9d1656e22057 req-446d6f66-75df-4245-b309-89a91108a41a 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Neutron deleted interface 8ac6e5bf-779d-409f-90e8-7ee2dbf72367; detaching it from the instance and deleting it from the info cache#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.822 254904 DEBUG nova.network.neutron [req-4810af5a-1684-44e2-abfb-9d1656e22057 req-446d6f66-75df-4245-b309-89a91108a41a 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.845 254904 DEBUG nova.compute.manager [req-4810af5a-1684-44e2-abfb-9d1656e22057 req-446d6f66-75df-4245-b309-89a91108a41a 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Detach interface failed, port_id=8ac6e5bf-779d-409f-90e8-7ee2dbf72367, reason: Instance 300cc277-5780-4174-88ed-a942194a10b9 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec  2 06:32:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.912 254904 DEBUG oslo_concurrency.processutils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c5811f91-4c9d-4c71-81dd-47e6a637fc29/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc94i0b31" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.953 254904 DEBUG nova.storage.rbd_utils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] rbd image c5811f91-4c9d-4c71-81dd-47e6a637fc29_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  2 06:32:07 np0005542249 nova_compute[254900]: 2025-12-02 11:32:07.959 254904 DEBUG oslo_concurrency.processutils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c5811f91-4c9d-4c71-81dd-47e6a637fc29/disk.config c5811f91-4c9d-4c71-81dd-47e6a637fc29_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 06:32:08 np0005542249 nova_compute[254900]: 2025-12-02 11:32:08.160 254904 DEBUG oslo_concurrency.processutils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c5811f91-4c9d-4c71-81dd-47e6a637fc29/disk.config c5811f91-4c9d-4c71-81dd-47e6a637fc29_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:32:08 np0005542249 nova_compute[254900]: 2025-12-02 11:32:08.162 254904 INFO nova.virt.libvirt.driver [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Deleting local config drive /var/lib/nova/instances/c5811f91-4c9d-4c71-81dd-47e6a637fc29/disk.config because it was imported into RBD.
Dec  2 06:32:08 np0005542249 kernel: tap4df50564-1d: entered promiscuous mode
Dec  2 06:32:08 np0005542249 NetworkManager[48987]: <info>  [1764675128.2508] manager: (tap4df50564-1d): new Tun device (/org/freedesktop/NetworkManager/Devices/132)
Dec  2 06:32:08 np0005542249 ovn_controller[153849]: 2025-12-02T11:32:08Z|00247|binding|INFO|Claiming lport 4df50564-1d9f-49d0-a86a-a8166845d7bd for this chassis.
Dec  2 06:32:08 np0005542249 ovn_controller[153849]: 2025-12-02T11:32:08Z|00248|binding|INFO|4df50564-1d9f-49d0-a86a-a8166845d7bd: Claiming fa:16:3e:07:41:77 10.100.0.13
Dec  2 06:32:08 np0005542249 nova_compute[254900]: 2025-12-02 11:32:08.252 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.272 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:41:77 10.100.0.13'], port_security=['fa:16:3e:07:41:77 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c5811f91-4c9d-4c71-81dd-47e6a637fc29', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a893d0c223f746328e706d7491d73b20', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3d9ac6f0-d677-4372-91b9-169b8de31d1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9a246c4-d9fe-402e-8fa6-6099b55c4866, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=4df50564-1d9f-49d0-a86a-a8166845d7bd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.275 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 4df50564-1d9f-49d0-a86a-a8166845d7bd in datapath 4f9f73cb-9730-4829-ae15-1f03b97e60f8 bound to our chassis
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.278 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4f9f73cb-9730-4829-ae15-1f03b97e60f8
Dec  2 06:32:08 np0005542249 systemd-udevd[292595]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:32:08 np0005542249 ovn_controller[153849]: 2025-12-02T11:32:08Z|00249|binding|INFO|Setting lport 4df50564-1d9f-49d0-a86a-a8166845d7bd up in Southbound
Dec  2 06:32:08 np0005542249 ovn_controller[153849]: 2025-12-02T11:32:08Z|00250|binding|INFO|Setting lport 4df50564-1d9f-49d0-a86a-a8166845d7bd ovn-installed in OVS
Dec  2 06:32:08 np0005542249 nova_compute[254900]: 2025-12-02 11:32:08.293 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.295 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[c1bec9a9-5270-4c69-93ee-21aed3db72bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.296 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4f9f73cb-91 in ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec  2 06:32:08 np0005542249 NetworkManager[48987]: <info>  [1764675128.2965] device (tap4df50564-1d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.298 262398 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4f9f73cb-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec  2 06:32:08 np0005542249 nova_compute[254900]: 2025-12-02 11:32:08.299 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.298 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[1111e406-f8d9-4f50-9c53-ec195bfc6d2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.299 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[71f23bdf-4cbc-4365-a913-02a427bbf3e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 06:32:08 np0005542249 NetworkManager[48987]: <info>  [1764675128.3028] device (tap4df50564-1d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:32:08 np0005542249 systemd-machined[216222]: New machine qemu-27-instance-0000001b.
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.315 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[64d3c1aa-0bfc-4f12-8507-f71a3aa453e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 06:32:08 np0005542249 systemd[1]: Started Virtual Machine qemu-27-instance-0000001b.
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.344 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[a5a11842-a593-4b1d-8209-fefe3c7dea56]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 06:32:08 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1675: 321 pgs: 321 active+clean; 361 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 267 KiB/s rd, 1023 B/s wr, 92 op/s
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.382 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[305b33d5-64dc-4d83-b2f7-b2b9c37eae40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.390 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[53db9fdc-f3f5-4b1a-82fb-752481197881]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 06:32:08 np0005542249 NetworkManager[48987]: <info>  [1764675128.3919] manager: (tap4f9f73cb-90): new Veth device (/org/freedesktop/NetworkManager/Devices/133)
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.434 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[4b584692-0e9e-4e51-b98c-430e0cea00e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.440 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[b5bfc6e6-0e85-4a09-ad15-3bb2c124d067]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 06:32:08 np0005542249 NetworkManager[48987]: <info>  [1764675128.4678] device (tap4f9f73cb-90): carrier: link connected
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.474 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[eaaaffae-0d17-48ff-afd8-c17e59160913]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 06:32:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e404 do_prune osdmap full prune enabled
Dec  2 06:32:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e405 e405: 3 total, 3 up, 3 in
Dec  2 06:32:08 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e405: 3 total, 3 up, 3 in
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.510 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[ab15d3ff-f0fc-4492-b229-6747f51dc990]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f9f73cb-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:ed:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532129, 'reachable_time': 39440, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292631, 'error': None, 'target': 'ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.535 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[e4ec0715-8edb-4471-8930-b73ee05ca550]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:edbf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 532129, 'tstamp': 532129}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292633, 'error': None, 'target': 'ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.562 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[f9b6105c-6675-4cf2-965b-e9aab53a73e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f9f73cb-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:ed:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532129, 'reachable_time': 39440, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292634, 'error': None, 'target': 'ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.618 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[699dc1d8-4278-4deb-9723-06b59dcfb616]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 06:32:08 np0005542249 nova_compute[254900]: 2025-12-02 11:32:08.726 254904 DEBUG nova.compute.manager [req-050dbe6d-45e1-4366-b849-3186a375011e req-5bf1c5ac-cbd7-49f7-a801-e4f0519b5af5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Received event network-vif-plugged-4df50564-1d9f-49d0-a86a-a8166845d7bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  2 06:32:08 np0005542249 nova_compute[254900]: 2025-12-02 11:32:08.727 254904 DEBUG oslo_concurrency.lockutils [req-050dbe6d-45e1-4366-b849-3186a375011e req-5bf1c5ac-cbd7-49f7-a801-e4f0519b5af5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "c5811f91-4c9d-4c71-81dd-47e6a637fc29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:32:08 np0005542249 nova_compute[254900]: 2025-12-02 11:32:08.727 254904 DEBUG oslo_concurrency.lockutils [req-050dbe6d-45e1-4366-b849-3186a375011e req-5bf1c5ac-cbd7-49f7-a801-e4f0519b5af5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "c5811f91-4c9d-4c71-81dd-47e6a637fc29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:32:08 np0005542249 nova_compute[254900]: 2025-12-02 11:32:08.728 254904 DEBUG oslo_concurrency.lockutils [req-050dbe6d-45e1-4366-b849-3186a375011e req-5bf1c5ac-cbd7-49f7-a801-e4f0519b5af5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "c5811f91-4c9d-4c71-81dd-47e6a637fc29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:32:08 np0005542249 nova_compute[254900]: 2025-12-02 11:32:08.728 254904 DEBUG nova.compute.manager [req-050dbe6d-45e1-4366-b849-3186a375011e req-5bf1c5ac-cbd7-49f7-a801-e4f0519b5af5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Processing event network-vif-plugged-4df50564-1d9f-49d0-a86a-a8166845d7bd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.730 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[39ee788c-0ec2-43b0-b0b3-9d7d523c7727]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.732 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f9f73cb-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.732 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.733 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f9f73cb-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  2 06:32:08 np0005542249 kernel: tap4f9f73cb-90: entered promiscuous mode
Dec  2 06:32:08 np0005542249 NetworkManager[48987]: <info>  [1764675128.7365] manager: (tap4f9f73cb-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Dec  2 06:32:08 np0005542249 nova_compute[254900]: 2025-12-02 11:32:08.735 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.740 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4f9f73cb-90, col_values=(('external_ids', {'iface-id': '244504fe-2e21-493b-8e56-0db40be1f53e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  2 06:32:08 np0005542249 ovn_controller[153849]: 2025-12-02T11:32:08Z|00251|binding|INFO|Releasing lport 244504fe-2e21-493b-8e56-0db40be1f53e from this chassis (sb_readonly=0)
Dec  2 06:32:08 np0005542249 nova_compute[254900]: 2025-12-02 11:32:08.741 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:32:08 np0005542249 nova_compute[254900]: 2025-12-02 11:32:08.769 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.772 163757 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4f9f73cb-9730-4829-ae15-1f03b97e60f8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4f9f73cb-9730-4829-ae15-1f03b97e60f8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.773 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[0b8b8d97-7123-4be6-a000-415c7ab2cbaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.774 163757 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: global
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]:    log         /dev/log local0 debug
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]:    log-tag     haproxy-metadata-proxy-4f9f73cb-9730-4829-ae15-1f03b97e60f8
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]:    user        root
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]:    group       root
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]:    maxconn     1024
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]:    pidfile     /var/lib/neutron/external/pids/4f9f73cb-9730-4829-ae15-1f03b97e60f8.pid.haproxy
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]:    daemon
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: defaults
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]:    log global
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]:    mode http
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]:    option httplog
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]:    option dontlognull
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]:    option http-server-close
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]:    option forwardfor
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]:    retries                 3
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]:    timeout http-request    30s
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]:    timeout connect         30s
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]:    timeout client          32s
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]:    timeout server          32s
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]:    timeout http-keep-alive 30s
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: listen listener
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]:    bind 169.254.169.254:80
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]:    http-request add-header X-OVN-Network-ID 4f9f73cb-9730-4829-ae15-1f03b97e60f8
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec  2 06:32:08 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:08.777 163757 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'env', 'PROCESS_TAG=haproxy-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4f9f73cb-9730-4829-ae15-1f03b97e60f8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec  2 06:32:09 np0005542249 podman[292700]: 2025-12-02 11:32:09.238695875 +0000 UTC m=+0.063366920 container create e7022f01b2f0a8a6bd66bf7daad72ad48bb4f1d5d18810d9de1678a26b75f8f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 06:32:09 np0005542249 systemd[1]: Started libpod-conmon-e7022f01b2f0a8a6bd66bf7daad72ad48bb4f1d5d18810d9de1678a26b75f8f3.scope.
Dec  2 06:32:09 np0005542249 podman[292700]: 2025-12-02 11:32:09.205199682 +0000 UTC m=+0.029870757 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:32:09 np0005542249 nova_compute[254900]: 2025-12-02 11:32:09.317 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:09 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:32:09 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ba161d7c6fd0b36c6edc77e285dc302bf7054ecdc259861cbec5541af92c3b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 06:32:09 np0005542249 podman[292700]: 2025-12-02 11:32:09.341471147 +0000 UTC m=+0.166142202 container init e7022f01b2f0a8a6bd66bf7daad72ad48bb4f1d5d18810d9de1678a26b75f8f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  2 06:32:09 np0005542249 podman[292700]: 2025-12-02 11:32:09.347105809 +0000 UTC m=+0.171776844 container start e7022f01b2f0a8a6bd66bf7daad72ad48bb4f1d5d18810d9de1678a26b75f8f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec  2 06:32:09 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[292715]: [NOTICE]   (292719) : New worker (292721) forked
Dec  2 06:32:09 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[292715]: [NOTICE]   (292719) : Loading success.
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.196 254904 DEBUG oslo_concurrency.lockutils [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "64fbd54d-f574-44e6-a788-53938d2219e8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.197 254904 DEBUG oslo_concurrency.lockutils [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "64fbd54d-f574-44e6-a788-53938d2219e8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.197 254904 DEBUG oslo_concurrency.lockutils [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "64fbd54d-f574-44e6-a788-53938d2219e8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.198 254904 DEBUG oslo_concurrency.lockutils [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "64fbd54d-f574-44e6-a788-53938d2219e8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.198 254904 DEBUG oslo_concurrency.lockutils [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "64fbd54d-f574-44e6-a788-53938d2219e8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.201 254904 INFO nova.compute.manager [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Terminating instance#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.203 254904 DEBUG nova.compute.manager [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:32:10 np0005542249 kernel: tap1c8dc20d-b6 (unregistering): left promiscuous mode
Dec  2 06:32:10 np0005542249 NetworkManager[48987]: <info>  [1764675130.2729] device (tap1c8dc20d-b6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:32:10 np0005542249 ovn_controller[153849]: 2025-12-02T11:32:10Z|00252|binding|INFO|Releasing lport 1c8dc20d-b649-4107-a95c-d427a6c8f59a from this chassis (sb_readonly=0)
Dec  2 06:32:10 np0005542249 ovn_controller[153849]: 2025-12-02T11:32:10Z|00253|binding|INFO|Setting lport 1c8dc20d-b649-4107-a95c-d427a6c8f59a down in Southbound
Dec  2 06:32:10 np0005542249 ovn_controller[153849]: 2025-12-02T11:32:10Z|00254|binding|INFO|Removing iface tap1c8dc20d-b6 ovn-installed in OVS
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.287 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.292 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:10.299 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:5e:cc 10.100.0.6'], port_security=['fa:16:3e:e1:5e:cc 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '64fbd54d-f574-44e6-a788-53938d2219e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '625a6939c31646a4a83ea851774cf28c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bf93629e-6336-4a9c-a41d-6ce19e6b6662', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cc823ef0-3e69-4062-a488-a82f483f10bb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=1c8dc20d-b649-4107-a95c-d427a6c8f59a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:32:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:10.301 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 1c8dc20d-b649-4107-a95c-d427a6c8f59a in datapath acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 unbound from our chassis#033[00m
Dec  2 06:32:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:10.305 163757 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 06:32:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:10.309 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[8aad2044-7b27-489a-bae0-96898f868897]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:32:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:10.310 163757 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 namespace which is not needed anymore#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.331 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:10 np0005542249 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000018.scope: Deactivated successfully.
Dec  2 06:32:10 np0005542249 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000018.scope: Consumed 19.403s CPU time.
Dec  2 06:32:10 np0005542249 systemd-machined[216222]: Machine qemu-24-instance-00000018 terminated.
Dec  2 06:32:10 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1677: 321 pgs: 321 active+clean; 352 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 321 KiB/s rd, 1.3 KiB/s wr, 112 op/s
Dec  2 06:32:10 np0005542249 NetworkManager[48987]: <info>  [1764675130.4367] manager: (tap1c8dc20d-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/135)
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.470 254904 INFO nova.virt.libvirt.driver [-] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Instance destroyed successfully.#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.471 254904 DEBUG nova.objects.instance [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lazy-loading 'resources' on Instance uuid 64fbd54d-f574-44e6-a788-53938d2219e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.503 254904 DEBUG nova.virt.libvirt.vif [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:30:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1228569889',display_name='tempest-TestVolumeBootPattern-server-1228569889',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1228569889',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDl9Chitcp+6ZZ9O/so1iQpbrg+ZOVWOrATMsWTbgaWcZg2lFiQK4KEUyaqp5+G/z2wPorJssN622GdMYPRLScxIeivbRrFeE5q310MfETTcDT4f8HB9OmcWcicW5ZF4QA==',key_name='tempest-TestVolumeBootPattern-765108891',keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:30:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='625a6939c31646a4a83ea851774cf28c',ramdisk_id='',reservation_id='r-ltnonuu0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1396850361',owner_user_name='tempest-TestVolumeBootPattern-1396850361-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:30:32Z,user_data=None,user_id='6ccb73a613554d938221b4bf46d7ae83',uuid=64fbd54d-f574-44e6-a788-53938d2219e8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c8dc20d-b649-4107-a95c-d427a6c8f59a", "address": "fa:16:3e:e1:5e:cc", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c8dc20d-b6", "ovs_interfaceid": "1c8dc20d-b649-4107-a95c-d427a6c8f59a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.504 254904 DEBUG nova.network.os_vif_util [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converting VIF {"id": "1c8dc20d-b649-4107-a95c-d427a6c8f59a", "address": "fa:16:3e:e1:5e:cc", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c8dc20d-b6", "ovs_interfaceid": "1c8dc20d-b649-4107-a95c-d427a6c8f59a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.505 254904 DEBUG nova.network.os_vif_util [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e1:5e:cc,bridge_name='br-int',has_traffic_filtering=True,id=1c8dc20d-b649-4107-a95c-d427a6c8f59a,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c8dc20d-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.506 254904 DEBUG os_vif [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:5e:cc,bridge_name='br-int',has_traffic_filtering=True,id=1c8dc20d-b649-4107-a95c-d427a6c8f59a,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c8dc20d-b6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.509 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.510 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c8dc20d-b6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.512 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.516 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.518 254904 INFO os_vif [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:5e:cc,bridge_name='br-int',has_traffic_filtering=True,id=1c8dc20d-b649-4107-a95c-d427a6c8f59a,network=Network(acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c8dc20d-b6')#033[00m
Dec  2 06:32:10 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[290181]: [NOTICE]   (290185) : haproxy version is 2.8.14-c23fe91
Dec  2 06:32:10 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[290181]: [NOTICE]   (290185) : path to executable is /usr/sbin/haproxy
Dec  2 06:32:10 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[290181]: [ALERT]    (290185) : Current worker (290187) exited with code 143 (Terminated)
Dec  2 06:32:10 np0005542249 neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754[290181]: [WARNING]  (290185) : All workers exited. Exiting... (0)
Dec  2 06:32:10 np0005542249 systemd[1]: libpod-9d220dcd5324280330a5da7a7b12b176dc46590d6267ae6eec14ae16f02ed717.scope: Deactivated successfully.
Dec  2 06:32:10 np0005542249 podman[292751]: 2025-12-02 11:32:10.580365701 +0000 UTC m=+0.111140108 container died 9d220dcd5324280330a5da7a7b12b176dc46590d6267ae6eec14ae16f02ed717 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:32:10 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9d220dcd5324280330a5da7a7b12b176dc46590d6267ae6eec14ae16f02ed717-userdata-shm.mount: Deactivated successfully.
Dec  2 06:32:10 np0005542249 systemd[1]: var-lib-containers-storage-overlay-974aa7c9684c4968cf9a078d2a523b381430a175240095e528c6354c116af2e7-merged.mount: Deactivated successfully.
Dec  2 06:32:10 np0005542249 podman[292751]: 2025-12-02 11:32:10.633975857 +0000 UTC m=+0.164750274 container cleanup 9d220dcd5324280330a5da7a7b12b176dc46590d6267ae6eec14ae16f02ed717 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  2 06:32:10 np0005542249 systemd[1]: libpod-conmon-9d220dcd5324280330a5da7a7b12b176dc46590d6267ae6eec14ae16f02ed717.scope: Deactivated successfully.
Dec  2 06:32:10 np0005542249 podman[292807]: 2025-12-02 11:32:10.722409732 +0000 UTC m=+0.061745326 container remove 9d220dcd5324280330a5da7a7b12b176dc46590d6267ae6eec14ae16f02ed717 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:32:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:10.738 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[74417e2a-ea24-4bce-a5a5-990d4074b200]: (4, ('Tue Dec  2 11:32:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 (9d220dcd5324280330a5da7a7b12b176dc46590d6267ae6eec14ae16f02ed717)\n9d220dcd5324280330a5da7a7b12b176dc46590d6267ae6eec14ae16f02ed717\nTue Dec  2 11:32:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 (9d220dcd5324280330a5da7a7b12b176dc46590d6267ae6eec14ae16f02ed717)\n9d220dcd5324280330a5da7a7b12b176dc46590d6267ae6eec14ae16f02ed717\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:32:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:10.741 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[36607466-cbb5-44cc-a9ea-0bac8f3ba066]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:32:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:10.743 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacfaa8ac-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:32:10 np0005542249 kernel: tapacfaa8ac-00: left promiscuous mode
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.745 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.779 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:10.784 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[89a2dbe9-cdcd-4223-8689-788b4983fac6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.798 254904 INFO nova.virt.libvirt.driver [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Deleting instance files /var/lib/nova/instances/64fbd54d-f574-44e6-a788-53938d2219e8_del#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.800 254904 INFO nova.virt.libvirt.driver [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Deletion of /var/lib/nova/instances/64fbd54d-f574-44e6-a788-53938d2219e8_del complete#033[00m
Dec  2 06:32:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:10.802 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[5dc7ae65-7210-45d3-a724-e0044ae0bf71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:32:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:10.803 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[2615276e-3e31-4b00-a2d9-638ad9f9b4e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:32:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:10.829 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[36481edc-a337-43fb-94f9-afd54db51b09]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522312, 'reachable_time': 16158, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292823, 'error': None, 'target': 'ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:32:10 np0005542249 systemd[1]: run-netns-ovnmeta\x2dacfaa8ac\x2d0b3c\x2d4cdd\x2da6b8\x2da70a713ae754.mount: Deactivated successfully.
Dec  2 06:32:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:10.836 164036 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 06:32:10 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:10.836 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[e1a224b3-c9dd-4a90-b835-15d8815033a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.932 254904 DEBUG nova.compute.manager [req-bb427f1c-1680-4905-9de1-14eb67170ba5 req-24eec9ec-7649-4f2f-93d4-1652a71a916d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Received event network-vif-plugged-4df50564-1d9f-49d0-a86a-a8166845d7bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.933 254904 DEBUG oslo_concurrency.lockutils [req-bb427f1c-1680-4905-9de1-14eb67170ba5 req-24eec9ec-7649-4f2f-93d4-1652a71a916d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "c5811f91-4c9d-4c71-81dd-47e6a637fc29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.933 254904 DEBUG oslo_concurrency.lockutils [req-bb427f1c-1680-4905-9de1-14eb67170ba5 req-24eec9ec-7649-4f2f-93d4-1652a71a916d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "c5811f91-4c9d-4c71-81dd-47e6a637fc29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.934 254904 DEBUG oslo_concurrency.lockutils [req-bb427f1c-1680-4905-9de1-14eb67170ba5 req-24eec9ec-7649-4f2f-93d4-1652a71a916d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "c5811f91-4c9d-4c71-81dd-47e6a637fc29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.934 254904 DEBUG nova.compute.manager [req-bb427f1c-1680-4905-9de1-14eb67170ba5 req-24eec9ec-7649-4f2f-93d4-1652a71a916d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] No waiting events found dispatching network-vif-plugged-4df50564-1d9f-49d0-a86a-a8166845d7bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.935 254904 WARNING nova.compute.manager [req-bb427f1c-1680-4905-9de1-14eb67170ba5 req-24eec9ec-7649-4f2f-93d4-1652a71a916d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Received unexpected event network-vif-plugged-4df50564-1d9f-49d0-a86a-a8166845d7bd for instance with vm_state building and task_state spawning.#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.935 254904 DEBUG nova.compute.manager [req-bb427f1c-1680-4905-9de1-14eb67170ba5 req-24eec9ec-7649-4f2f-93d4-1652a71a916d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Received event network-changed-1c8dc20d-b649-4107-a95c-d427a6c8f59a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.935 254904 DEBUG nova.compute.manager [req-bb427f1c-1680-4905-9de1-14eb67170ba5 req-24eec9ec-7649-4f2f-93d4-1652a71a916d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Refreshing instance network info cache due to event network-changed-1c8dc20d-b649-4107-a95c-d427a6c8f59a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.936 254904 DEBUG oslo_concurrency.lockutils [req-bb427f1c-1680-4905-9de1-14eb67170ba5 req-24eec9ec-7649-4f2f-93d4-1652a71a916d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-64fbd54d-f574-44e6-a788-53938d2219e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.936 254904 DEBUG oslo_concurrency.lockutils [req-bb427f1c-1680-4905-9de1-14eb67170ba5 req-24eec9ec-7649-4f2f-93d4-1652a71a916d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-64fbd54d-f574-44e6-a788-53938d2219e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:32:10 np0005542249 nova_compute[254900]: 2025-12-02 11:32:10.936 254904 DEBUG nova.network.neutron [req-bb427f1c-1680-4905-9de1-14eb67170ba5 req-24eec9ec-7649-4f2f-93d4-1652a71a916d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Refreshing network info cache for port 1c8dc20d-b649-4107-a95c-d427a6c8f59a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.003 254904 INFO nova.compute.manager [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.004 254904 DEBUG oslo.service.loopingcall [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.005 254904 DEBUG nova.compute.manager [-] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.005 254904 DEBUG nova.network.neutron [-] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.151 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764675131.1507642, c5811f91-4c9d-4c71-81dd-47e6a637fc29 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.152 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] VM Started (Lifecycle Event)#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.161 254904 DEBUG nova.compute.manager [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.167 254904 DEBUG nova.virt.libvirt.driver [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.171 254904 INFO nova.virt.libvirt.driver [-] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Instance spawned successfully.#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.172 254904 DEBUG nova.virt.libvirt.driver [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.181 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.186 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.200 254904 DEBUG nova.virt.libvirt.driver [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.200 254904 DEBUG nova.virt.libvirt.driver [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.201 254904 DEBUG nova.virt.libvirt.driver [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.201 254904 DEBUG nova.virt.libvirt.driver [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.202 254904 DEBUG nova.virt.libvirt.driver [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.202 254904 DEBUG nova.virt.libvirt.driver [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.208 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.208 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764675131.1509142, c5811f91-4c9d-4c71-81dd-47e6a637fc29 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.209 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] VM Paused (Lifecycle Event)#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.236 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.240 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764675131.1655383, c5811f91-4c9d-4c71-81dd-47e6a637fc29 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.240 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] VM Resumed (Lifecycle Event)#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.259 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.263 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.268 254904 INFO nova.compute.manager [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Took 7.99 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.268 254904 DEBUG nova.compute.manager [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.283 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.337 254904 INFO nova.compute.manager [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Took 10.19 seconds to build instance.#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.360 254904 DEBUG oslo_concurrency.lockutils [None req-ff3c97ee-1c41-4021-b7c2-a7223d16b176 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "c5811f91-4c9d-4c71-81dd-47e6a637fc29" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.332s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.815 254904 DEBUG nova.network.neutron [-] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:32:11 np0005542249 nova_compute[254900]: 2025-12-02 11:32:11.844 254904 INFO nova.compute.manager [-] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Took 0.84 seconds to deallocate network for instance.#033[00m
Dec  2 06:32:12 np0005542249 nova_compute[254900]: 2025-12-02 11:32:12.036 254904 INFO nova.compute.manager [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Took 0.19 seconds to detach 1 volumes for instance.#033[00m
Dec  2 06:32:12 np0005542249 nova_compute[254900]: 2025-12-02 11:32:12.086 254904 DEBUG oslo_concurrency.lockutils [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:32:12 np0005542249 nova_compute[254900]: 2025-12-02 11:32:12.087 254904 DEBUG oslo_concurrency.lockutils [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:32:12 np0005542249 nova_compute[254900]: 2025-12-02 11:32:12.165 254904 DEBUG nova.network.neutron [req-bb427f1c-1680-4905-9de1-14eb67170ba5 req-24eec9ec-7649-4f2f-93d4-1652a71a916d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Updated VIF entry in instance network info cache for port 1c8dc20d-b649-4107-a95c-d427a6c8f59a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:32:12 np0005542249 nova_compute[254900]: 2025-12-02 11:32:12.166 254904 DEBUG nova.network.neutron [req-bb427f1c-1680-4905-9de1-14eb67170ba5 req-24eec9ec-7649-4f2f-93d4-1652a71a916d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Updating instance_info_cache with network_info: [{"id": "1c8dc20d-b649-4107-a95c-d427a6c8f59a", "address": "fa:16:3e:e1:5e:cc", "network": {"id": "acfaa8ac-0b3c-4cdd-a6b8-a70a713ae754", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1957233689-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "625a6939c31646a4a83ea851774cf28c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c8dc20d-b6", "ovs_interfaceid": "1c8dc20d-b649-4107-a95c-d427a6c8f59a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:32:12 np0005542249 nova_compute[254900]: 2025-12-02 11:32:12.169 254904 DEBUG oslo_concurrency.processutils [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:32:12 np0005542249 nova_compute[254900]: 2025-12-02 11:32:12.210 254904 DEBUG oslo_concurrency.lockutils [req-bb427f1c-1680-4905-9de1-14eb67170ba5 req-24eec9ec-7649-4f2f-93d4-1652a71a916d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-64fbd54d-f574-44e6-a788-53938d2219e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:32:12 np0005542249 nova_compute[254900]: 2025-12-02 11:32:12.211 254904 DEBUG nova.compute.manager [req-bb427f1c-1680-4905-9de1-14eb67170ba5 req-24eec9ec-7649-4f2f-93d4-1652a71a916d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Received event network-vif-unplugged-1c8dc20d-b649-4107-a95c-d427a6c8f59a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:32:12 np0005542249 nova_compute[254900]: 2025-12-02 11:32:12.212 254904 DEBUG oslo_concurrency.lockutils [req-bb427f1c-1680-4905-9de1-14eb67170ba5 req-24eec9ec-7649-4f2f-93d4-1652a71a916d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "64fbd54d-f574-44e6-a788-53938d2219e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:32:12 np0005542249 nova_compute[254900]: 2025-12-02 11:32:12.212 254904 DEBUG oslo_concurrency.lockutils [req-bb427f1c-1680-4905-9de1-14eb67170ba5 req-24eec9ec-7649-4f2f-93d4-1652a71a916d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "64fbd54d-f574-44e6-a788-53938d2219e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:32:12 np0005542249 nova_compute[254900]: 2025-12-02 11:32:12.212 254904 DEBUG oslo_concurrency.lockutils [req-bb427f1c-1680-4905-9de1-14eb67170ba5 req-24eec9ec-7649-4f2f-93d4-1652a71a916d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "64fbd54d-f574-44e6-a788-53938d2219e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:32:12 np0005542249 nova_compute[254900]: 2025-12-02 11:32:12.213 254904 DEBUG nova.compute.manager [req-bb427f1c-1680-4905-9de1-14eb67170ba5 req-24eec9ec-7649-4f2f-93d4-1652a71a916d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] No waiting events found dispatching network-vif-unplugged-1c8dc20d-b649-4107-a95c-d427a6c8f59a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:32:12 np0005542249 nova_compute[254900]: 2025-12-02 11:32:12.213 254904 DEBUG nova.compute.manager [req-bb427f1c-1680-4905-9de1-14eb67170ba5 req-24eec9ec-7649-4f2f-93d4-1652a71a916d 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Received event network-vif-unplugged-1c8dc20d-b649-4107-a95c-d427a6c8f59a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 06:32:12 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1678: 321 pgs: 321 active+clean; 352 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 238 KiB/s rd, 17 KiB/s wr, 134 op/s
Dec  2 06:32:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:32:12 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2176420181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:32:12 np0005542249 nova_compute[254900]: 2025-12-02 11:32:12.692 254904 DEBUG oslo_concurrency.processutils [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:32:12 np0005542249 nova_compute[254900]: 2025-12-02 11:32:12.701 254904 DEBUG nova.compute.provider_tree [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:32:12 np0005542249 nova_compute[254900]: 2025-12-02 11:32:12.727 254904 DEBUG nova.scheduler.client.report [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:32:12 np0005542249 nova_compute[254900]: 2025-12-02 11:32:12.751 254904 DEBUG oslo_concurrency.lockutils [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:32:12 np0005542249 nova_compute[254900]: 2025-12-02 11:32:12.784 254904 INFO nova.scheduler.client.report [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Deleted allocations for instance 64fbd54d-f574-44e6-a788-53938d2219e8#033[00m
Dec  2 06:32:12 np0005542249 nova_compute[254900]: 2025-12-02 11:32:12.853 254904 DEBUG oslo_concurrency.lockutils [None req-8a5c1f1a-e28c-4fcb-8f8c-a4e67edbfb9b 6ccb73a613554d938221b4bf46d7ae83 625a6939c31646a4a83ea851774cf28c - - default default] Lock "64fbd54d-f574-44e6-a788-53938d2219e8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:32:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:32:13 np0005542249 nova_compute[254900]: 2025-12-02 11:32:13.047 254904 DEBUG nova.compute.manager [req-1209d030-8ce4-4da0-a802-f9079e3d8d7e req-991ae06d-d153-422b-be91-1548d2e5a1b3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Received event network-vif-plugged-1c8dc20d-b649-4107-a95c-d427a6c8f59a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:32:13 np0005542249 nova_compute[254900]: 2025-12-02 11:32:13.047 254904 DEBUG oslo_concurrency.lockutils [req-1209d030-8ce4-4da0-a802-f9079e3d8d7e req-991ae06d-d153-422b-be91-1548d2e5a1b3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "64fbd54d-f574-44e6-a788-53938d2219e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:32:13 np0005542249 nova_compute[254900]: 2025-12-02 11:32:13.048 254904 DEBUG oslo_concurrency.lockutils [req-1209d030-8ce4-4da0-a802-f9079e3d8d7e req-991ae06d-d153-422b-be91-1548d2e5a1b3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "64fbd54d-f574-44e6-a788-53938d2219e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:32:13 np0005542249 nova_compute[254900]: 2025-12-02 11:32:13.048 254904 DEBUG oslo_concurrency.lockutils [req-1209d030-8ce4-4da0-a802-f9079e3d8d7e req-991ae06d-d153-422b-be91-1548d2e5a1b3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "64fbd54d-f574-44e6-a788-53938d2219e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:32:13 np0005542249 nova_compute[254900]: 2025-12-02 11:32:13.048 254904 DEBUG nova.compute.manager [req-1209d030-8ce4-4da0-a802-f9079e3d8d7e req-991ae06d-d153-422b-be91-1548d2e5a1b3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] No waiting events found dispatching network-vif-plugged-1c8dc20d-b649-4107-a95c-d427a6c8f59a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:32:13 np0005542249 nova_compute[254900]: 2025-12-02 11:32:13.049 254904 WARNING nova.compute.manager [req-1209d030-8ce4-4da0-a802-f9079e3d8d7e req-991ae06d-d153-422b-be91-1548d2e5a1b3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Received unexpected event network-vif-plugged-1c8dc20d-b649-4107-a95c-d427a6c8f59a for instance with vm_state deleted and task_state None.#033[00m
Dec  2 06:32:13 np0005542249 nova_compute[254900]: 2025-12-02 11:32:13.049 254904 DEBUG nova.compute.manager [req-1209d030-8ce4-4da0-a802-f9079e3d8d7e req-991ae06d-d153-422b-be91-1548d2e5a1b3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Received event network-vif-deleted-1c8dc20d-b649-4107-a95c-d427a6c8f59a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:32:13 np0005542249 nova_compute[254900]: 2025-12-02 11:32:13.049 254904 INFO nova.compute.manager [req-1209d030-8ce4-4da0-a802-f9079e3d8d7e req-991ae06d-d153-422b-be91-1548d2e5a1b3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Neutron deleted interface 1c8dc20d-b649-4107-a95c-d427a6c8f59a; detaching it from the instance and deleting it from the info cache#033[00m
Dec  2 06:32:13 np0005542249 nova_compute[254900]: 2025-12-02 11:32:13.050 254904 DEBUG nova.network.neutron [req-1209d030-8ce4-4da0-a802-f9079e3d8d7e req-991ae06d-d153-422b-be91-1548d2e5a1b3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Dec  2 06:32:13 np0005542249 nova_compute[254900]: 2025-12-02 11:32:13.053 254904 DEBUG nova.compute.manager [req-1209d030-8ce4-4da0-a802-f9079e3d8d7e req-991ae06d-d153-422b-be91-1548d2e5a1b3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Detach interface failed, port_id=1c8dc20d-b649-4107-a95c-d427a6c8f59a, reason: Instance 64fbd54d-f574-44e6-a788-53938d2219e8 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec  2 06:32:14 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1679: 321 pgs: 321 active+clean; 352 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 160 op/s
Dec  2 06:32:14 np0005542249 nova_compute[254900]: 2025-12-02 11:32:14.373 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:32:15 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1314113874' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:32:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:32:15 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1314113874' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:32:15 np0005542249 nova_compute[254900]: 2025-12-02 11:32:15.122 254904 DEBUG nova.compute.manager [req-13c0e3a7-85e3-430e-9cb5-fc3baf20ac05 req-4dcd8136-4c5d-4df6-9478-17e39c99929c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Received event network-changed-4df50564-1d9f-49d0-a86a-a8166845d7bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:32:15 np0005542249 nova_compute[254900]: 2025-12-02 11:32:15.122 254904 DEBUG nova.compute.manager [req-13c0e3a7-85e3-430e-9cb5-fc3baf20ac05 req-4dcd8136-4c5d-4df6-9478-17e39c99929c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Refreshing instance network info cache due to event network-changed-4df50564-1d9f-49d0-a86a-a8166845d7bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:32:15 np0005542249 nova_compute[254900]: 2025-12-02 11:32:15.123 254904 DEBUG oslo_concurrency.lockutils [req-13c0e3a7-85e3-430e-9cb5-fc3baf20ac05 req-4dcd8136-4c5d-4df6-9478-17e39c99929c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-c5811f91-4c9d-4c71-81dd-47e6a637fc29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:32:15 np0005542249 nova_compute[254900]: 2025-12-02 11:32:15.123 254904 DEBUG oslo_concurrency.lockutils [req-13c0e3a7-85e3-430e-9cb5-fc3baf20ac05 req-4dcd8136-4c5d-4df6-9478-17e39c99929c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-c5811f91-4c9d-4c71-81dd-47e6a637fc29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:32:15 np0005542249 nova_compute[254900]: 2025-12-02 11:32:15.123 254904 DEBUG nova.network.neutron [req-13c0e3a7-85e3-430e-9cb5-fc3baf20ac05 req-4dcd8136-4c5d-4df6-9478-17e39c99929c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Refreshing network info cache for port 4df50564-1d9f-49d0-a86a-a8166845d7bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:32:15 np0005542249 nova_compute[254900]: 2025-12-02 11:32:15.560 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:16 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1680: 321 pgs: 321 active+clean; 352 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 18 KiB/s wr, 164 op/s
Dec  2 06:32:16 np0005542249 nova_compute[254900]: 2025-12-02 11:32:16.821 254904 DEBUG nova.network.neutron [req-13c0e3a7-85e3-430e-9cb5-fc3baf20ac05 req-4dcd8136-4c5d-4df6-9478-17e39c99929c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Updated VIF entry in instance network info cache for port 4df50564-1d9f-49d0-a86a-a8166845d7bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:32:16 np0005542249 nova_compute[254900]: 2025-12-02 11:32:16.822 254904 DEBUG nova.network.neutron [req-13c0e3a7-85e3-430e-9cb5-fc3baf20ac05 req-4dcd8136-4c5d-4df6-9478-17e39c99929c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Updating instance_info_cache with network_info: [{"id": "4df50564-1d9f-49d0-a86a-a8166845d7bd", "address": "fa:16:3e:07:41:77", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4df50564-1d", "ovs_interfaceid": "4df50564-1d9f-49d0-a86a-a8166845d7bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:32:16 np0005542249 nova_compute[254900]: 2025-12-02 11:32:16.850 254904 DEBUG oslo_concurrency.lockutils [req-13c0e3a7-85e3-430e-9cb5-fc3baf20ac05 req-4dcd8136-4c5d-4df6-9478-17e39c99929c 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-c5811f91-4c9d-4c71-81dd-47e6a637fc29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:32:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:32:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e405 do_prune osdmap full prune enabled
Dec  2 06:32:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e406 e406: 3 total, 3 up, 3 in
Dec  2 06:32:17 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e406: 3 total, 3 up, 3 in
Dec  2 06:32:18 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1682: 321 pgs: 321 active+clean; 300 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 17 KiB/s wr, 141 op/s
Dec  2 06:32:18 np0005542249 nova_compute[254900]: 2025-12-02 11:32:18.740 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764675123.737917, 300cc277-5780-4174-88ed-a942194a10b9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:32:18 np0005542249 nova_compute[254900]: 2025-12-02 11:32:18.741 254904 INFO nova.compute.manager [-] [instance: 300cc277-5780-4174-88ed-a942194a10b9] VM Stopped (Lifecycle Event)#033[00m
Dec  2 06:32:18 np0005542249 nova_compute[254900]: 2025-12-02 11:32:18.774 254904 DEBUG nova.compute.manager [None req-354ae45a-5383-4f14-a338-b912be6eb361 - - - - - -] [instance: 300cc277-5780-4174-88ed-a942194a10b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:32:18 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:18.899 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:23:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:25:d0:a1:75:ed'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:32:18 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:18.900 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 06:32:18 np0005542249 nova_compute[254900]: 2025-12-02 11:32:18.939 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:19 np0005542249 ovn_controller[153849]: 2025-12-02T11:32:19Z|00255|binding|INFO|Releasing lport 244504fe-2e21-493b-8e56-0db40be1f53e from this chassis (sb_readonly=0)
Dec  2 06:32:19 np0005542249 podman[292853]: 2025-12-02 11:32:19.115264557 +0000 UTC m=+0.143526022 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 06:32:19 np0005542249 nova_compute[254900]: 2025-12-02 11:32:19.131 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:19 np0005542249 nova_compute[254900]: 2025-12-02 11:32:19.376 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:19.846 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:32:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:19.850 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:32:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:19.850 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:32:20 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1683: 321 pgs: 321 active+clean; 270 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 17 KiB/s wr, 142 op/s
Dec  2 06:32:20 np0005542249 nova_compute[254900]: 2025-12-02 11:32:20.564 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:22 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1684: 321 pgs: 321 active+clean; 270 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 KiB/s wr, 115 op/s
Dec  2 06:32:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec  2 06:32:22 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  2 06:32:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:32:22 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:32:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:32:22 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:32:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:32:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:32:22 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:32:22 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 5b48c675-0bf5-4372-a876-aae9abfa6cae does not exist
Dec  2 06:32:22 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 43ba0d36-e4cd-43de-84d6-6d754d3eed62 does not exist
Dec  2 06:32:22 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev cbd1c089-3e04-4aa6-aeec-8e742ff71779 does not exist
Dec  2 06:32:22 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:32:22 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:32:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:32:23 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:32:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:32:23 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:32:23 np0005542249 ovn_controller[153849]: 2025-12-02T11:32:23Z|00256|binding|INFO|Releasing lport 244504fe-2e21-493b-8e56-0db40be1f53e from this chassis (sb_readonly=0)
Dec  2 06:32:23 np0005542249 nova_compute[254900]: 2025-12-02 11:32:23.280 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:23 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  2 06:32:23 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:32:23 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:32:23 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:32:23 np0005542249 ovn_controller[153849]: 2025-12-02T11:32:23Z|00064|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.6 does not match offer 10.100.0.13
Dec  2 06:32:23 np0005542249 ovn_controller[153849]: 2025-12-02T11:32:23Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:07:41:77 10.100.0.13
Dec  2 06:32:23 np0005542249 podman[293145]: 2025-12-02 11:32:23.853713208 +0000 UTC m=+0.048615283 container create ec4049d523bb5e1d4b0712c7c58bdc0eea58b6ed3f83429395e64997946c49e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khayyam, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  2 06:32:23 np0005542249 systemd[1]: Started libpod-conmon-ec4049d523bb5e1d4b0712c7c58bdc0eea58b6ed3f83429395e64997946c49e4.scope.
Dec  2 06:32:23 np0005542249 podman[293145]: 2025-12-02 11:32:23.835213869 +0000 UTC m=+0.030115964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:32:23 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:32:23 np0005542249 podman[293145]: 2025-12-02 11:32:23.973866489 +0000 UTC m=+0.168768644 container init ec4049d523bb5e1d4b0712c7c58bdc0eea58b6ed3f83429395e64997946c49e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:32:23 np0005542249 podman[293145]: 2025-12-02 11:32:23.98173045 +0000 UTC m=+0.176632525 container start ec4049d523bb5e1d4b0712c7c58bdc0eea58b6ed3f83429395e64997946c49e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khayyam, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  2 06:32:23 np0005542249 podman[293145]: 2025-12-02 11:32:23.985695697 +0000 UTC m=+0.180597782 container attach ec4049d523bb5e1d4b0712c7c58bdc0eea58b6ed3f83429395e64997946c49e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khayyam, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  2 06:32:23 np0005542249 lucid_khayyam[293161]: 167 167
Dec  2 06:32:23 np0005542249 systemd[1]: libpod-ec4049d523bb5e1d4b0712c7c58bdc0eea58b6ed3f83429395e64997946c49e4.scope: Deactivated successfully.
Dec  2 06:32:23 np0005542249 conmon[293161]: conmon ec4049d523bb5e1d4b07 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ec4049d523bb5e1d4b0712c7c58bdc0eea58b6ed3f83429395e64997946c49e4.scope/container/memory.events
Dec  2 06:32:23 np0005542249 podman[293145]: 2025-12-02 11:32:23.992324196 +0000 UTC m=+0.187226281 container died ec4049d523bb5e1d4b0712c7c58bdc0eea58b6ed3f83429395e64997946c49e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khayyam, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  2 06:32:24 np0005542249 systemd[1]: var-lib-containers-storage-overlay-0bfb58ad94a512668d7db27d05f6bcad94eb94d7c9a6e3dea25043b2939f3e56-merged.mount: Deactivated successfully.
Dec  2 06:32:24 np0005542249 podman[293145]: 2025-12-02 11:32:24.033312702 +0000 UTC m=+0.228214767 container remove ec4049d523bb5e1d4b0712c7c58bdc0eea58b6ed3f83429395e64997946c49e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  2 06:32:24 np0005542249 systemd[1]: libpod-conmon-ec4049d523bb5e1d4b0712c7c58bdc0eea58b6ed3f83429395e64997946c49e4.scope: Deactivated successfully.
Dec  2 06:32:24 np0005542249 podman[293185]: 2025-12-02 11:32:24.254096617 +0000 UTC m=+0.058112929 container create 7fdc71259cbf0bc93a2632db58c07512d77928e4263a99f8f22a022836a31fc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:32:24 np0005542249 systemd[1]: Started libpod-conmon-7fdc71259cbf0bc93a2632db58c07512d77928e4263a99f8f22a022836a31fc7.scope.
Dec  2 06:32:24 np0005542249 podman[293185]: 2025-12-02 11:32:24.220613764 +0000 UTC m=+0.024630036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:32:24 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:32:24 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a434962bff8e723bac7f975a2e46d615f55d1aac6b2c94151c399eb33541170f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:32:24 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a434962bff8e723bac7f975a2e46d615f55d1aac6b2c94151c399eb33541170f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:32:24 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a434962bff8e723bac7f975a2e46d615f55d1aac6b2c94151c399eb33541170f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:32:24 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a434962bff8e723bac7f975a2e46d615f55d1aac6b2c94151c399eb33541170f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:32:24 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a434962bff8e723bac7f975a2e46d615f55d1aac6b2c94151c399eb33541170f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:32:24 np0005542249 podman[293185]: 2025-12-02 11:32:24.359762716 +0000 UTC m=+0.163778988 container init 7fdc71259cbf0bc93a2632db58c07512d77928e4263a99f8f22a022836a31fc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:32:24 np0005542249 podman[293185]: 2025-12-02 11:32:24.366498228 +0000 UTC m=+0.170514500 container start 7fdc71259cbf0bc93a2632db58c07512d77928e4263a99f8f22a022836a31fc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dijkstra, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  2 06:32:24 np0005542249 podman[293185]: 2025-12-02 11:32:24.369676055 +0000 UTC m=+0.173692317 container attach 7fdc71259cbf0bc93a2632db58c07512d77928e4263a99f8f22a022836a31fc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dijkstra, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  2 06:32:24 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1685: 321 pgs: 321 active+clean; 270 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 601 KiB/s rd, 818 B/s wr, 42 op/s
Dec  2 06:32:24 np0005542249 nova_compute[254900]: 2025-12-02 11:32:24.377 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:25 np0005542249 nova_compute[254900]: 2025-12-02 11:32:25.454 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764675130.4518254, 64fbd54d-f574-44e6-a788-53938d2219e8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:32:25 np0005542249 nova_compute[254900]: 2025-12-02 11:32:25.454 254904 INFO nova.compute.manager [-] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] VM Stopped (Lifecycle Event)#033[00m
Dec  2 06:32:25 np0005542249 nova_compute[254900]: 2025-12-02 11:32:25.485 254904 DEBUG nova.compute.manager [None req-3caf363e-bcaf-4154-ace6-00eab27eb069 - - - - - -] [instance: 64fbd54d-f574-44e6-a788-53938d2219e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:32:25 np0005542249 nova_compute[254900]: 2025-12-02 11:32:25.617 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:25 np0005542249 charming_dijkstra[293202]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:32:25 np0005542249 charming_dijkstra[293202]: --> relative data size: 1.0
Dec  2 06:32:25 np0005542249 charming_dijkstra[293202]: --> All data devices are unavailable
Dec  2 06:32:25 np0005542249 systemd[1]: libpod-7fdc71259cbf0bc93a2632db58c07512d77928e4263a99f8f22a022836a31fc7.scope: Deactivated successfully.
Dec  2 06:32:25 np0005542249 podman[293185]: 2025-12-02 11:32:25.691091704 +0000 UTC m=+1.495107986 container died 7fdc71259cbf0bc93a2632db58c07512d77928e4263a99f8f22a022836a31fc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Dec  2 06:32:25 np0005542249 systemd[1]: libpod-7fdc71259cbf0bc93a2632db58c07512d77928e4263a99f8f22a022836a31fc7.scope: Consumed 1.233s CPU time.
Dec  2 06:32:25 np0005542249 systemd[1]: var-lib-containers-storage-overlay-a434962bff8e723bac7f975a2e46d615f55d1aac6b2c94151c399eb33541170f-merged.mount: Deactivated successfully.
Dec  2 06:32:25 np0005542249 podman[293185]: 2025-12-02 11:32:25.762101619 +0000 UTC m=+1.566117891 container remove 7fdc71259cbf0bc93a2632db58c07512d77928e4263a99f8f22a022836a31fc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dijkstra, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:32:25 np0005542249 systemd[1]: libpod-conmon-7fdc71259cbf0bc93a2632db58c07512d77928e4263a99f8f22a022836a31fc7.scope: Deactivated successfully.
Dec  2 06:32:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:25.902 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:32:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:32:26
Dec  2 06:32:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:32:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:32:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'volumes', 'backups']
Dec  2 06:32:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:32:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:32:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:32:26 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1686: 321 pgs: 321 active+clean; 270 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 491 KiB/s rd, 818 B/s wr, 38 op/s
Dec  2 06:32:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:32:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:32:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:32:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:32:26 np0005542249 ovn_controller[153849]: 2025-12-02T11:32:26Z|00066|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.6 does not match offer 10.100.0.13
Dec  2 06:32:26 np0005542249 ovn_controller[153849]: 2025-12-02T11:32:26Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:07:41:77 10.100.0.13
Dec  2 06:32:26 np0005542249 podman[293385]: 2025-12-02 11:32:26.744453275 +0000 UTC m=+0.064552702 container create bfcc9e01586dcd166f18bc14b971da56c9c5b52edaa025b5cb019e67df877ec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kirch, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:32:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:32:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:32:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:32:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:32:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:32:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:32:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:32:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:32:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:32:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:32:26 np0005542249 podman[293385]: 2025-12-02 11:32:26.712987995 +0000 UTC m=+0.033087472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:32:26 np0005542249 systemd[1]: Started libpod-conmon-bfcc9e01586dcd166f18bc14b971da56c9c5b52edaa025b5cb019e67df877ec5.scope.
Dec  2 06:32:26 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:32:26 np0005542249 podman[293385]: 2025-12-02 11:32:26.887651436 +0000 UTC m=+0.207750863 container init bfcc9e01586dcd166f18bc14b971da56c9c5b52edaa025b5cb019e67df877ec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:32:26 np0005542249 podman[293385]: 2025-12-02 11:32:26.89927279 +0000 UTC m=+0.219372207 container start bfcc9e01586dcd166f18bc14b971da56c9c5b52edaa025b5cb019e67df877ec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:32:26 np0005542249 podman[293385]: 2025-12-02 11:32:26.904350197 +0000 UTC m=+0.224449724 container attach bfcc9e01586dcd166f18bc14b971da56c9c5b52edaa025b5cb019e67df877ec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kirch, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:32:26 np0005542249 sharp_kirch[293402]: 167 167
Dec  2 06:32:26 np0005542249 systemd[1]: libpod-bfcc9e01586dcd166f18bc14b971da56c9c5b52edaa025b5cb019e67df877ec5.scope: Deactivated successfully.
Dec  2 06:32:26 np0005542249 podman[293407]: 2025-12-02 11:32:26.987473049 +0000 UTC m=+0.050606196 container died bfcc9e01586dcd166f18bc14b971da56c9c5b52edaa025b5cb019e67df877ec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kirch, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  2 06:32:27 np0005542249 systemd[1]: var-lib-containers-storage-overlay-b353d3e387a653fb338196fbd5d5bfd971214963689d966ee3b7ffc0091c276c-merged.mount: Deactivated successfully.
Dec  2 06:32:27 np0005542249 podman[293407]: 2025-12-02 11:32:27.047378094 +0000 UTC m=+0.110511181 container remove bfcc9e01586dcd166f18bc14b971da56c9c5b52edaa025b5cb019e67df877ec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kirch, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:32:27 np0005542249 systemd[1]: libpod-conmon-bfcc9e01586dcd166f18bc14b971da56c9c5b52edaa025b5cb019e67df877ec5.scope: Deactivated successfully.
Dec  2 06:32:27 np0005542249 podman[293429]: 2025-12-02 11:32:27.366747728 +0000 UTC m=+0.080560243 container create 8bdc1c90d97778aad8c26953822be27bdc32a7bbc1d5a1e06ef5e79ed7f05b28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chaplygin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:32:27 np0005542249 podman[293429]: 2025-12-02 11:32:27.333495981 +0000 UTC m=+0.047308556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:32:27 np0005542249 systemd[1]: Started libpod-conmon-8bdc1c90d97778aad8c26953822be27bdc32a7bbc1d5a1e06ef5e79ed7f05b28.scope.
Dec  2 06:32:27 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:32:27 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f120e7395788d3ea6cd4d94a13f744975fb3c29867fcdbedf3011ce5581247e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:32:27 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f120e7395788d3ea6cd4d94a13f744975fb3c29867fcdbedf3011ce5581247e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:32:27 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f120e7395788d3ea6cd4d94a13f744975fb3c29867fcdbedf3011ce5581247e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:32:27 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f120e7395788d3ea6cd4d94a13f744975fb3c29867fcdbedf3011ce5581247e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:32:27 np0005542249 podman[293429]: 2025-12-02 11:32:27.499696544 +0000 UTC m=+0.213509069 container init 8bdc1c90d97778aad8c26953822be27bdc32a7bbc1d5a1e06ef5e79ed7f05b28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chaplygin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  2 06:32:27 np0005542249 podman[293429]: 2025-12-02 11:32:27.507968247 +0000 UTC m=+0.221780742 container start 8bdc1c90d97778aad8c26953822be27bdc32a7bbc1d5a1e06ef5e79ed7f05b28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  2 06:32:27 np0005542249 podman[293429]: 2025-12-02 11:32:27.512272313 +0000 UTC m=+0.226084798 container attach 8bdc1c90d97778aad8c26953822be27bdc32a7bbc1d5a1e06ef5e79ed7f05b28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:32:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]: {
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:    "0": [
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:        {
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "devices": [
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "/dev/loop3"
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            ],
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "lv_name": "ceph_lv0",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "lv_size": "21470642176",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "name": "ceph_lv0",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "tags": {
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.cluster_name": "ceph",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.crush_device_class": "",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.encrypted": "0",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.osd_id": "0",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.type": "block",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.vdo": "0"
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            },
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "type": "block",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "vg_name": "ceph_vg0"
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:        }
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:    ],
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:    "1": [
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:        {
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "devices": [
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "/dev/loop4"
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            ],
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "lv_name": "ceph_lv1",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "lv_size": "21470642176",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "name": "ceph_lv1",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "tags": {
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.cluster_name": "ceph",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.crush_device_class": "",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.encrypted": "0",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.osd_id": "1",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.type": "block",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.vdo": "0"
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            },
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "type": "block",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "vg_name": "ceph_vg1"
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:        }
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:    ],
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:    "2": [
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:        {
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "devices": [
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "/dev/loop5"
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            ],
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "lv_name": "ceph_lv2",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "lv_size": "21470642176",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "name": "ceph_lv2",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "tags": {
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.cluster_name": "ceph",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.crush_device_class": "",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.encrypted": "0",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.osd_id": "2",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.type": "block",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:                "ceph.vdo": "0"
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            },
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "type": "block",
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:            "vg_name": "ceph_vg2"
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:        }
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]:    ]
Dec  2 06:32:28 np0005542249 pensive_chaplygin[293446]: }
Dec  2 06:32:28 np0005542249 systemd[1]: libpod-8bdc1c90d97778aad8c26953822be27bdc32a7bbc1d5a1e06ef5e79ed7f05b28.scope: Deactivated successfully.
Dec  2 06:32:28 np0005542249 podman[293429]: 2025-12-02 11:32:28.368490077 +0000 UTC m=+1.082302592 container died 8bdc1c90d97778aad8c26953822be27bdc32a7bbc1d5a1e06ef5e79ed7f05b28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chaplygin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:32:28 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1687: 321 pgs: 321 active+clean; 270 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 693 KiB/s rd, 9.0 KiB/s wr, 54 op/s
Dec  2 06:32:28 np0005542249 systemd[1]: var-lib-containers-storage-overlay-f120e7395788d3ea6cd4d94a13f744975fb3c29867fcdbedf3011ce5581247e6-merged.mount: Deactivated successfully.
Dec  2 06:32:28 np0005542249 podman[293429]: 2025-12-02 11:32:28.443851288 +0000 UTC m=+1.157663773 container remove 8bdc1c90d97778aad8c26953822be27bdc32a7bbc1d5a1e06ef5e79ed7f05b28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chaplygin, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:32:28 np0005542249 systemd[1]: libpod-conmon-8bdc1c90d97778aad8c26953822be27bdc32a7bbc1d5a1e06ef5e79ed7f05b28.scope: Deactivated successfully.
Dec  2 06:32:28 np0005542249 ovn_controller[153849]: 2025-12-02T11:32:28Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:07:41:77 10.100.0.13
Dec  2 06:32:28 np0005542249 ovn_controller[153849]: 2025-12-02T11:32:28Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:07:41:77 10.100.0.13
Dec  2 06:32:29 np0005542249 podman[293608]: 2025-12-02 11:32:29.34821725 +0000 UTC m=+0.047853601 container create b7e65b32e95256eb18f68803531719c837a7b7dd5ed87b0475989ece39240a77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  2 06:32:29 np0005542249 nova_compute[254900]: 2025-12-02 11:32:29.379 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:29 np0005542249 systemd[1]: Started libpod-conmon-b7e65b32e95256eb18f68803531719c837a7b7dd5ed87b0475989ece39240a77.scope.
Dec  2 06:32:29 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:32:29 np0005542249 podman[293608]: 2025-12-02 11:32:29.328736795 +0000 UTC m=+0.028373156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:32:29 np0005542249 podman[293608]: 2025-12-02 11:32:29.45056026 +0000 UTC m=+0.150196671 container init b7e65b32e95256eb18f68803531719c837a7b7dd5ed87b0475989ece39240a77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  2 06:32:29 np0005542249 podman[293608]: 2025-12-02 11:32:29.463923381 +0000 UTC m=+0.163559752 container start b7e65b32e95256eb18f68803531719c837a7b7dd5ed87b0475989ece39240a77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  2 06:32:29 np0005542249 podman[293608]: 2025-12-02 11:32:29.470217501 +0000 UTC m=+0.169853872 container attach b7e65b32e95256eb18f68803531719c837a7b7dd5ed87b0475989ece39240a77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:32:29 np0005542249 kind_goodall[293627]: 167 167
Dec  2 06:32:29 np0005542249 systemd[1]: libpod-b7e65b32e95256eb18f68803531719c837a7b7dd5ed87b0475989ece39240a77.scope: Deactivated successfully.
Dec  2 06:32:29 np0005542249 podman[293608]: 2025-12-02 11:32:29.474093095 +0000 UTC m=+0.173729476 container died b7e65b32e95256eb18f68803531719c837a7b7dd5ed87b0475989ece39240a77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  2 06:32:29 np0005542249 systemd[1]: var-lib-containers-storage-overlay-658b3365c7a9ef4a293ef8c2c900e8f91416324701017199d48f542eb4008e5f-merged.mount: Deactivated successfully.
Dec  2 06:32:29 np0005542249 podman[293608]: 2025-12-02 11:32:29.519063758 +0000 UTC m=+0.218700099 container remove b7e65b32e95256eb18f68803531719c837a7b7dd5ed87b0475989ece39240a77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:32:29 np0005542249 podman[293623]: 2025-12-02 11:32:29.528060471 +0000 UTC m=+0.128461956 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  2 06:32:29 np0005542249 systemd[1]: libpod-conmon-b7e65b32e95256eb18f68803531719c837a7b7dd5ed87b0475989ece39240a77.scope: Deactivated successfully.
Dec  2 06:32:29 np0005542249 podman[293626]: 2025-12-02 11:32:29.553048655 +0000 UTC m=+0.153735877 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  2 06:32:29 np0005542249 podman[293696]: 2025-12-02 11:32:29.813779837 +0000 UTC m=+0.074257763 container create 23dddc18740cfaa38e24dad122e34792c41e2433f7edbebd4e00c67bb4041b70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 06:32:29 np0005542249 systemd[1]: Started libpod-conmon-23dddc18740cfaa38e24dad122e34792c41e2433f7edbebd4e00c67bb4041b70.scope.
Dec  2 06:32:29 np0005542249 podman[293696]: 2025-12-02 11:32:29.784536369 +0000 UTC m=+0.045014385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:32:29 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:32:29 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2553457b4041420f2537d79c8873db9cdcb6e5f30940bd921d3b892b00c99cf1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:32:29 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2553457b4041420f2537d79c8873db9cdcb6e5f30940bd921d3b892b00c99cf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:32:29 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2553457b4041420f2537d79c8873db9cdcb6e5f30940bd921d3b892b00c99cf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:32:29 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2553457b4041420f2537d79c8873db9cdcb6e5f30940bd921d3b892b00c99cf1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:32:29 np0005542249 podman[293696]: 2025-12-02 11:32:29.945841849 +0000 UTC m=+0.206319845 container init 23dddc18740cfaa38e24dad122e34792c41e2433f7edbebd4e00c67bb4041b70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermat, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:32:29 np0005542249 podman[293696]: 2025-12-02 11:32:29.960636198 +0000 UTC m=+0.221114154 container start 23dddc18740cfaa38e24dad122e34792c41e2433f7edbebd4e00c67bb4041b70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermat, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:32:29 np0005542249 podman[293696]: 2025-12-02 11:32:29.964647497 +0000 UTC m=+0.225125503 container attach 23dddc18740cfaa38e24dad122e34792c41e2433f7edbebd4e00c67bb4041b70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:32:30 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1688: 321 pgs: 321 active+clean; 270 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 605 KiB/s rd, 7.8 KiB/s wr, 47 op/s
Dec  2 06:32:30 np0005542249 nova_compute[254900]: 2025-12-02 11:32:30.620 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:31 np0005542249 tender_fermat[293712]: {
Dec  2 06:32:31 np0005542249 tender_fermat[293712]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:32:31 np0005542249 tender_fermat[293712]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:32:31 np0005542249 tender_fermat[293712]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:32:31 np0005542249 tender_fermat[293712]:        "osd_id": 0,
Dec  2 06:32:31 np0005542249 tender_fermat[293712]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:32:31 np0005542249 tender_fermat[293712]:        "type": "bluestore"
Dec  2 06:32:31 np0005542249 tender_fermat[293712]:    },
Dec  2 06:32:31 np0005542249 tender_fermat[293712]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:32:31 np0005542249 tender_fermat[293712]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:32:31 np0005542249 tender_fermat[293712]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:32:31 np0005542249 tender_fermat[293712]:        "osd_id": 2,
Dec  2 06:32:31 np0005542249 tender_fermat[293712]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:32:31 np0005542249 tender_fermat[293712]:        "type": "bluestore"
Dec  2 06:32:31 np0005542249 tender_fermat[293712]:    },
Dec  2 06:32:31 np0005542249 tender_fermat[293712]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:32:31 np0005542249 tender_fermat[293712]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:32:31 np0005542249 tender_fermat[293712]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:32:31 np0005542249 tender_fermat[293712]:        "osd_id": 1,
Dec  2 06:32:31 np0005542249 tender_fermat[293712]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:32:31 np0005542249 tender_fermat[293712]:        "type": "bluestore"
Dec  2 06:32:31 np0005542249 tender_fermat[293712]:    }
Dec  2 06:32:31 np0005542249 tender_fermat[293712]: }
Dec  2 06:32:31 np0005542249 systemd[1]: libpod-23dddc18740cfaa38e24dad122e34792c41e2433f7edbebd4e00c67bb4041b70.scope: Deactivated successfully.
Dec  2 06:32:31 np0005542249 podman[293696]: 2025-12-02 11:32:31.141411185 +0000 UTC m=+1.401889131 container died 23dddc18740cfaa38e24dad122e34792c41e2433f7edbebd4e00c67bb4041b70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermat, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  2 06:32:31 np0005542249 systemd[1]: libpod-23dddc18740cfaa38e24dad122e34792c41e2433f7edbebd4e00c67bb4041b70.scope: Consumed 1.191s CPU time.
Dec  2 06:32:31 np0005542249 systemd[1]: var-lib-containers-storage-overlay-2553457b4041420f2537d79c8873db9cdcb6e5f30940bd921d3b892b00c99cf1-merged.mount: Deactivated successfully.
Dec  2 06:32:31 np0005542249 podman[293696]: 2025-12-02 11:32:31.225508073 +0000 UTC m=+1.485986029 container remove 23dddc18740cfaa38e24dad122e34792c41e2433f7edbebd4e00c67bb4041b70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermat, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  2 06:32:31 np0005542249 systemd[1]: libpod-conmon-23dddc18740cfaa38e24dad122e34792c41e2433f7edbebd4e00c67bb4041b70.scope: Deactivated successfully.
Dec  2 06:32:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:32:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:32:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:32:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:32:31 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 271ecc64-20a6-4d14-b12f-63f926909bff does not exist
Dec  2 06:32:31 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 953ac308-471e-4fe2-8474-4e6a39a4fd2f does not exist
Dec  2 06:32:31 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:32:31 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:32:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:32:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3140301661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:32:32 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1689: 321 pgs: 321 active+clean; 270 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 7.6 KiB/s wr, 44 op/s
Dec  2 06:32:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:32:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e406 do_prune osdmap full prune enabled
Dec  2 06:32:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e407 e407: 3 total, 3 up, 3 in
Dec  2 06:32:33 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e407: 3 total, 3 up, 3 in
Dec  2 06:32:34 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1691: 321 pgs: 321 active+clean; 270 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 665 KiB/s rd, 26 KiB/s wr, 58 op/s
Dec  2 06:32:34 np0005542249 nova_compute[254900]: 2025-12-02 11:32:34.382 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e407 do_prune osdmap full prune enabled
Dec  2 06:32:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e408 e408: 3 total, 3 up, 3 in
Dec  2 06:32:34 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e408: 3 total, 3 up, 3 in
Dec  2 06:32:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e408 do_prune osdmap full prune enabled
Dec  2 06:32:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e409 e409: 3 total, 3 up, 3 in
Dec  2 06:32:35 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e409: 3 total, 3 up, 3 in
Dec  2 06:32:35 np0005542249 nova_compute[254900]: 2025-12-02 11:32:35.624 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:32:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:32:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:32:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:32:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Dec  2 06:32:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:32:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0028902612611512667 of space, bias 1.0, pg target 0.86707837834538 quantized to 32 (current 32)
Dec  2 06:32:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:32:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Dec  2 06:32:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:32:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Dec  2 06:32:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:32:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:32:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:32:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:32:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:32:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:32:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:32:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:32:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:32:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:32:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:32:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:32:36 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1694: 321 pgs: 321 active+clean; 270 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 29 KiB/s wr, 17 op/s
Dec  2 06:32:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:32:38 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1695: 321 pgs: 321 active+clean; 270 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 38 KiB/s wr, 62 op/s
Dec  2 06:32:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:32:38 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1362012021' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:32:39 np0005542249 nova_compute[254900]: 2025-12-02 11:32:39.384 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e409 do_prune osdmap full prune enabled
Dec  2 06:32:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e410 e410: 3 total, 3 up, 3 in
Dec  2 06:32:39 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e410: 3 total, 3 up, 3 in
Dec  2 06:32:40 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1697: 321 pgs: 321 active+clean; 270 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 10 KiB/s wr, 54 op/s
Dec  2 06:32:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e410 do_prune osdmap full prune enabled
Dec  2 06:32:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e411 e411: 3 total, 3 up, 3 in
Dec  2 06:32:40 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e411: 3 total, 3 up, 3 in
Dec  2 06:32:40 np0005542249 nova_compute[254900]: 2025-12-02 11:32:40.628 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:42 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1699: 321 pgs: 321 active+clean; 270 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 10 KiB/s wr, 66 op/s
Dec  2 06:32:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e411 do_prune osdmap full prune enabled
Dec  2 06:32:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e412 e412: 3 total, 3 up, 3 in
Dec  2 06:32:42 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e412: 3 total, 3 up, 3 in
Dec  2 06:32:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e412 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:32:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e412 do_prune osdmap full prune enabled
Dec  2 06:32:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e413 e413: 3 total, 3 up, 3 in
Dec  2 06:32:42 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e413: 3 total, 3 up, 3 in
Dec  2 06:32:44 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1702: 321 pgs: 321 active+clean; 270 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 29 KiB/s wr, 83 op/s
Dec  2 06:32:44 np0005542249 nova_compute[254900]: 2025-12-02 11:32:44.386 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:45 np0005542249 nova_compute[254900]: 2025-12-02 11:32:45.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:32:45 np0005542249 nova_compute[254900]: 2025-12-02 11:32:45.382 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 06:32:45 np0005542249 nova_compute[254900]: 2025-12-02 11:32:45.631 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.125 254904 DEBUG oslo_concurrency.lockutils [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "c5811f91-4c9d-4c71-81dd-47e6a637fc29" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.126 254904 DEBUG oslo_concurrency.lockutils [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "c5811f91-4c9d-4c71-81dd-47e6a637fc29" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.126 254904 DEBUG oslo_concurrency.lockutils [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "c5811f91-4c9d-4c71-81dd-47e6a637fc29-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.127 254904 DEBUG oslo_concurrency.lockutils [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "c5811f91-4c9d-4c71-81dd-47e6a637fc29-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.127 254904 DEBUG oslo_concurrency.lockutils [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "c5811f91-4c9d-4c71-81dd-47e6a637fc29-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.130 254904 INFO nova.compute.manager [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Terminating instance#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.132 254904 DEBUG nova.compute.manager [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:32:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:32:46 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3990980880' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:32:46 np0005542249 kernel: tap4df50564-1d (unregistering): left promiscuous mode
Dec  2 06:32:46 np0005542249 NetworkManager[48987]: <info>  [1764675166.2012] device (tap4df50564-1d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.214 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:46 np0005542249 ovn_controller[153849]: 2025-12-02T11:32:46Z|00257|binding|INFO|Releasing lport 4df50564-1d9f-49d0-a86a-a8166845d7bd from this chassis (sb_readonly=0)
Dec  2 06:32:46 np0005542249 ovn_controller[153849]: 2025-12-02T11:32:46Z|00258|binding|INFO|Setting lport 4df50564-1d9f-49d0-a86a-a8166845d7bd down in Southbound
Dec  2 06:32:46 np0005542249 ovn_controller[153849]: 2025-12-02T11:32:46Z|00259|binding|INFO|Removing iface tap4df50564-1d ovn-installed in OVS
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.219 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:46 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:46.223 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:41:77 10.100.0.13'], port_security=['fa:16:3e:07:41:77 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c5811f91-4c9d-4c71-81dd-47e6a637fc29', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a893d0c223f746328e706d7491d73b20', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3d9ac6f0-d677-4372-91b9-169b8de31d1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.195'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9a246c4-d9fe-402e-8fa6-6099b55c4866, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=4df50564-1d9f-49d0-a86a-a8166845d7bd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:32:46 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:46.225 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 4df50564-1d9f-49d0-a86a-a8166845d7bd in datapath 4f9f73cb-9730-4829-ae15-1f03b97e60f8 unbound from our chassis#033[00m
Dec  2 06:32:46 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:46.227 163757 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4f9f73cb-9730-4829-ae15-1f03b97e60f8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 06:32:46 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:46.230 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[294a4c6b-42c0-4458-9c4d-a6bcfdbb9141]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:32:46 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:46.231 163757 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8 namespace which is not needed anymore#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.253 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:46 np0005542249 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Dec  2 06:32:46 np0005542249 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000001b.scope: Consumed 17.211s CPU time.
Dec  2 06:32:46 np0005542249 systemd-machined[216222]: Machine qemu-27-instance-0000001b terminated.
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.368 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.376 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.382 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.382 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.385 254904 INFO nova.virt.libvirt.driver [-] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Instance destroyed successfully.#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.385 254904 DEBUG nova.objects.instance [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lazy-loading 'resources' on Instance uuid c5811f91-4c9d-4c71-81dd-47e6a637fc29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:32:46 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1703: 321 pgs: 321 active+clean; 270 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 24 KiB/s wr, 68 op/s
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.404 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.405 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.406 254904 DEBUG nova.virt.libvirt.vif [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:32:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1199496089',display_name='tempest-TransferEncryptedVolumeTest-server-1199496089',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1199496089',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFgf6Glrr4mOxvEUPVgKAzbMgkblDirrH2khctJvQZz94LiWsbLQ82mzb2Y9Rt3SfEin9Hpb/echGrZ3sf80MCARzFat1FEJ1OH4HmTSACHUm+YJhtZATdTFOe7LkwhRFQ==',key_name='tempest-TransferEncryptedVolumeTest-1310282930',keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:32:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a893d0c223f746328e706d7491d73b20',ramdisk_id='',reservation_id='r-om6k07uf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1499588457',owner_user_name='tempest-TransferEncryptedVolumeTest-1499588457-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:32:11Z,user_data=None,user_id='1caa62e7ee8b42be98bc34780a7197f9',uuid=c5811f91-4c9d-4c71-81dd-47e6a637fc29,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4df50564-1d9f-49d0-a86a-a8166845d7bd", "address": "fa:16:3e:07:41:77", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4df50564-1d", "ovs_interfaceid": "4df50564-1d9f-49d0-a86a-a8166845d7bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.406 254904 DEBUG nova.network.os_vif_util [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Converting VIF {"id": "4df50564-1d9f-49d0-a86a-a8166845d7bd", "address": "fa:16:3e:07:41:77", "network": {"id": "4f9f73cb-9730-4829-ae15-1f03b97e60f8", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1284091372-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a893d0c223f746328e706d7491d73b20", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4df50564-1d", "ovs_interfaceid": "4df50564-1d9f-49d0-a86a-a8166845d7bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.407 254904 DEBUG nova.network.os_vif_util [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:07:41:77,bridge_name='br-int',has_traffic_filtering=True,id=4df50564-1d9f-49d0-a86a-a8166845d7bd,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4df50564-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.407 254904 DEBUG os_vif [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:41:77,bridge_name='br-int',has_traffic_filtering=True,id=4df50564-1d9f-49d0-a86a-a8166845d7bd,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4df50564-1d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.409 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.410 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4df50564-1d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.411 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.413 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.416 254904 INFO os_vif [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:41:77,bridge_name='br-int',has_traffic_filtering=True,id=4df50564-1d9f-49d0-a86a-a8166845d7bd,network=Network(4f9f73cb-9730-4829-ae15-1f03b97e60f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4df50564-1d')#033[00m
Dec  2 06:32:46 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[292715]: [NOTICE]   (292719) : haproxy version is 2.8.14-c23fe91
Dec  2 06:32:46 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[292715]: [NOTICE]   (292719) : path to executable is /usr/sbin/haproxy
Dec  2 06:32:46 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[292715]: [WARNING]  (292719) : Exiting Master process...
Dec  2 06:32:46 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[292715]: [ALERT]    (292719) : Current worker (292721) exited with code 143 (Terminated)
Dec  2 06:32:46 np0005542249 neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8[292715]: [WARNING]  (292719) : All workers exited. Exiting... (0)
Dec  2 06:32:46 np0005542249 systemd[1]: libpod-e7022f01b2f0a8a6bd66bf7daad72ad48bb4f1d5d18810d9de1678a26b75f8f3.scope: Deactivated successfully.
Dec  2 06:32:46 np0005542249 conmon[292715]: conmon e7022f01b2f0a8a6bd66 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e7022f01b2f0a8a6bd66bf7daad72ad48bb4f1d5d18810d9de1678a26b75f8f3.scope/container/memory.events
Dec  2 06:32:46 np0005542249 podman[293835]: 2025-12-02 11:32:46.448356464 +0000 UTC m=+0.070688137 container died e7022f01b2f0a8a6bd66bf7daad72ad48bb4f1d5d18810d9de1678a26b75f8f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:32:46 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e7022f01b2f0a8a6bd66bf7daad72ad48bb4f1d5d18810d9de1678a26b75f8f3-userdata-shm.mount: Deactivated successfully.
Dec  2 06:32:46 np0005542249 systemd[1]: var-lib-containers-storage-overlay-a8ba161d7c6fd0b36c6edc77e285dc302bf7054ecdc259861cbec5541af92c3b-merged.mount: Deactivated successfully.
Dec  2 06:32:46 np0005542249 podman[293835]: 2025-12-02 11:32:46.496568545 +0000 UTC m=+0.118900208 container cleanup e7022f01b2f0a8a6bd66bf7daad72ad48bb4f1d5d18810d9de1678a26b75f8f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:32:46 np0005542249 systemd[1]: libpod-conmon-e7022f01b2f0a8a6bd66bf7daad72ad48bb4f1d5d18810d9de1678a26b75f8f3.scope: Deactivated successfully.
Dec  2 06:32:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e413 do_prune osdmap full prune enabled
Dec  2 06:32:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e414 e414: 3 total, 3 up, 3 in
Dec  2 06:32:46 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e414: 3 total, 3 up, 3 in
Dec  2 06:32:46 np0005542249 podman[293888]: 2025-12-02 11:32:46.570743445 +0000 UTC m=+0.048835388 container remove e7022f01b2f0a8a6bd66bf7daad72ad48bb4f1d5d18810d9de1678a26b75f8f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.569 254904 DEBUG nova.compute.manager [req-3de1e39a-575e-4f45-ab6c-daaab2e39a9c req-b48cb682-2280-45b4-9d59-31283853a3b4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Received event network-vif-unplugged-4df50564-1d9f-49d0-a86a-a8166845d7bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.571 254904 DEBUG oslo_concurrency.lockutils [req-3de1e39a-575e-4f45-ab6c-daaab2e39a9c req-b48cb682-2280-45b4-9d59-31283853a3b4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "c5811f91-4c9d-4c71-81dd-47e6a637fc29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.572 254904 DEBUG oslo_concurrency.lockutils [req-3de1e39a-575e-4f45-ab6c-daaab2e39a9c req-b48cb682-2280-45b4-9d59-31283853a3b4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "c5811f91-4c9d-4c71-81dd-47e6a637fc29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.572 254904 DEBUG oslo_concurrency.lockutils [req-3de1e39a-575e-4f45-ab6c-daaab2e39a9c req-b48cb682-2280-45b4-9d59-31283853a3b4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "c5811f91-4c9d-4c71-81dd-47e6a637fc29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.572 254904 DEBUG nova.compute.manager [req-3de1e39a-575e-4f45-ab6c-daaab2e39a9c req-b48cb682-2280-45b4-9d59-31283853a3b4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] No waiting events found dispatching network-vif-unplugged-4df50564-1d9f-49d0-a86a-a8166845d7bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.572 254904 DEBUG nova.compute.manager [req-3de1e39a-575e-4f45-ab6c-daaab2e39a9c req-b48cb682-2280-45b4-9d59-31283853a3b4 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Received event network-vif-unplugged-4df50564-1d9f-49d0-a86a-a8166845d7bd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 06:32:46 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:46.579 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[37357f75-2be3-4b5a-95f6-bfbfa9cad2e3]: (4, ('Tue Dec  2 11:32:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8 (e7022f01b2f0a8a6bd66bf7daad72ad48bb4f1d5d18810d9de1678a26b75f8f3)\ne7022f01b2f0a8a6bd66bf7daad72ad48bb4f1d5d18810d9de1678a26b75f8f3\nTue Dec  2 11:32:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8 (e7022f01b2f0a8a6bd66bf7daad72ad48bb4f1d5d18810d9de1678a26b75f8f3)\ne7022f01b2f0a8a6bd66bf7daad72ad48bb4f1d5d18810d9de1678a26b75f8f3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:32:46 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:46.581 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[50d81b62-c951-495d-8df4-050b4adcc0ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:32:46 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:46.582 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f9f73cb-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.584 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:46 np0005542249 kernel: tap4f9f73cb-90: left promiscuous mode
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.603 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:46 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:46.609 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[a550a3cd-4fa4-41e9-85d9-7a2ecec16abd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:32:46 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:46.627 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[deea00dd-5989-4cb9-a534-e313491c4617]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:32:46 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:46.629 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[3dcddedf-cba9-46ff-a510-962052c3e66a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:32:46 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:46.655 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[3108d9a3-5032-460e-b12d-c269222c6bb7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532119, 'reachable_time': 32741, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293902, 'error': None, 'target': 'ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.656 254904 INFO nova.virt.libvirt.driver [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Deleting instance files /var/lib/nova/instances/c5811f91-4c9d-4c71-81dd-47e6a637fc29_del#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.657 254904 INFO nova.virt.libvirt.driver [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Deletion of /var/lib/nova/instances/c5811f91-4c9d-4c71-81dd-47e6a637fc29_del complete#033[00m
Dec  2 06:32:46 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:46.658 164036 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4f9f73cb-9730-4829-ae15-1f03b97e60f8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 06:32:46 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:32:46.658 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[6219c5f5-72cf-47dc-9c05-4eede7182d11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:32:46 np0005542249 systemd[1]: run-netns-ovnmeta\x2d4f9f73cb\x2d9730\x2d4829\x2dae15\x2d1f03b97e60f8.mount: Deactivated successfully.
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.715 254904 INFO nova.compute.manager [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Took 0.58 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.716 254904 DEBUG oslo.service.loopingcall [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.716 254904 DEBUG nova.compute.manager [-] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:32:46 np0005542249 nova_compute[254900]: 2025-12-02 11:32:46.717 254904 DEBUG nova.network.neutron [-] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:32:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e414 do_prune osdmap full prune enabled
Dec  2 06:32:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e415 e415: 3 total, 3 up, 3 in
Dec  2 06:32:47 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e415: 3 total, 3 up, 3 in
Dec  2 06:32:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:32:47 np0005542249 nova_compute[254900]: 2025-12-02 11:32:47.995 254904 DEBUG nova.network.neutron [-] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:32:48 np0005542249 nova_compute[254900]: 2025-12-02 11:32:48.016 254904 INFO nova.compute.manager [-] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Took 1.30 seconds to deallocate network for instance.#033[00m
Dec  2 06:32:48 np0005542249 nova_compute[254900]: 2025-12-02 11:32:48.214 254904 INFO nova.compute.manager [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Took 0.20 seconds to detach 1 volumes for instance.#033[00m
Dec  2 06:32:48 np0005542249 nova_compute[254900]: 2025-12-02 11:32:48.264 254904 DEBUG oslo_concurrency.lockutils [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:32:48 np0005542249 nova_compute[254900]: 2025-12-02 11:32:48.265 254904 DEBUG oslo_concurrency.lockutils [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:32:48 np0005542249 nova_compute[254900]: 2025-12-02 11:32:48.329 254904 DEBUG oslo_concurrency.processutils [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:32:48 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1706: 321 pgs: 321 active+clean; 270 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 194 KiB/s rd, 26 KiB/s wr, 110 op/s
Dec  2 06:32:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:32:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1872252950' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:32:48 np0005542249 nova_compute[254900]: 2025-12-02 11:32:48.654 254904 DEBUG nova.compute.manager [req-e0405633-8e11-45c7-8b2b-f68e86666aec req-2195f185-0e8d-4bd3-9dd5-0ea15e19c3fd 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Received event network-vif-plugged-4df50564-1d9f-49d0-a86a-a8166845d7bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:32:48 np0005542249 nova_compute[254900]: 2025-12-02 11:32:48.655 254904 DEBUG oslo_concurrency.lockutils [req-e0405633-8e11-45c7-8b2b-f68e86666aec req-2195f185-0e8d-4bd3-9dd5-0ea15e19c3fd 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "c5811f91-4c9d-4c71-81dd-47e6a637fc29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:32:48 np0005542249 nova_compute[254900]: 2025-12-02 11:32:48.656 254904 DEBUG oslo_concurrency.lockutils [req-e0405633-8e11-45c7-8b2b-f68e86666aec req-2195f185-0e8d-4bd3-9dd5-0ea15e19c3fd 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "c5811f91-4c9d-4c71-81dd-47e6a637fc29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:32:48 np0005542249 nova_compute[254900]: 2025-12-02 11:32:48.656 254904 DEBUG oslo_concurrency.lockutils [req-e0405633-8e11-45c7-8b2b-f68e86666aec req-2195f185-0e8d-4bd3-9dd5-0ea15e19c3fd 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "c5811f91-4c9d-4c71-81dd-47e6a637fc29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:32:48 np0005542249 nova_compute[254900]: 2025-12-02 11:32:48.657 254904 DEBUG nova.compute.manager [req-e0405633-8e11-45c7-8b2b-f68e86666aec req-2195f185-0e8d-4bd3-9dd5-0ea15e19c3fd 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] No waiting events found dispatching network-vif-plugged-4df50564-1d9f-49d0-a86a-a8166845d7bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:32:48 np0005542249 nova_compute[254900]: 2025-12-02 11:32:48.657 254904 WARNING nova.compute.manager [req-e0405633-8e11-45c7-8b2b-f68e86666aec req-2195f185-0e8d-4bd3-9dd5-0ea15e19c3fd 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Received unexpected event network-vif-plugged-4df50564-1d9f-49d0-a86a-a8166845d7bd for instance with vm_state deleted and task_state None.#033[00m
Dec  2 06:32:48 np0005542249 nova_compute[254900]: 2025-12-02 11:32:48.658 254904 DEBUG nova.compute.manager [req-e0405633-8e11-45c7-8b2b-f68e86666aec req-2195f185-0e8d-4bd3-9dd5-0ea15e19c3fd 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Received event network-vif-deleted-4df50564-1d9f-49d0-a86a-a8166845d7bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:32:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:32:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1129056508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:32:48 np0005542249 nova_compute[254900]: 2025-12-02 11:32:48.826 254904 DEBUG oslo_concurrency.processutils [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:32:48 np0005542249 nova_compute[254900]: 2025-12-02 11:32:48.836 254904 DEBUG nova.compute.provider_tree [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:32:48 np0005542249 nova_compute[254900]: 2025-12-02 11:32:48.855 254904 DEBUG nova.scheduler.client.report [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:32:48 np0005542249 nova_compute[254900]: 2025-12-02 11:32:48.879 254904 DEBUG oslo_concurrency.lockutils [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:32:48 np0005542249 nova_compute[254900]: 2025-12-02 11:32:48.907 254904 INFO nova.scheduler.client.report [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Deleted allocations for instance c5811f91-4c9d-4c71-81dd-47e6a637fc29#033[00m
Dec  2 06:32:48 np0005542249 nova_compute[254900]: 2025-12-02 11:32:48.981 254904 DEBUG oslo_concurrency.lockutils [None req-b5daf12e-ec92-4b15-9488-52282d3bc245 1caa62e7ee8b42be98bc34780a7197f9 a893d0c223f746328e706d7491d73b20 - - default default] Lock "c5811f91-4c9d-4c71-81dd-47e6a637fc29" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:32:49 np0005542249 nova_compute[254900]: 2025-12-02 11:32:49.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:32:49 np0005542249 nova_compute[254900]: 2025-12-02 11:32:49.388 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e415 do_prune osdmap full prune enabled
Dec  2 06:32:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e416 e416: 3 total, 3 up, 3 in
Dec  2 06:32:49 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e416: 3 total, 3 up, 3 in
Dec  2 06:32:50 np0005542249 podman[293926]: 2025-12-02 11:32:50.040646542 +0000 UTC m=+0.117165721 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:32:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:32:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3551171722' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:32:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:32:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3551171722' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:32:50 np0005542249 nova_compute[254900]: 2025-12-02 11:32:50.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:32:50 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1708: 321 pgs: 321 active+clean; 270 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 461 KiB/s rd, 3.5 KiB/s wr, 68 op/s
Dec  2 06:32:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e416 do_prune osdmap full prune enabled
Dec  2 06:32:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e417 e417: 3 total, 3 up, 3 in
Dec  2 06:32:50 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e417: 3 total, 3 up, 3 in
Dec  2 06:32:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:32:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/23531857' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:32:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:32:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/23531857' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:32:51 np0005542249 nova_compute[254900]: 2025-12-02 11:32:51.414 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:32:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3597056238' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:32:52 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1710: 321 pgs: 321 active+clean; 217 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 486 KiB/s rd, 5.8 KiB/s wr, 94 op/s
Dec  2 06:32:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e417 do_prune osdmap full prune enabled
Dec  2 06:32:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e418 e418: 3 total, 3 up, 3 in
Dec  2 06:32:52 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e418: 3 total, 3 up, 3 in
Dec  2 06:32:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:32:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e418 do_prune osdmap full prune enabled
Dec  2 06:32:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e419 e419: 3 total, 3 up, 3 in
Dec  2 06:32:52 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e419: 3 total, 3 up, 3 in
Dec  2 06:32:54 np0005542249 nova_compute[254900]: 2025-12-02 11:32:54.378 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:32:54 np0005542249 nova_compute[254900]: 2025-12-02 11:32:54.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:32:54 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1713: 321 pgs: 321 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 5.4 KiB/s wr, 127 op/s
Dec  2 06:32:54 np0005542249 nova_compute[254900]: 2025-12-02 11:32:54.390 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:54 np0005542249 nova_compute[254900]: 2025-12-02 11:32:54.412 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:32:54 np0005542249 nova_compute[254900]: 2025-12-02 11:32:54.413 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:32:54 np0005542249 nova_compute[254900]: 2025-12-02 11:32:54.413 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:32:54 np0005542249 nova_compute[254900]: 2025-12-02 11:32:54.413 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:32:54 np0005542249 nova_compute[254900]: 2025-12-02 11:32:54.414 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:32:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:32:54 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4271555916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:32:54 np0005542249 nova_compute[254900]: 2025-12-02 11:32:54.912 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:32:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:32:55 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3307481288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:32:55 np0005542249 nova_compute[254900]: 2025-12-02 11:32:55.137 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:32:55 np0005542249 nova_compute[254900]: 2025-12-02 11:32:55.138 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4379MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:32:55 np0005542249 nova_compute[254900]: 2025-12-02 11:32:55.138 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:32:55 np0005542249 nova_compute[254900]: 2025-12-02 11:32:55.139 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:32:55 np0005542249 nova_compute[254900]: 2025-12-02 11:32:55.207 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:32:55 np0005542249 nova_compute[254900]: 2025-12-02 11:32:55.208 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:32:55 np0005542249 nova_compute[254900]: 2025-12-02 11:32:55.222 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:32:55 np0005542249 nova_compute[254900]: 2025-12-02 11:32:55.242 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:55 np0005542249 nova_compute[254900]: 2025-12-02 11:32:55.460 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e419 do_prune osdmap full prune enabled
Dec  2 06:32:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:32:55 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/905170387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:32:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e420 e420: 3 total, 3 up, 3 in
Dec  2 06:32:55 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e420: 3 total, 3 up, 3 in
Dec  2 06:32:55 np0005542249 nova_compute[254900]: 2025-12-02 11:32:55.654 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:32:55 np0005542249 nova_compute[254900]: 2025-12-02 11:32:55.660 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:32:55 np0005542249 nova_compute[254900]: 2025-12-02 11:32:55.678 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:32:55 np0005542249 nova_compute[254900]: 2025-12-02 11:32:55.699 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 06:32:55 np0005542249 nova_compute[254900]: 2025-12-02 11:32:55.699 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:32:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:32:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:32:56 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1715: 321 pgs: 321 active+clean; 88 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 4.5 KiB/s wr, 105 op/s
Dec  2 06:32:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:32:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:32:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:32:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:32:56 np0005542249 nova_compute[254900]: 2025-12-02 11:32:56.416 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:32:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e420 do_prune osdmap full prune enabled
Dec  2 06:32:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e421 e421: 3 total, 3 up, 3 in
Dec  2 06:32:56 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e421: 3 total, 3 up, 3 in
Dec  2 06:32:56 np0005542249 nova_compute[254900]: 2025-12-02 11:32:56.701 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:32:57 np0005542249 nova_compute[254900]: 2025-12-02 11:32:57.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:32:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:32:58 np0005542249 nova_compute[254900]: 2025-12-02 11:32:58.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:32:58 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1717: 321 pgs: 321 active+clean; 88 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 3.8 KiB/s wr, 141 op/s
Dec  2 06:32:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e421 do_prune osdmap full prune enabled
Dec  2 06:32:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e422 e422: 3 total, 3 up, 3 in
Dec  2 06:32:58 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e422: 3 total, 3 up, 3 in
Dec  2 06:32:59 np0005542249 nova_compute[254900]: 2025-12-02 11:32:59.424 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:00 np0005542249 podman[293993]: 2025-12-02 11:33:00.023085462 +0000 UTC m=+0.100204154 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  2 06:33:00 np0005542249 podman[293994]: 2025-12-02 11:33:00.061844876 +0000 UTC m=+0.134492988 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 06:33:00 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1719: 321 pgs: 321 active+clean; 88 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 KiB/s wr, 57 op/s
Dec  2 06:33:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e422 do_prune osdmap full prune enabled
Dec  2 06:33:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e423 e423: 3 total, 3 up, 3 in
Dec  2 06:33:00 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e423: 3 total, 3 up, 3 in
Dec  2 06:33:01 np0005542249 nova_compute[254900]: 2025-12-02 11:33:01.382 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764675166.3805652, c5811f91-4c9d-4c71-81dd-47e6a637fc29 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:33:01 np0005542249 nova_compute[254900]: 2025-12-02 11:33:01.382 254904 INFO nova.compute.manager [-] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] VM Stopped (Lifecycle Event)#033[00m
Dec  2 06:33:01 np0005542249 nova_compute[254900]: 2025-12-02 11:33:01.404 254904 DEBUG nova.compute.manager [None req-ae0d22ca-7e2f-488a-98db-7dc55252cd44 - - - - - -] [instance: c5811f91-4c9d-4c71-81dd-47e6a637fc29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:33:01 np0005542249 nova_compute[254900]: 2025-12-02 11:33:01.420 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e423 do_prune osdmap full prune enabled
Dec  2 06:33:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e424 e424: 3 total, 3 up, 3 in
Dec  2 06:33:01 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e424: 3 total, 3 up, 3 in
Dec  2 06:33:02 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1722: 321 pgs: 321 active+clean; 88 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 3.3 KiB/s wr, 104 op/s
Dec  2 06:33:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e424 do_prune osdmap full prune enabled
Dec  2 06:33:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e425 e425: 3 total, 3 up, 3 in
Dec  2 06:33:02 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e425: 3 total, 3 up, 3 in
Dec  2 06:33:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:33:04 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1724: 321 pgs: 321 active+clean; 88 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 3.7 KiB/s wr, 81 op/s
Dec  2 06:33:04 np0005542249 nova_compute[254900]: 2025-12-02 11:33:04.427 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e425 do_prune osdmap full prune enabled
Dec  2 06:33:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e426 e426: 3 total, 3 up, 3 in
Dec  2 06:33:04 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e426: 3 total, 3 up, 3 in
Dec  2 06:33:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e426 do_prune osdmap full prune enabled
Dec  2 06:33:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e427 e427: 3 total, 3 up, 3 in
Dec  2 06:33:05 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e427: 3 total, 3 up, 3 in
Dec  2 06:33:06 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1727: 321 pgs: 321 active+clean; 88 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 4.1 KiB/s wr, 97 op/s
Dec  2 06:33:06 np0005542249 nova_compute[254900]: 2025-12-02 11:33:06.423 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e427 do_prune osdmap full prune enabled
Dec  2 06:33:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e428 e428: 3 total, 3 up, 3 in
Dec  2 06:33:06 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e428: 3 total, 3 up, 3 in
Dec  2 06:33:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e428 do_prune osdmap full prune enabled
Dec  2 06:33:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e429 e429: 3 total, 3 up, 3 in
Dec  2 06:33:07 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e429: 3 total, 3 up, 3 in
Dec  2 06:33:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:33:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e429 do_prune osdmap full prune enabled
Dec  2 06:33:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e430 e430: 3 total, 3 up, 3 in
Dec  2 06:33:07 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e430: 3 total, 3 up, 3 in
Dec  2 06:33:08 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1731: 321 pgs: 321 active+clean; 88 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 5.0 KiB/s wr, 113 op/s
Dec  2 06:33:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:33:09 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3851014667' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:33:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:33:09 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3851014667' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:33:09 np0005542249 nova_compute[254900]: 2025-12-02 11:33:09.461 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:10 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1732: 321 pgs: 321 active+clean; 88 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 3.9 KiB/s wr, 89 op/s
Dec  2 06:33:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e430 do_prune osdmap full prune enabled
Dec  2 06:33:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e431 e431: 3 total, 3 up, 3 in
Dec  2 06:33:11 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e431: 3 total, 3 up, 3 in
Dec  2 06:33:11 np0005542249 nova_compute[254900]: 2025-12-02 11:33:11.427 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:33:11 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3146179226' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:33:11 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:33:11 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3146179226' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:33:12 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1734: 321 pgs: 321 active+clean; 88 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 4.0 KiB/s wr, 97 op/s
Dec  2 06:33:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:33:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e431 do_prune osdmap full prune enabled
Dec  2 06:33:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e432 e432: 3 total, 3 up, 3 in
Dec  2 06:33:12 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e432: 3 total, 3 up, 3 in
Dec  2 06:33:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:33:13 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1076031483' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:33:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:33:13 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1076031483' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:33:14 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1736: 321 pgs: 321 active+clean; 88 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 2.3 KiB/s wr, 86 op/s
Dec  2 06:33:14 np0005542249 nova_compute[254900]: 2025-12-02 11:33:14.464 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:16 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1737: 321 pgs: 321 active+clean; 88 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 1.9 KiB/s wr, 68 op/s
Dec  2 06:33:16 np0005542249 nova_compute[254900]: 2025-12-02 11:33:16.432 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:33:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e432 do_prune osdmap full prune enabled
Dec  2 06:33:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e433 e433: 3 total, 3 up, 3 in
Dec  2 06:33:18 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e433: 3 total, 3 up, 3 in
Dec  2 06:33:18 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1739: 321 pgs: 321 active+clean; 88 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 2.5 KiB/s wr, 82 op/s
Dec  2 06:33:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:19.173 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:23:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:25:d0:a1:75:ed'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:33:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:19.173 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 06:33:19 np0005542249 nova_compute[254900]: 2025-12-02 11:33:19.212 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:19 np0005542249 nova_compute[254900]: 2025-12-02 11:33:19.468 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:19.847 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:33:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:19.847 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:33:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:19.847 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:33:20 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1740: 321 pgs: 321 active+clean; 88 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1.9 KiB/s wr, 47 op/s
Dec  2 06:33:21 np0005542249 podman[294051]: 2025-12-02 11:33:21.000771928 +0000 UTC m=+0.060645157 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Dec  2 06:33:21 np0005542249 nova_compute[254900]: 2025-12-02 11:33:21.437 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e433 do_prune osdmap full prune enabled
Dec  2 06:33:21 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e434 e434: 3 total, 3 up, 3 in
Dec  2 06:33:21 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e434: 3 total, 3 up, 3 in
Dec  2 06:33:22 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:22.176 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:33:22 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1742: 321 pgs: 321 active+clean; 88 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.4 KiB/s wr, 16 op/s
Dec  2 06:33:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:33:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e434 do_prune osdmap full prune enabled
Dec  2 06:33:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e435 e435: 3 total, 3 up, 3 in
Dec  2 06:33:23 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e435: 3 total, 3 up, 3 in
Dec  2 06:33:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:33:24 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1860672321' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:33:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:33:24 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1860672321' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:33:24 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1744: 321 pgs: 321 active+clean; 88 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.6 KiB/s wr, 37 op/s
Dec  2 06:33:24 np0005542249 nova_compute[254900]: 2025-12-02 11:33:24.537 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:33:25 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/594028610' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:33:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:33:25 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/594028610' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:33:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:33:26
Dec  2 06:33:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:33:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:33:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'volumes', 'vms', 'images', '.rgw.root', 'default.rgw.meta', 'default.rgw.log']
Dec  2 06:33:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:33:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:33:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:33:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:33:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:33:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:33:26 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1745: 321 pgs: 321 active+clean; 88 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 KiB/s wr, 28 op/s
Dec  2 06:33:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:33:26 np0005542249 nova_compute[254900]: 2025-12-02 11:33:26.441 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:33:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:33:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:33:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:33:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:33:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:33:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:33:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:33:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:33:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:33:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:33:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3351719796' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:33:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:33:26 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3351719796' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:33:28.218291) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675208218630, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1443, "num_deletes": 270, "total_data_size": 1887985, "memory_usage": 1935952, "flush_reason": "Manual Compaction"}
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675208233497, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1862688, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34199, "largest_seqno": 35641, "table_properties": {"data_size": 1855817, "index_size": 3946, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15245, "raw_average_key_size": 20, "raw_value_size": 1841718, "raw_average_value_size": 2505, "num_data_blocks": 173, "num_entries": 735, "num_filter_entries": 735, "num_deletions": 270, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764675117, "oldest_key_time": 1764675117, "file_creation_time": 1764675208, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 15472 microseconds, and 7489 cpu microseconds.
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:33:28.233751) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1862688 bytes OK
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:33:28.233847) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:33:28.235934) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:33:28.235961) EVENT_LOG_v1 {"time_micros": 1764675208235951, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:33:28.235996) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1881274, prev total WAL file size 1881274, number of live WAL files 2.
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:33:28.238962) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303034' seq:72057594037927935, type:22 .. '6C6F676D0031323536' seq:0, type:0; will stop at (end)
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1819KB)], [71(8610KB)]
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675208239112, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 10680335, "oldest_snapshot_seqno": -1}
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6513 keys, 10528302 bytes, temperature: kUnknown
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675208326639, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 10528302, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10479378, "index_size": 31534, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 164671, "raw_average_key_size": 25, "raw_value_size": 10357085, "raw_average_value_size": 1590, "num_data_blocks": 1264, "num_entries": 6513, "num_filter_entries": 6513, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672515, "oldest_key_time": 0, "file_creation_time": 1764675208, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:33:28.326991) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 10528302 bytes
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:33:28.328977) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 121.8 rd, 120.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 8.4 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(11.4) write-amplify(5.7) OK, records in: 7057, records dropped: 544 output_compression: NoCompression
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:33:28.329075) EVENT_LOG_v1 {"time_micros": 1764675208329054, "job": 40, "event": "compaction_finished", "compaction_time_micros": 87670, "compaction_time_cpu_micros": 50096, "output_level": 6, "num_output_files": 1, "total_output_size": 10528302, "num_input_records": 7057, "num_output_records": 6513, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675208329963, "job": 40, "event": "table_file_deletion", "file_number": 73}
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675208333543, "job": 40, "event": "table_file_deletion", "file_number": 71}
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:33:28.238000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:33:28.333585) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:33:28.333590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:33:28.333593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:33:28.333595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:33:28.333597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:33:28 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1746: 321 pgs: 321 active+clean; 88 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 4.9 KiB/s wr, 63 op/s
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3550234607' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:33:28 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3550234607' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:33:29 np0005542249 nova_compute[254900]: 2025-12-02 11:33:29.570 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:33:29 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/603640349' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:33:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:33:29 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/603640349' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:33:30 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1747: 321 pgs: 321 active+clean; 88 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 4.4 KiB/s wr, 85 op/s
Dec  2 06:33:31 np0005542249 podman[294074]: 2025-12-02 11:33:31.014875641 +0000 UTC m=+0.084833039 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:33:31 np0005542249 podman[294075]: 2025-12-02 11:33:31.076208075 +0000 UTC m=+0.143084740 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 06:33:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e435 do_prune osdmap full prune enabled
Dec  2 06:33:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e436 e436: 3 total, 3 up, 3 in
Dec  2 06:33:31 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e436: 3 total, 3 up, 3 in
Dec  2 06:33:31 np0005542249 nova_compute[254900]: 2025-12-02 11:33:31.444 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:33:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2972334420' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:33:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:33:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2972334420' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:33:32 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1749: 321 pgs: 321 active+clean; 88 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 3.9 KiB/s wr, 117 op/s
Dec  2 06:33:32 np0005542249 podman[294293]: 2025-12-02 11:33:32.716518786 +0000 UTC m=+0.099876175 container exec cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:33:32 np0005542249 podman[294293]: 2025-12-02 11:33:32.841164138 +0000 UTC m=+0.224521527 container exec_died cfead6f8cdae3fb33ff10b470724c55f63ec4997c8e0a95beaf5732ac7b8da1b (image=quay.io/ceph/ceph:v18, name=ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 06:33:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:33:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:33:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:33:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:33:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:33:34 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1750: 321 pgs: 321 active+clean; 88 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 30 KiB/s wr, 103 op/s
Dec  2 06:33:34 np0005542249 nova_compute[254900]: 2025-12-02 11:33:34.574 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:33:34 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:33:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:33:34 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:33:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:33:34 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:33:34 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev b0db6089-622f-461e-b75e-2d11cf135244 does not exist
Dec  2 06:33:34 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 80726295-5519-464f-b2fe-2da87c86111c does not exist
Dec  2 06:33:34 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 51cc9b7a-6f0d-4b89-8e22-b63113a1c054 does not exist
Dec  2 06:33:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:33:34 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:33:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:33:34 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:33:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:33:34 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:33:34 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:33:34 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:33:34 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:33:34 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:33:34 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:33:35 np0005542249 podman[294718]: 2025-12-02 11:33:35.677045935 +0000 UTC m=+0.129937606 container create 6170fb0c18215a009542e361f2d68d7feaeb98d7f1dc07a76f9da8d1d51c58a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  2 06:33:35 np0005542249 podman[294718]: 2025-12-02 11:33:35.590382617 +0000 UTC m=+0.043274348 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:33:35 np0005542249 systemd[1]: Started libpod-conmon-6170fb0c18215a009542e361f2d68d7feaeb98d7f1dc07a76f9da8d1d51c58a8.scope.
Dec  2 06:33:35 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:33:35 np0005542249 podman[294718]: 2025-12-02 11:33:35.788678116 +0000 UTC m=+0.241569807 container init 6170fb0c18215a009542e361f2d68d7feaeb98d7f1dc07a76f9da8d1d51c58a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  2 06:33:35 np0005542249 podman[294718]: 2025-12-02 11:33:35.80219381 +0000 UTC m=+0.255085481 container start 6170fb0c18215a009542e361f2d68d7feaeb98d7f1dc07a76f9da8d1d51c58a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  2 06:33:35 np0005542249 podman[294718]: 2025-12-02 11:33:35.807341659 +0000 UTC m=+0.260233360 container attach 6170fb0c18215a009542e361f2d68d7feaeb98d7f1dc07a76f9da8d1d51c58a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:33:35 np0005542249 xenodochial_merkle[294734]: 167 167
Dec  2 06:33:35 np0005542249 systemd[1]: libpod-6170fb0c18215a009542e361f2d68d7feaeb98d7f1dc07a76f9da8d1d51c58a8.scope: Deactivated successfully.
Dec  2 06:33:35 np0005542249 podman[294718]: 2025-12-02 11:33:35.812064587 +0000 UTC m=+0.264956258 container died 6170fb0c18215a009542e361f2d68d7feaeb98d7f1dc07a76f9da8d1d51c58a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  2 06:33:35 np0005542249 systemd[1]: var-lib-containers-storage-overlay-6491575f7a5a8075fcecebe2b511fb60b6523e5555b10fca3ec688f5c8a6bda4-merged.mount: Deactivated successfully.
Dec  2 06:33:35 np0005542249 podman[294718]: 2025-12-02 11:33:35.87560367 +0000 UTC m=+0.328495361 container remove 6170fb0c18215a009542e361f2d68d7feaeb98d7f1dc07a76f9da8d1d51c58a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_merkle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  2 06:33:35 np0005542249 systemd[1]: libpod-conmon-6170fb0c18215a009542e361f2d68d7feaeb98d7f1dc07a76f9da8d1d51c58a8.scope: Deactivated successfully.
Dec  2 06:33:36 np0005542249 podman[294757]: 2025-12-02 11:33:36.108228794 +0000 UTC m=+0.049694511 container create a9f4587bb2179c69d9655a3bdb52253119871c7fbeb175f6d8ed966b6a273884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:33:36 np0005542249 systemd[1]: Started libpod-conmon-a9f4587bb2179c69d9655a3bdb52253119871c7fbeb175f6d8ed966b6a273884.scope.
Dec  2 06:33:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:33:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:33:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:33:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:33:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  2 06:33:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:33:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003513386607084717 of space, bias 1.0, pg target 0.10540159821254151 quantized to 32 (current 32)
Dec  2 06:33:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:33:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  2 06:33:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:33:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Dec  2 06:33:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:33:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:33:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:33:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:33:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:33:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:33:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:33:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:33:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:33:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:33:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:33:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:33:36 np0005542249 podman[294757]: 2025-12-02 11:33:36.08656546 +0000 UTC m=+0.028031207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:33:36 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:33:36 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43d39d8197f114766ac882eee1f387bdfa26cf7a3e4709957c1e8b040c6e10ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:33:36 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43d39d8197f114766ac882eee1f387bdfa26cf7a3e4709957c1e8b040c6e10ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:33:36 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43d39d8197f114766ac882eee1f387bdfa26cf7a3e4709957c1e8b040c6e10ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:33:36 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43d39d8197f114766ac882eee1f387bdfa26cf7a3e4709957c1e8b040c6e10ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:33:36 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43d39d8197f114766ac882eee1f387bdfa26cf7a3e4709957c1e8b040c6e10ab/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:33:36 np0005542249 podman[294757]: 2025-12-02 11:33:36.22041842 +0000 UTC m=+0.161884137 container init a9f4587bb2179c69d9655a3bdb52253119871c7fbeb175f6d8ed966b6a273884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_archimedes, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:33:36 np0005542249 podman[294757]: 2025-12-02 11:33:36.231986532 +0000 UTC m=+0.173452239 container start a9f4587bb2179c69d9655a3bdb52253119871c7fbeb175f6d8ed966b6a273884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_archimedes, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  2 06:33:36 np0005542249 podman[294757]: 2025-12-02 11:33:36.236214176 +0000 UTC m=+0.177679883 container attach a9f4587bb2179c69d9655a3bdb52253119871c7fbeb175f6d8ed966b6a273884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_archimedes, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:33:36 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1751: 321 pgs: 321 active+clean; 88 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 30 KiB/s wr, 103 op/s
Dec  2 06:33:36 np0005542249 nova_compute[254900]: 2025-12-02 11:33:36.447 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:37 np0005542249 modest_archimedes[294773]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:33:37 np0005542249 modest_archimedes[294773]: --> relative data size: 1.0
Dec  2 06:33:37 np0005542249 modest_archimedes[294773]: --> All data devices are unavailable
Dec  2 06:33:37 np0005542249 systemd[1]: libpod-a9f4587bb2179c69d9655a3bdb52253119871c7fbeb175f6d8ed966b6a273884.scope: Deactivated successfully.
Dec  2 06:33:37 np0005542249 podman[294757]: 2025-12-02 11:33:37.396323025 +0000 UTC m=+1.337788722 container died a9f4587bb2179c69d9655a3bdb52253119871c7fbeb175f6d8ed966b6a273884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  2 06:33:37 np0005542249 systemd[1]: libpod-a9f4587bb2179c69d9655a3bdb52253119871c7fbeb175f6d8ed966b6a273884.scope: Consumed 1.113s CPU time.
Dec  2 06:33:37 np0005542249 systemd[1]: var-lib-containers-storage-overlay-43d39d8197f114766ac882eee1f387bdfa26cf7a3e4709957c1e8b040c6e10ab-merged.mount: Deactivated successfully.
Dec  2 06:33:37 np0005542249 podman[294757]: 2025-12-02 11:33:37.448596015 +0000 UTC m=+1.390061702 container remove a9f4587bb2179c69d9655a3bdb52253119871c7fbeb175f6d8ed966b6a273884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_archimedes, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:33:37 np0005542249 systemd[1]: libpod-conmon-a9f4587bb2179c69d9655a3bdb52253119871c7fbeb175f6d8ed966b6a273884.scope: Deactivated successfully.
Dec  2 06:33:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e436 do_prune osdmap full prune enabled
Dec  2 06:33:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e437 e437: 3 total, 3 up, 3 in
Dec  2 06:33:37 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e437: 3 total, 3 up, 3 in
Dec  2 06:33:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:33:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e437 do_prune osdmap full prune enabled
Dec  2 06:33:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e438 e438: 3 total, 3 up, 3 in
Dec  2 06:33:38 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e438: 3 total, 3 up, 3 in
Dec  2 06:33:38 np0005542249 podman[294954]: 2025-12-02 11:33:38.231341036 +0000 UTC m=+0.065812026 container create d6601d71d7a1e4dc8d2a4e734cbb3791ddc615238e8c7030ffd7a682af2f3d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kowalevski, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:33:38 np0005542249 systemd[1]: Started libpod-conmon-d6601d71d7a1e4dc8d2a4e734cbb3791ddc615238e8c7030ffd7a682af2f3d53.scope.
Dec  2 06:33:38 np0005542249 podman[294954]: 2025-12-02 11:33:38.208632744 +0000 UTC m=+0.043103714 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:33:38 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:33:38 np0005542249 podman[294954]: 2025-12-02 11:33:38.345092025 +0000 UTC m=+0.179562995 container init d6601d71d7a1e4dc8d2a4e734cbb3791ddc615238e8c7030ffd7a682af2f3d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kowalevski, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  2 06:33:38 np0005542249 podman[294954]: 2025-12-02 11:33:38.35787742 +0000 UTC m=+0.192348400 container start d6601d71d7a1e4dc8d2a4e734cbb3791ddc615238e8c7030ffd7a682af2f3d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kowalevski, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:33:38 np0005542249 podman[294954]: 2025-12-02 11:33:38.363609234 +0000 UTC m=+0.198080204 container attach d6601d71d7a1e4dc8d2a4e734cbb3791ddc615238e8c7030ffd7a682af2f3d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  2 06:33:38 np0005542249 adoring_kowalevski[294971]: 167 167
Dec  2 06:33:38 np0005542249 systemd[1]: libpod-d6601d71d7a1e4dc8d2a4e734cbb3791ddc615238e8c7030ffd7a682af2f3d53.scope: Deactivated successfully.
Dec  2 06:33:38 np0005542249 podman[294954]: 2025-12-02 11:33:38.367855839 +0000 UTC m=+0.202326859 container died d6601d71d7a1e4dc8d2a4e734cbb3791ddc615238e8c7030ffd7a682af2f3d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kowalevski, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  2 06:33:38 np0005542249 systemd[1]: var-lib-containers-storage-overlay-4ccae408d4df842415992a6efc7db9c073fccd2bb3d35c802bd6ee68dfed05d1-merged.mount: Deactivated successfully.
Dec  2 06:33:38 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1754: 321 pgs: 321 active+clean; 88 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 38 KiB/s wr, 73 op/s
Dec  2 06:33:38 np0005542249 podman[294954]: 2025-12-02 11:33:38.415458353 +0000 UTC m=+0.249929303 container remove d6601d71d7a1e4dc8d2a4e734cbb3791ddc615238e8c7030ffd7a682af2f3d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  2 06:33:38 np0005542249 systemd[1]: libpod-conmon-d6601d71d7a1e4dc8d2a4e734cbb3791ddc615238e8c7030ffd7a682af2f3d53.scope: Deactivated successfully.
Dec  2 06:33:38 np0005542249 podman[294996]: 2025-12-02 11:33:38.657791929 +0000 UTC m=+0.058001495 container create 570d8dbe9e612caf911d482a93d92a0bd2de79e34766051f1a3dd1f1dd79564c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 06:33:38 np0005542249 systemd[1]: Started libpod-conmon-570d8dbe9e612caf911d482a93d92a0bd2de79e34766051f1a3dd1f1dd79564c.scope.
Dec  2 06:33:38 np0005542249 podman[294996]: 2025-12-02 11:33:38.623826483 +0000 UTC m=+0.024036109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:33:38 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:33:38 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e50f7808a81d729e82106fe37533473fb940eff241b85985f58ff543807c86f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:33:38 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e50f7808a81d729e82106fe37533473fb940eff241b85985f58ff543807c86f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:33:38 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e50f7808a81d729e82106fe37533473fb940eff241b85985f58ff543807c86f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:33:38 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e50f7808a81d729e82106fe37533473fb940eff241b85985f58ff543807c86f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:33:38 np0005542249 podman[294996]: 2025-12-02 11:33:38.773325105 +0000 UTC m=+0.173534661 container init 570d8dbe9e612caf911d482a93d92a0bd2de79e34766051f1a3dd1f1dd79564c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 06:33:38 np0005542249 podman[294996]: 2025-12-02 11:33:38.78837516 +0000 UTC m=+0.188584706 container start 570d8dbe9e612caf911d482a93d92a0bd2de79e34766051f1a3dd1f1dd79564c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_haibt, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:33:38 np0005542249 podman[294996]: 2025-12-02 11:33:38.792138152 +0000 UTC m=+0.192347708 container attach 570d8dbe9e612caf911d482a93d92a0bd2de79e34766051f1a3dd1f1dd79564c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  2 06:33:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e438 do_prune osdmap full prune enabled
Dec  2 06:33:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e439 e439: 3 total, 3 up, 3 in
Dec  2 06:33:39 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e439: 3 total, 3 up, 3 in
Dec  2 06:33:39 np0005542249 nova_compute[254900]: 2025-12-02 11:33:39.576 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:39 np0005542249 clever_haibt[295013]: {
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:    "0": [
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:        {
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "devices": [
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "/dev/loop3"
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            ],
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "lv_name": "ceph_lv0",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "lv_size": "21470642176",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "name": "ceph_lv0",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "tags": {
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.cluster_name": "ceph",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.crush_device_class": "",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.encrypted": "0",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.osd_id": "0",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.type": "block",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.vdo": "0"
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            },
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "type": "block",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "vg_name": "ceph_vg0"
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:        }
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:    ],
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:    "1": [
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:        {
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "devices": [
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "/dev/loop4"
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            ],
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "lv_name": "ceph_lv1",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "lv_size": "21470642176",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "name": "ceph_lv1",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "tags": {
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.cluster_name": "ceph",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.crush_device_class": "",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.encrypted": "0",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.osd_id": "1",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.type": "block",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.vdo": "0"
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            },
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "type": "block",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "vg_name": "ceph_vg1"
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:        }
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:    ],
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:    "2": [
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:        {
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "devices": [
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "/dev/loop5"
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            ],
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "lv_name": "ceph_lv2",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "lv_size": "21470642176",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "name": "ceph_lv2",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "tags": {
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.cluster_name": "ceph",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.crush_device_class": "",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.encrypted": "0",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.osd_id": "2",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.type": "block",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:                "ceph.vdo": "0"
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            },
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "type": "block",
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:            "vg_name": "ceph_vg2"
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:        }
Dec  2 06:33:39 np0005542249 clever_haibt[295013]:    ]
Dec  2 06:33:39 np0005542249 clever_haibt[295013]: }
Dec  2 06:33:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:33:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2845153063' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:33:39 np0005542249 systemd[1]: libpod-570d8dbe9e612caf911d482a93d92a0bd2de79e34766051f1a3dd1f1dd79564c.scope: Deactivated successfully.
Dec  2 06:33:39 np0005542249 podman[294996]: 2025-12-02 11:33:39.669846545 +0000 UTC m=+1.070056131 container died 570d8dbe9e612caf911d482a93d92a0bd2de79e34766051f1a3dd1f1dd79564c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_haibt, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Dec  2 06:33:39 np0005542249 systemd[1]: var-lib-containers-storage-overlay-2e50f7808a81d729e82106fe37533473fb940eff241b85985f58ff543807c86f-merged.mount: Deactivated successfully.
Dec  2 06:33:39 np0005542249 podman[294996]: 2025-12-02 11:33:39.745401773 +0000 UTC m=+1.145611349 container remove 570d8dbe9e612caf911d482a93d92a0bd2de79e34766051f1a3dd1f1dd79564c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_haibt, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:33:39 np0005542249 systemd[1]: libpod-conmon-570d8dbe9e612caf911d482a93d92a0bd2de79e34766051f1a3dd1f1dd79564c.scope: Deactivated successfully.
Dec  2 06:33:40 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1756: 321 pgs: 321 active+clean; 88 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 KiB/s wr, 28 op/s
Dec  2 06:33:40 np0005542249 podman[295178]: 2025-12-02 11:33:40.660135254 +0000 UTC m=+0.064979843 container create ec05b586c86d428327f6921f171d566f3387e294d3000e07a2cc465ebf69ca2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cray, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:33:40 np0005542249 systemd[1]: Started libpod-conmon-ec05b586c86d428327f6921f171d566f3387e294d3000e07a2cc465ebf69ca2e.scope.
Dec  2 06:33:40 np0005542249 podman[295178]: 2025-12-02 11:33:40.641500732 +0000 UTC m=+0.046345391 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:33:40 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:33:40 np0005542249 podman[295178]: 2025-12-02 11:33:40.762271029 +0000 UTC m=+0.167115698 container init ec05b586c86d428327f6921f171d566f3387e294d3000e07a2cc465ebf69ca2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cray, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  2 06:33:40 np0005542249 podman[295178]: 2025-12-02 11:33:40.773347647 +0000 UTC m=+0.178192256 container start ec05b586c86d428327f6921f171d566f3387e294d3000e07a2cc465ebf69ca2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec  2 06:33:40 np0005542249 podman[295178]: 2025-12-02 11:33:40.777197742 +0000 UTC m=+0.182042351 container attach ec05b586c86d428327f6921f171d566f3387e294d3000e07a2cc465ebf69ca2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cray, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:33:40 np0005542249 peaceful_cray[295194]: 167 167
Dec  2 06:33:40 np0005542249 systemd[1]: libpod-ec05b586c86d428327f6921f171d566f3387e294d3000e07a2cc465ebf69ca2e.scope: Deactivated successfully.
Dec  2 06:33:40 np0005542249 podman[295178]: 2025-12-02 11:33:40.784053406 +0000 UTC m=+0.188897985 container died ec05b586c86d428327f6921f171d566f3387e294d3000e07a2cc465ebf69ca2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cray, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  2 06:33:40 np0005542249 systemd[1]: var-lib-containers-storage-overlay-d1a7dbb8b00a71f47cc63338069857c15a26060e30556795242ef155209e30b2-merged.mount: Deactivated successfully.
Dec  2 06:33:40 np0005542249 podman[295178]: 2025-12-02 11:33:40.824951129 +0000 UTC m=+0.229795748 container remove ec05b586c86d428327f6921f171d566f3387e294d3000e07a2cc465ebf69ca2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cray, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  2 06:33:40 np0005542249 systemd[1]: libpod-conmon-ec05b586c86d428327f6921f171d566f3387e294d3000e07a2cc465ebf69ca2e.scope: Deactivated successfully.
Dec  2 06:33:41 np0005542249 podman[295216]: 2025-12-02 11:33:41.034152621 +0000 UTC m=+0.059458304 container create 1028dd64135615b7695f70a89a66cb2b142728868ae27731996290929f20f901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kilby, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  2 06:33:41 np0005542249 systemd[1]: Started libpod-conmon-1028dd64135615b7695f70a89a66cb2b142728868ae27731996290929f20f901.scope.
Dec  2 06:33:41 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:33:41 np0005542249 podman[295216]: 2025-12-02 11:33:41.014409359 +0000 UTC m=+0.039715052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:33:41 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84c060b2933a63ebf15163865a624655e39404951c2242d841c7a88b7ad33172/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:33:41 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84c060b2933a63ebf15163865a624655e39404951c2242d841c7a88b7ad33172/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:33:41 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84c060b2933a63ebf15163865a624655e39404951c2242d841c7a88b7ad33172/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:33:41 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84c060b2933a63ebf15163865a624655e39404951c2242d841c7a88b7ad33172/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:33:41 np0005542249 podman[295216]: 2025-12-02 11:33:41.127173951 +0000 UTC m=+0.152479724 container init 1028dd64135615b7695f70a89a66cb2b142728868ae27731996290929f20f901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kilby, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:33:41 np0005542249 podman[295216]: 2025-12-02 11:33:41.135986798 +0000 UTC m=+0.161292471 container start 1028dd64135615b7695f70a89a66cb2b142728868ae27731996290929f20f901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kilby, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:33:41 np0005542249 podman[295216]: 2025-12-02 11:33:41.139607646 +0000 UTC m=+0.164913339 container attach 1028dd64135615b7695f70a89a66cb2b142728868ae27731996290929f20f901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kilby, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  2 06:33:41 np0005542249 nova_compute[254900]: 2025-12-02 11:33:41.452 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:42 np0005542249 vigilant_kilby[295232]: {
Dec  2 06:33:42 np0005542249 vigilant_kilby[295232]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:33:42 np0005542249 vigilant_kilby[295232]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:33:42 np0005542249 vigilant_kilby[295232]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:33:42 np0005542249 vigilant_kilby[295232]:        "osd_id": 0,
Dec  2 06:33:42 np0005542249 vigilant_kilby[295232]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:33:42 np0005542249 vigilant_kilby[295232]:        "type": "bluestore"
Dec  2 06:33:42 np0005542249 vigilant_kilby[295232]:    },
Dec  2 06:33:42 np0005542249 vigilant_kilby[295232]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:33:42 np0005542249 vigilant_kilby[295232]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:33:42 np0005542249 vigilant_kilby[295232]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:33:42 np0005542249 vigilant_kilby[295232]:        "osd_id": 2,
Dec  2 06:33:42 np0005542249 vigilant_kilby[295232]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:33:42 np0005542249 vigilant_kilby[295232]:        "type": "bluestore"
Dec  2 06:33:42 np0005542249 vigilant_kilby[295232]:    },
Dec  2 06:33:42 np0005542249 vigilant_kilby[295232]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:33:42 np0005542249 vigilant_kilby[295232]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:33:42 np0005542249 vigilant_kilby[295232]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:33:42 np0005542249 vigilant_kilby[295232]:        "osd_id": 1,
Dec  2 06:33:42 np0005542249 vigilant_kilby[295232]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:33:42 np0005542249 vigilant_kilby[295232]:        "type": "bluestore"
Dec  2 06:33:42 np0005542249 vigilant_kilby[295232]:    }
Dec  2 06:33:42 np0005542249 vigilant_kilby[295232]: }
Dec  2 06:33:42 np0005542249 systemd[1]: libpod-1028dd64135615b7695f70a89a66cb2b142728868ae27731996290929f20f901.scope: Deactivated successfully.
Dec  2 06:33:42 np0005542249 podman[295216]: 2025-12-02 11:33:42.31287708 +0000 UTC m=+1.338182763 container died 1028dd64135615b7695f70a89a66cb2b142728868ae27731996290929f20f901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  2 06:33:42 np0005542249 systemd[1]: libpod-1028dd64135615b7695f70a89a66cb2b142728868ae27731996290929f20f901.scope: Consumed 1.177s CPU time.
Dec  2 06:33:42 np0005542249 systemd[1]: var-lib-containers-storage-overlay-84c060b2933a63ebf15163865a624655e39404951c2242d841c7a88b7ad33172-merged.mount: Deactivated successfully.
Dec  2 06:33:42 np0005542249 podman[295216]: 2025-12-02 11:33:42.387230496 +0000 UTC m=+1.412536209 container remove 1028dd64135615b7695f70a89a66cb2b142728868ae27731996290929f20f901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 06:33:42 np0005542249 systemd[1]: libpod-conmon-1028dd64135615b7695f70a89a66cb2b142728868ae27731996290929f20f901.scope: Deactivated successfully.
Dec  2 06:33:42 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1757: 321 pgs: 321 active+clean; 88 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 3.8 KiB/s wr, 56 op/s
Dec  2 06:33:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:33:42 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:33:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:33:42 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:33:42 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev d83ed32b-e6bc-4f67-b47d-80aa0e0d3f3e does not exist
Dec  2 06:33:42 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev e317a027-6123-430e-b4bd-87eb5deecff3 does not exist
Dec  2 06:33:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:33:43 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:33:43 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:33:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e439 do_prune osdmap full prune enabled
Dec  2 06:33:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e440 e440: 3 total, 3 up, 3 in
Dec  2 06:33:43 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e440: 3 total, 3 up, 3 in
Dec  2 06:33:44 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1759: 321 pgs: 321 active+clean; 88 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 5.5 KiB/s wr, 92 op/s
Dec  2 06:33:44 np0005542249 nova_compute[254900]: 2025-12-02 11:33:44.578 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:44 np0005542249 ovn_controller[153849]: 2025-12-02T11:33:44Z|00260|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec  2 06:33:45 np0005542249 nova_compute[254900]: 2025-12-02 11:33:45.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:33:45 np0005542249 nova_compute[254900]: 2025-12-02 11:33:45.382 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 06:33:46 np0005542249 nova_compute[254900]: 2025-12-02 11:33:46.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:33:46 np0005542249 nova_compute[254900]: 2025-12-02 11:33:46.383 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 06:33:46 np0005542249 nova_compute[254900]: 2025-12-02 11:33:46.383 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 06:33:46 np0005542249 nova_compute[254900]: 2025-12-02 11:33:46.404 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 06:33:46 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1760: 321 pgs: 321 active+clean; 88 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 4.2 KiB/s wr, 71 op/s
Dec  2 06:33:46 np0005542249 nova_compute[254900]: 2025-12-02 11:33:46.457 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e440 do_prune osdmap full prune enabled
Dec  2 06:33:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e441 e441: 3 total, 3 up, 3 in
Dec  2 06:33:46 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e441: 3 total, 3 up, 3 in
Dec  2 06:33:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:33:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1158004659' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:33:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:33:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1158004659' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:33:47 np0005542249 nova_compute[254900]: 2025-12-02 11:33:47.496 254904 DEBUG oslo_concurrency.lockutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:33:47 np0005542249 nova_compute[254900]: 2025-12-02 11:33:47.498 254904 DEBUG oslo_concurrency.lockutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:33:47 np0005542249 nova_compute[254900]: 2025-12-02 11:33:47.526 254904 DEBUG nova.compute.manager [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 06:33:47 np0005542249 nova_compute[254900]: 2025-12-02 11:33:47.619 254904 DEBUG oslo_concurrency.lockutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:33:47 np0005542249 nova_compute[254900]: 2025-12-02 11:33:47.620 254904 DEBUG oslo_concurrency.lockutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:33:47 np0005542249 nova_compute[254900]: 2025-12-02 11:33:47.629 254904 DEBUG nova.virt.hardware [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 06:33:47 np0005542249 nova_compute[254900]: 2025-12-02 11:33:47.630 254904 INFO nova.compute.claims [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 06:33:47 np0005542249 nova_compute[254900]: 2025-12-02 11:33:47.762 254904 DEBUG oslo_concurrency.processutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:33:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e441 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:33:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:33:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1690889683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:33:48 np0005542249 nova_compute[254900]: 2025-12-02 11:33:48.287 254904 DEBUG oslo_concurrency.processutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:33:48 np0005542249 nova_compute[254900]: 2025-12-02 11:33:48.296 254904 DEBUG nova.compute.provider_tree [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:33:48 np0005542249 nova_compute[254900]: 2025-12-02 11:33:48.396 254904 DEBUG nova.scheduler.client.report [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:33:48 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1762: 321 pgs: 321 active+clean; 88 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 3.5 KiB/s wr, 54 op/s
Dec  2 06:33:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e441 do_prune osdmap full prune enabled
Dec  2 06:33:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e442 e442: 3 total, 3 up, 3 in
Dec  2 06:33:48 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e442: 3 total, 3 up, 3 in
Dec  2 06:33:48 np0005542249 nova_compute[254900]: 2025-12-02 11:33:48.536 254904 DEBUG oslo_concurrency.lockutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:33:48 np0005542249 nova_compute[254900]: 2025-12-02 11:33:48.538 254904 DEBUG nova.compute.manager [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 06:33:48 np0005542249 nova_compute[254900]: 2025-12-02 11:33:48.690 254904 DEBUG nova.compute.manager [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 06:33:48 np0005542249 nova_compute[254900]: 2025-12-02 11:33:48.691 254904 DEBUG nova.network.neutron [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 06:33:48 np0005542249 nova_compute[254900]: 2025-12-02 11:33:48.729 254904 INFO nova.virt.libvirt.driver [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 06:33:48 np0005542249 nova_compute[254900]: 2025-12-02 11:33:48.757 254904 DEBUG nova.compute.manager [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 06:33:48 np0005542249 nova_compute[254900]: 2025-12-02 11:33:48.876 254904 DEBUG nova.compute.manager [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 06:33:48 np0005542249 nova_compute[254900]: 2025-12-02 11:33:48.879 254904 DEBUG nova.virt.libvirt.driver [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 06:33:48 np0005542249 nova_compute[254900]: 2025-12-02 11:33:48.879 254904 INFO nova.virt.libvirt.driver [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Creating image(s)#033[00m
Dec  2 06:33:48 np0005542249 nova_compute[254900]: 2025-12-02 11:33:48.926 254904 DEBUG nova.storage.rbd_utils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] rbd image 40ecc98a-3a9e-4cd0-8546-64b7420be45f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:33:48 np0005542249 nova_compute[254900]: 2025-12-02 11:33:48.967 254904 DEBUG nova.storage.rbd_utils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] rbd image 40ecc98a-3a9e-4cd0-8546-64b7420be45f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:33:49 np0005542249 nova_compute[254900]: 2025-12-02 11:33:49.004 254904 DEBUG nova.storage.rbd_utils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] rbd image 40ecc98a-3a9e-4cd0-8546-64b7420be45f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:33:49 np0005542249 nova_compute[254900]: 2025-12-02 11:33:49.009 254904 DEBUG oslo_concurrency.processutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:33:49 np0005542249 nova_compute[254900]: 2025-12-02 11:33:49.050 254904 DEBUG nova.policy [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a003d9cef7684ec48ed996b22c11419e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '58574186a4fd405e83f1a4b650ea8e8c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 06:33:49 np0005542249 nova_compute[254900]: 2025-12-02 11:33:49.109 254904 DEBUG oslo_concurrency.processutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:33:49 np0005542249 nova_compute[254900]: 2025-12-02 11:33:49.112 254904 DEBUG oslo_concurrency.lockutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:33:49 np0005542249 nova_compute[254900]: 2025-12-02 11:33:49.114 254904 DEBUG oslo_concurrency.lockutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:33:49 np0005542249 nova_compute[254900]: 2025-12-02 11:33:49.115 254904 DEBUG oslo_concurrency.lockutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "ee4efcc5560259dbbd6acb151f29af7f98aa58b2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:33:49 np0005542249 nova_compute[254900]: 2025-12-02 11:33:49.149 254904 DEBUG nova.storage.rbd_utils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] rbd image 40ecc98a-3a9e-4cd0-8546-64b7420be45f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:33:49 np0005542249 nova_compute[254900]: 2025-12-02 11:33:49.155 254904 DEBUG oslo_concurrency.processutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 40ecc98a-3a9e-4cd0-8546-64b7420be45f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:33:49 np0005542249 nova_compute[254900]: 2025-12-02 11:33:49.492 254904 DEBUG oslo_concurrency.processutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ee4efcc5560259dbbd6acb151f29af7f98aa58b2 40ecc98a-3a9e-4cd0-8546-64b7420be45f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:33:49 np0005542249 nova_compute[254900]: 2025-12-02 11:33:49.562 254904 DEBUG nova.storage.rbd_utils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] resizing rbd image 40ecc98a-3a9e-4cd0-8546-64b7420be45f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  2 06:33:49 np0005542249 nova_compute[254900]: 2025-12-02 11:33:49.592 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:49 np0005542249 nova_compute[254900]: 2025-12-02 11:33:49.667 254904 DEBUG nova.objects.instance [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lazy-loading 'migration_context' on Instance uuid 40ecc98a-3a9e-4cd0-8546-64b7420be45f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:33:49 np0005542249 nova_compute[254900]: 2025-12-02 11:33:49.693 254904 DEBUG nova.virt.libvirt.driver [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  2 06:33:49 np0005542249 nova_compute[254900]: 2025-12-02 11:33:49.694 254904 DEBUG nova.virt.libvirt.driver [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Ensure instance console log exists: /var/lib/nova/instances/40ecc98a-3a9e-4cd0-8546-64b7420be45f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 06:33:49 np0005542249 nova_compute[254900]: 2025-12-02 11:33:49.695 254904 DEBUG oslo_concurrency.lockutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:33:49 np0005542249 nova_compute[254900]: 2025-12-02 11:33:49.695 254904 DEBUG oslo_concurrency.lockutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:33:49 np0005542249 nova_compute[254900]: 2025-12-02 11:33:49.695 254904 DEBUG oslo_concurrency.lockutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:33:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:33:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1093715709' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:33:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:33:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1093715709' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:33:50 np0005542249 nova_compute[254900]: 2025-12-02 11:33:50.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:33:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:33:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/292169095' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:33:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:33:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/292169095' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:33:50 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1764: 321 pgs: 321 active+clean; 88 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 3.7 KiB/s wr, 71 op/s
Dec  2 06:33:50 np0005542249 nova_compute[254900]: 2025-12-02 11:33:50.657 254904 DEBUG nova.network.neutron [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Successfully created port: 889242f4-cffc-42cc-8e15-9e4f88ce1b9f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 06:33:51 np0005542249 nova_compute[254900]: 2025-12-02 11:33:51.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:33:51 np0005542249 nova_compute[254900]: 2025-12-02 11:33:51.462 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:52 np0005542249 nova_compute[254900]: 2025-12-02 11:33:52.032 254904 DEBUG nova.network.neutron [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Successfully updated port: 889242f4-cffc-42cc-8e15-9e4f88ce1b9f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 06:33:52 np0005542249 podman[295519]: 2025-12-02 11:33:52.038345216 +0000 UTC m=+0.099801474 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  2 06:33:52 np0005542249 nova_compute[254900]: 2025-12-02 11:33:52.058 254904 DEBUG oslo_concurrency.lockutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "refresh_cache-40ecc98a-3a9e-4cd0-8546-64b7420be45f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:33:52 np0005542249 nova_compute[254900]: 2025-12-02 11:33:52.059 254904 DEBUG oslo_concurrency.lockutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquired lock "refresh_cache-40ecc98a-3a9e-4cd0-8546-64b7420be45f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:33:52 np0005542249 nova_compute[254900]: 2025-12-02 11:33:52.059 254904 DEBUG nova.network.neutron [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 06:33:52 np0005542249 nova_compute[254900]: 2025-12-02 11:33:52.159 254904 DEBUG nova.compute.manager [req-2b3eb682-ec63-4b1a-b7fd-e5656347604f req-0c1f5cdd-d63a-4382-8084-e60bec342f9f 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Received event network-changed-889242f4-cffc-42cc-8e15-9e4f88ce1b9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:33:52 np0005542249 nova_compute[254900]: 2025-12-02 11:33:52.160 254904 DEBUG nova.compute.manager [req-2b3eb682-ec63-4b1a-b7fd-e5656347604f req-0c1f5cdd-d63a-4382-8084-e60bec342f9f 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Refreshing instance network info cache due to event network-changed-889242f4-cffc-42cc-8e15-9e4f88ce1b9f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:33:52 np0005542249 nova_compute[254900]: 2025-12-02 11:33:52.161 254904 DEBUG oslo_concurrency.lockutils [req-2b3eb682-ec63-4b1a-b7fd-e5656347604f req-0c1f5cdd-d63a-4382-8084-e60bec342f9f 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-40ecc98a-3a9e-4cd0-8546-64b7420be45f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:33:52 np0005542249 nova_compute[254900]: 2025-12-02 11:33:52.211 254904 DEBUG nova.network.neutron [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 06:33:52 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1765: 321 pgs: 321 active+clean; 102 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 1.0 MiB/s wr, 74 op/s
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.176 254904 DEBUG nova.network.neutron [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Updating instance_info_cache with network_info: [{"id": "889242f4-cffc-42cc-8e15-9e4f88ce1b9f", "address": "fa:16:3e:64:04:5e", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap889242f4-cf", "ovs_interfaceid": "889242f4-cffc-42cc-8e15-9e4f88ce1b9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:33:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e442 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:33:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e442 do_prune osdmap full prune enabled
Dec  2 06:33:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e443 e443: 3 total, 3 up, 3 in
Dec  2 06:33:53 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e443: 3 total, 3 up, 3 in
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.310 254904 DEBUG oslo_concurrency.lockutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Releasing lock "refresh_cache-40ecc98a-3a9e-4cd0-8546-64b7420be45f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.311 254904 DEBUG nova.compute.manager [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Instance network_info: |[{"id": "889242f4-cffc-42cc-8e15-9e4f88ce1b9f", "address": "fa:16:3e:64:04:5e", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap889242f4-cf", "ovs_interfaceid": "889242f4-cffc-42cc-8e15-9e4f88ce1b9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.312 254904 DEBUG oslo_concurrency.lockutils [req-2b3eb682-ec63-4b1a-b7fd-e5656347604f req-0c1f5cdd-d63a-4382-8084-e60bec342f9f 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-40ecc98a-3a9e-4cd0-8546-64b7420be45f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.313 254904 DEBUG nova.network.neutron [req-2b3eb682-ec63-4b1a-b7fd-e5656347604f req-0c1f5cdd-d63a-4382-8084-e60bec342f9f 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Refreshing network info cache for port 889242f4-cffc-42cc-8e15-9e4f88ce1b9f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.319 254904 DEBUG nova.virt.libvirt.driver [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Start _get_guest_xml network_info=[{"id": "889242f4-cffc-42cc-8e15-9e4f88ce1b9f", "address": "fa:16:3e:64:04:5e", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap889242f4-cf", "ovs_interfaceid": "889242f4-cffc-42cc-8e15-9e4f88ce1b9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'image_id': '5a40f66c-ab43-47dd-9880-e59f9fa2c60e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.330 254904 WARNING nova.virt.libvirt.driver [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.343 254904 DEBUG nova.virt.libvirt.host [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.344 254904 DEBUG nova.virt.libvirt.host [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.349 254904 DEBUG nova.virt.libvirt.host [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.350 254904 DEBUG nova.virt.libvirt.host [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.351 254904 DEBUG nova.virt.libvirt.driver [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.352 254904 DEBUG nova.virt.hardware [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T11:15:09Z,direct_url=<?>,disk_format='qcow2',id=5a40f66c-ab43-47dd-9880-e59f9fa2c60e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7e7531b1b0ed4d30936cf8bae1080939',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T11:15:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.353 254904 DEBUG nova.virt.hardware [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.353 254904 DEBUG nova.virt.hardware [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.354 254904 DEBUG nova.virt.hardware [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.354 254904 DEBUG nova.virt.hardware [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.354 254904 DEBUG nova.virt.hardware [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.355 254904 DEBUG nova.virt.hardware [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.355 254904 DEBUG nova.virt.hardware [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.356 254904 DEBUG nova.virt.hardware [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.356 254904 DEBUG nova.virt.hardware [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.357 254904 DEBUG nova.virt.hardware [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.362 254904 DEBUG oslo_concurrency.processutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:33:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:33:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3323855056' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.866 254904 DEBUG oslo_concurrency.processutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.908 254904 DEBUG nova.storage.rbd_utils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] rbd image 40ecc98a-3a9e-4cd0-8546-64b7420be45f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:33:53 np0005542249 nova_compute[254900]: 2025-12-02 11:33:53.914 254904 DEBUG oslo_concurrency.processutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:33:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:33:54 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1945131706' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.410 254904 DEBUG oslo_concurrency.processutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.413 254904 DEBUG nova.virt.libvirt.vif [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:33:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-165834699',display_name='tempest-TestEncryptedCinderVolumes-server-165834699',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-165834699',id=28,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOJaPboaOa6uJHrsO2ao3rYwxmsZvzR/4lsVKCuOmmEhTjmIU2xPM9z7bpX4IbAFcb+u8hSQH3jDbrwYw+zDE/3XLvANn6+hOVGVetf9FOHitJV2GO1PjENl+cxBSc7TQ==',key_name='tempest-keypair-1610868451',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='58574186a4fd405e83f1a4b650ea8e8c',ramdisk_id='',reservation_id='r-oiph9p2e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-337876243',owner_user_name='tempest-TestEncryptedCinderVolumes-337876243-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:33:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a003d9cef7684ec48ed996b22c11419e',uuid=40ecc98a-3a9e-4cd0-8546-64b7420be45f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "889242f4-cffc-42cc-8e15-9e4f88ce1b9f", "address": "fa:16:3e:64:04:5e", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap889242f4-cf", "ovs_interfaceid": "889242f4-cffc-42cc-8e15-9e4f88ce1b9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.414 254904 DEBUG nova.network.os_vif_util [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Converting VIF {"id": "889242f4-cffc-42cc-8e15-9e4f88ce1b9f", "address": "fa:16:3e:64:04:5e", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap889242f4-cf", "ovs_interfaceid": "889242f4-cffc-42cc-8e15-9e4f88ce1b9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.415 254904 DEBUG nova.network.os_vif_util [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:04:5e,bridge_name='br-int',has_traffic_filtering=True,id=889242f4-cffc-42cc-8e15-9e4f88ce1b9f,network=Network(28b69a92-5b45-421b-9985-afeebc6820aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap889242f4-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.417 254904 DEBUG nova.objects.instance [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lazy-loading 'pci_devices' on Instance uuid 40ecc98a-3a9e-4cd0-8546-64b7420be45f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:33:54 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1767: 321 pgs: 321 active+clean; 134 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 2.7 MiB/s wr, 109 op/s
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.442 254904 DEBUG nova.virt.libvirt.driver [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:33:54 np0005542249 nova_compute[254900]:  <uuid>40ecc98a-3a9e-4cd0-8546-64b7420be45f</uuid>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:  <name>instance-0000001c</name>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-165834699</nova:name>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:33:53</nova:creationTime>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:33:54 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:        <nova:user uuid="a003d9cef7684ec48ed996b22c11419e">tempest-TestEncryptedCinderVolumes-337876243-project-member</nova:user>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:        <nova:project uuid="58574186a4fd405e83f1a4b650ea8e8c">tempest-TestEncryptedCinderVolumes-337876243</nova:project>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <nova:root type="image" uuid="5a40f66c-ab43-47dd-9880-e59f9fa2c60e"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:        <nova:port uuid="889242f4-cffc-42cc-8e15-9e4f88ce1b9f">
Dec  2 06:33:54 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <entry name="serial">40ecc98a-3a9e-4cd0-8546-64b7420be45f</entry>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <entry name="uuid">40ecc98a-3a9e-4cd0-8546-64b7420be45f</entry>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/40ecc98a-3a9e-4cd0-8546-64b7420be45f_disk">
Dec  2 06:33:54 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:33:54 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/40ecc98a-3a9e-4cd0-8546-64b7420be45f_disk.config">
Dec  2 06:33:54 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:33:54 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:64:04:5e"/>
Dec  2 06:33:54 np0005542249 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <target dev="tap889242f4-cf"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/40ecc98a-3a9e-4cd0-8546-64b7420be45f/console.log" append="off"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:33:54 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:33:54 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:33:54 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:33:54 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.443 254904 DEBUG nova.compute.manager [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Preparing to wait for external event network-vif-plugged-889242f4-cffc-42cc-8e15-9e4f88ce1b9f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.444 254904 DEBUG oslo_concurrency.lockutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.444 254904 DEBUG oslo_concurrency.lockutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.444 254904 DEBUG oslo_concurrency.lockutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.445 254904 DEBUG nova.virt.libvirt.vif [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:33:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-165834699',display_name='tempest-TestEncryptedCinderVolumes-server-165834699',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-165834699',id=28,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOJaPboaOa6uJHrsO2ao3rYwxmsZvzR/4lsVKCuOmmEhTjmIU2xPM9z7bpX4IbAFcb+u8hSQH3jDbrwYw+zDE/3XLvANn6+hOVGVetf9FOHitJV2GO1PjENl+cxBSc7TQ==',key_name='tempest-keypair-1610868451',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='58574186a4fd405e83f1a4b650ea8e8c',ramdisk_id='',reservation_id='r-oiph9p2e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-337876243',owner_user_name='tempest-TestEncryptedCinderVolumes-337876243-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:33:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a003d9cef7684ec48ed996b22c11419e',uuid=40ecc98a-3a9e-4cd0-8546-64b7420be45f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "889242f4-cffc-42cc-8e15-9e4f88ce1b9f", "address": "fa:16:3e:64:04:5e", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap889242f4-cf", "ovs_interfaceid": "889242f4-cffc-42cc-8e15-9e4f88ce1b9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.445 254904 DEBUG nova.network.os_vif_util [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Converting VIF {"id": "889242f4-cffc-42cc-8e15-9e4f88ce1b9f", "address": "fa:16:3e:64:04:5e", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap889242f4-cf", "ovs_interfaceid": "889242f4-cffc-42cc-8e15-9e4f88ce1b9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.446 254904 DEBUG nova.network.os_vif_util [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:04:5e,bridge_name='br-int',has_traffic_filtering=True,id=889242f4-cffc-42cc-8e15-9e4f88ce1b9f,network=Network(28b69a92-5b45-421b-9985-afeebc6820aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap889242f4-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.446 254904 DEBUG os_vif [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:04:5e,bridge_name='br-int',has_traffic_filtering=True,id=889242f4-cffc-42cc-8e15-9e4f88ce1b9f,network=Network(28b69a92-5b45-421b-9985-afeebc6820aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap889242f4-cf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.447 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.447 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.448 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.455 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.455 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap889242f4-cf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.456 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap889242f4-cf, col_values=(('external_ids', {'iface-id': '889242f4-cffc-42cc-8e15-9e4f88ce1b9f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:04:5e', 'vm-uuid': '40ecc98a-3a9e-4cd0-8546-64b7420be45f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.457 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.458 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:33:54 np0005542249 NetworkManager[48987]: <info>  [1764675234.4591] manager: (tap889242f4-cf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/136)
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.465 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.466 254904 INFO os_vif [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:04:5e,bridge_name='br-int',has_traffic_filtering=True,id=889242f4-cffc-42cc-8e15-9e4f88ce1b9f,network=Network(28b69a92-5b45-421b-9985-afeebc6820aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap889242f4-cf')#033[00m
Dec  2 06:33:54 np0005542249 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.546 254904 DEBUG nova.virt.libvirt.driver [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.547 254904 DEBUG nova.virt.libvirt.driver [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.547 254904 DEBUG nova.virt.libvirt.driver [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] No VIF found with MAC fa:16:3e:64:04:5e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.548 254904 INFO nova.virt.libvirt.driver [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Using config drive#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.580 254904 DEBUG nova.storage.rbd_utils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] rbd image 40ecc98a-3a9e-4cd0-8546-64b7420be45f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.587 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:54 np0005542249 nova_compute[254900]: 2025-12-02 11:33:54.997 254904 INFO nova.virt.libvirt.driver [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Creating config drive at /var/lib/nova/instances/40ecc98a-3a9e-4cd0-8546-64b7420be45f/disk.config#033[00m
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.003 254904 DEBUG oslo_concurrency.processutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/40ecc98a-3a9e-4cd0-8546-64b7420be45f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp6vs_z_s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.034 254904 DEBUG nova.network.neutron [req-2b3eb682-ec63-4b1a-b7fd-e5656347604f req-0c1f5cdd-d63a-4382-8084-e60bec342f9f 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Updated VIF entry in instance network info cache for port 889242f4-cffc-42cc-8e15-9e4f88ce1b9f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.035 254904 DEBUG nova.network.neutron [req-2b3eb682-ec63-4b1a-b7fd-e5656347604f req-0c1f5cdd-d63a-4382-8084-e60bec342f9f 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Updating instance_info_cache with network_info: [{"id": "889242f4-cffc-42cc-8e15-9e4f88ce1b9f", "address": "fa:16:3e:64:04:5e", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap889242f4-cf", "ovs_interfaceid": "889242f4-cffc-42cc-8e15-9e4f88ce1b9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.055 254904 DEBUG oslo_concurrency.lockutils [req-2b3eb682-ec63-4b1a-b7fd-e5656347604f req-0c1f5cdd-d63a-4382-8084-e60bec342f9f 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-40ecc98a-3a9e-4cd0-8546-64b7420be45f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.149 254904 DEBUG oslo_concurrency.processutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/40ecc98a-3a9e-4cd0-8546-64b7420be45f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp6vs_z_s" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.178 254904 DEBUG nova.storage.rbd_utils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] rbd image 40ecc98a-3a9e-4cd0-8546-64b7420be45f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.183 254904 DEBUG oslo_concurrency.processutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/40ecc98a-3a9e-4cd0-8546-64b7420be45f/disk.config 40ecc98a-3a9e-4cd0-8546-64b7420be45f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.371 254904 DEBUG oslo_concurrency.processutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/40ecc98a-3a9e-4cd0-8546-64b7420be45f/disk.config 40ecc98a-3a9e-4cd0-8546-64b7420be45f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.373 254904 INFO nova.virt.libvirt.driver [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Deleting local config drive /var/lib/nova/instances/40ecc98a-3a9e-4cd0-8546-64b7420be45f/disk.config because it was imported into RBD.#033[00m
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.378 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.403 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.436 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.437 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.437 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.437 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.438 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:33:55 np0005542249 kernel: tap889242f4-cf: entered promiscuous mode
Dec  2 06:33:55 np0005542249 NetworkManager[48987]: <info>  [1764675235.4507] manager: (tap889242f4-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/137)
Dec  2 06:33:55 np0005542249 ovn_controller[153849]: 2025-12-02T11:33:55Z|00261|binding|INFO|Claiming lport 889242f4-cffc-42cc-8e15-9e4f88ce1b9f for this chassis.
Dec  2 06:33:55 np0005542249 ovn_controller[153849]: 2025-12-02T11:33:55Z|00262|binding|INFO|889242f4-cffc-42cc-8e15-9e4f88ce1b9f: Claiming fa:16:3e:64:04:5e 10.100.0.9
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.463 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.473 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:04:5e 10.100.0.9'], port_security=['fa:16:3e:64:04:5e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '40ecc98a-3a9e-4cd0-8546-64b7420be45f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28b69a92-5b45-421b-9985-afeebc6820aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58574186a4fd405e83f1a4b650ea8e8c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1ba0a22d-186b-4c46-bef4-6771a6cf2be0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81aeb855-c9bf-4f95-90d1-85f514f075e1, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=889242f4-cffc-42cc-8e15-9e4f88ce1b9f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.475 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 889242f4-cffc-42cc-8e15-9e4f88ce1b9f in datapath 28b69a92-5b45-421b-9985-afeebc6820aa bound to our chassis#033[00m
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.476 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 28b69a92-5b45-421b-9985-afeebc6820aa#033[00m
Dec  2 06:33:55 np0005542249 systemd-udevd[295673]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.494 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[5a50baa4-9188-4713-8312-150a5c410895]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.496 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap28b69a92-51 in ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 06:33:55 np0005542249 NetworkManager[48987]: <info>  [1764675235.4994] device (tap889242f4-cf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:33:55 np0005542249 NetworkManager[48987]: <info>  [1764675235.5007] device (tap889242f4-cf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.500 262398 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap28b69a92-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.500 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[da01b375-b731-40db-bcec-2dd054724e57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.502 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae80f09-9856-4787-9a25-0d36e32eea88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:33:55 np0005542249 systemd-machined[216222]: New machine qemu-28-instance-0000001c.
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.527 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[f5f42a41-19e6-47e4-b228-721a2959bf09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:33:55 np0005542249 systemd[1]: Started Virtual Machine qemu-28-instance-0000001c.
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.563 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.564 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[06fa7800-05c4-4a01-9069-a78a33fc6650]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:33:55 np0005542249 ovn_controller[153849]: 2025-12-02T11:33:55Z|00263|binding|INFO|Setting lport 889242f4-cffc-42cc-8e15-9e4f88ce1b9f ovn-installed in OVS
Dec  2 06:33:55 np0005542249 ovn_controller[153849]: 2025-12-02T11:33:55Z|00264|binding|INFO|Setting lport 889242f4-cffc-42cc-8e15-9e4f88ce1b9f up in Southbound
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.572 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.617 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[755b488f-930c-4704-82dd-a64b054305e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:33:55 np0005542249 NetworkManager[48987]: <info>  [1764675235.6346] manager: (tap28b69a92-50): new Veth device (/org/freedesktop/NetworkManager/Devices/138)
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.633 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[757b70e0-e105-44bb-9c4e-e58cc98658c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:33:55 np0005542249 systemd-udevd[295678]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.671 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[168364cb-2aca-41c1-9b4f-62e9204b1aba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.676 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[8734ba43-16e6-4963-892d-7dae5524fd94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:33:55 np0005542249 NetworkManager[48987]: <info>  [1764675235.7089] device (tap28b69a92-50): carrier: link connected
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.716 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[38917d8e-0322-4a7e-8814-8f43b575838c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.745 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[62b626d4-fc63-4b6d-a8a5-e084ae80b3b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28b69a92-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:96:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542853, 'reachable_time': 21116, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295729, 'error': None, 'target': 'ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.772 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[eaeb004f-3cf4-47c7-9416-ff14a4f551f6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1f:96b2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 542853, 'tstamp': 542853}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295730, 'error': None, 'target': 'ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.796 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[7b966218-f5c9-41d5-8e28-89f265bcb8b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28b69a92-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:96:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542853, 'reachable_time': 21116, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295731, 'error': None, 'target': 'ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.835 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[3b762279-6b44-4fdf-b5d2-fc6a3c88e42d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:33:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:33:55 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/841457026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.885 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.923 254904 DEBUG nova.compute.manager [req-fd8c82fc-0ccf-4b6b-9727-217b7d1294b9 req-c063376c-9ace-46d7-bf93-15c429ed2faa 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Received event network-vif-plugged-889242f4-cffc-42cc-8e15-9e4f88ce1b9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.925 254904 DEBUG oslo_concurrency.lockutils [req-fd8c82fc-0ccf-4b6b-9727-217b7d1294b9 req-c063376c-9ace-46d7-bf93-15c429ed2faa 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.925 254904 DEBUG oslo_concurrency.lockutils [req-fd8c82fc-0ccf-4b6b-9727-217b7d1294b9 req-c063376c-9ace-46d7-bf93-15c429ed2faa 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.925 254904 DEBUG oslo_concurrency.lockutils [req-fd8c82fc-0ccf-4b6b-9727-217b7d1294b9 req-c063376c-9ace-46d7-bf93-15c429ed2faa 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.926 254904 DEBUG nova.compute.manager [req-fd8c82fc-0ccf-4b6b-9727-217b7d1294b9 req-c063376c-9ace-46d7-bf93-15c429ed2faa 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Processing event network-vif-plugged-889242f4-cffc-42cc-8e15-9e4f88ce1b9f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.931 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[a77a4c5d-a03d-4133-89c9-4c6bae35322f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.933 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28b69a92-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.933 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.933 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28b69a92-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.935 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:55 np0005542249 NetworkManager[48987]: <info>  [1764675235.9363] manager: (tap28b69a92-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/139)
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.938 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:55 np0005542249 kernel: tap28b69a92-50: entered promiscuous mode
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.939 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap28b69a92-50, col_values=(('external_ids', {'iface-id': '82e6ca5f-5089-4718-9fe8-4d0d719de187'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.940 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:55 np0005542249 ovn_controller[153849]: 2025-12-02T11:33:55Z|00265|binding|INFO|Releasing lport 82e6ca5f-5089-4718-9fe8-4d0d719de187 from this chassis (sb_readonly=0)
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.944 163757 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/28b69a92-5b45-421b-9985-afeebc6820aa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/28b69a92-5b45-421b-9985-afeebc6820aa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.945 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[9a379764-c44f-4de2-ac99-9ae0eccd4df7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.946 163757 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: global
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]:    log         /dev/log local0 debug
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]:    log-tag     haproxy-metadata-proxy-28b69a92-5b45-421b-9985-afeebc6820aa
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]:    user        root
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]:    group       root
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]:    maxconn     1024
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]:    pidfile     /var/lib/neutron/external/pids/28b69a92-5b45-421b-9985-afeebc6820aa.pid.haproxy
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]:    daemon
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: defaults
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]:    log global
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]:    mode http
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]:    option httplog
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]:    option dontlognull
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]:    option http-server-close
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]:    option forwardfor
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]:    retries                 3
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]:    timeout http-request    30s
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]:    timeout connect         30s
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]:    timeout client          32s
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]:    timeout server          32s
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]:    timeout http-keep-alive 30s
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: listen listener
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]:    bind 169.254.169.254:80
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]:    http-request add-header X-OVN-Network-ID 28b69a92-5b45-421b-9985-afeebc6820aa
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 06:33:55 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:33:55.947 163757 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa', 'env', 'PROCESS_TAG=haproxy-28b69a92-5b45-421b-9985-afeebc6820aa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/28b69a92-5b45-421b-9985-afeebc6820aa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.959 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.983 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:33:55 np0005542249 nova_compute[254900]: 2025-12-02 11:33:55.983 254904 DEBUG nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.022 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764675236.0214314, 40ecc98a-3a9e-4cd0-8546-64b7420be45f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.023 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] VM Started (Lifecycle Event)#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.025 254904 DEBUG nova.compute.manager [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.043 254904 DEBUG nova.virt.libvirt.driver [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.047 254904 INFO nova.virt.libvirt.driver [-] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Instance spawned successfully.#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.048 254904 DEBUG nova.virt.libvirt.driver [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.062 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.067 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.078 254904 DEBUG nova.virt.libvirt.driver [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.079 254904 DEBUG nova.virt.libvirt.driver [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.079 254904 DEBUG nova.virt.libvirt.driver [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.080 254904 DEBUG nova.virt.libvirt.driver [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.080 254904 DEBUG nova.virt.libvirt.driver [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.080 254904 DEBUG nova.virt.libvirt.driver [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.088 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.088 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764675236.0249178, 40ecc98a-3a9e-4cd0-8546-64b7420be45f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.088 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] VM Paused (Lifecycle Event)#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.109 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.113 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764675236.027719, 40ecc98a-3a9e-4cd0-8546-64b7420be45f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.114 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] VM Resumed (Lifecycle Event)#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.136 254904 INFO nova.compute.manager [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Took 7.26 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.137 254904 DEBUG nova.compute.manager [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.145 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.153 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.186 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.216 254904 INFO nova.compute.manager [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Took 8.63 seconds to build instance.#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.254 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.256 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4309MB free_disk=59.967525482177734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.256 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.256 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.271 254904 DEBUG oslo_concurrency.lockutils [None req-898209ff-41a3-4227-86b2-05cedc713110 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:33:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:33:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:33:56 np0005542249 podman[295808]: 2025-12-02 11:33:56.372412821 +0000 UTC m=+0.071961482 container create f9ca6c997bfac90061c335cc9112d9cf0919a269d6d5be9451b25dadf5296173 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec  2 06:33:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:33:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:33:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:33:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:33:56 np0005542249 podman[295808]: 2025-12-02 11:33:56.321777555 +0000 UTC m=+0.021326266 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:33:56 np0005542249 systemd[1]: Started libpod-conmon-f9ca6c997bfac90061c335cc9112d9cf0919a269d6d5be9451b25dadf5296173.scope.
Dec  2 06:33:56 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1768: 321 pgs: 321 active+clean; 134 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 2.7 MiB/s wr, 106 op/s
Dec  2 06:33:56 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:33:56 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5da5603a9d6b21748a887b97430345936c3d9fcef42428bba9ffbf492f024d6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 06:33:56 np0005542249 podman[295808]: 2025-12-02 11:33:56.471658577 +0000 UTC m=+0.171207288 container init f9ca6c997bfac90061c335cc9112d9cf0919a269d6d5be9451b25dadf5296173 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec  2 06:33:56 np0005542249 podman[295808]: 2025-12-02 11:33:56.47843201 +0000 UTC m=+0.177980681 container start f9ca6c997bfac90061c335cc9112d9cf0919a269d6d5be9451b25dadf5296173 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.483 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Instance 40ecc98a-3a9e-4cd0-8546-64b7420be45f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.484 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.484 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:33:56 np0005542249 neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa[295823]: [NOTICE]   (295827) : New worker (295829) forked
Dec  2 06:33:56 np0005542249 neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa[295823]: [NOTICE]   (295827) : Loading success.
Dec  2 06:33:56 np0005542249 nova_compute[254900]: 2025-12-02 11:33:56.538 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:33:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:33:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2072171591' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:33:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:33:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2231199044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:33:57 np0005542249 nova_compute[254900]: 2025-12-02 11:33:57.021 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:33:57 np0005542249 nova_compute[254900]: 2025-12-02 11:33:57.029 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:33:57 np0005542249 nova_compute[254900]: 2025-12-02 11:33:57.047 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:33:57 np0005542249 nova_compute[254900]: 2025-12-02 11:33:57.070 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 06:33:57 np0005542249 nova_compute[254900]: 2025-12-02 11:33:57.071 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:33:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e443 do_prune osdmap full prune enabled
Dec  2 06:33:58 np0005542249 nova_compute[254900]: 2025-12-02 11:33:58.024 254904 DEBUG nova.compute.manager [req-b300ac48-9743-46bb-bc84-f0348a81a2f7 req-e666dcfb-de2e-4c59-b369-3b48c41986d3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Received event network-vif-plugged-889242f4-cffc-42cc-8e15-9e4f88ce1b9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:33:58 np0005542249 nova_compute[254900]: 2025-12-02 11:33:58.025 254904 DEBUG oslo_concurrency.lockutils [req-b300ac48-9743-46bb-bc84-f0348a81a2f7 req-e666dcfb-de2e-4c59-b369-3b48c41986d3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:33:58 np0005542249 nova_compute[254900]: 2025-12-02 11:33:58.027 254904 DEBUG oslo_concurrency.lockutils [req-b300ac48-9743-46bb-bc84-f0348a81a2f7 req-e666dcfb-de2e-4c59-b369-3b48c41986d3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:33:58 np0005542249 nova_compute[254900]: 2025-12-02 11:33:58.027 254904 DEBUG oslo_concurrency.lockutils [req-b300ac48-9743-46bb-bc84-f0348a81a2f7 req-e666dcfb-de2e-4c59-b369-3b48c41986d3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:33:58 np0005542249 nova_compute[254900]: 2025-12-02 11:33:58.028 254904 DEBUG nova.compute.manager [req-b300ac48-9743-46bb-bc84-f0348a81a2f7 req-e666dcfb-de2e-4c59-b369-3b48c41986d3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] No waiting events found dispatching network-vif-plugged-889242f4-cffc-42cc-8e15-9e4f88ce1b9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:33:58 np0005542249 nova_compute[254900]: 2025-12-02 11:33:58.028 254904 WARNING nova.compute.manager [req-b300ac48-9743-46bb-bc84-f0348a81a2f7 req-e666dcfb-de2e-4c59-b369-3b48c41986d3 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Received unexpected event network-vif-plugged-889242f4-cffc-42cc-8e15-9e4f88ce1b9f for instance with vm_state active and task_state None.#033[00m
Dec  2 06:33:58 np0005542249 nova_compute[254900]: 2025-12-02 11:33:58.050 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:33:58 np0005542249 nova_compute[254900]: 2025-12-02 11:33:58.051 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:33:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e444 e444: 3 total, 3 up, 3 in
Dec  2 06:33:58 np0005542249 nova_compute[254900]: 2025-12-02 11:33:58.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:33:58 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e444: 3 total, 3 up, 3 in
Dec  2 06:33:58 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1770: 321 pgs: 321 active+clean; 134 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 145 op/s
Dec  2 06:33:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e444 do_prune osdmap full prune enabled
Dec  2 06:33:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e445 e445: 3 total, 3 up, 3 in
Dec  2 06:33:59 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e445: 3 total, 3 up, 3 in
Dec  2 06:33:59 np0005542249 nova_compute[254900]: 2025-12-02 11:33:59.460 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:33:59 np0005542249 nova_compute[254900]: 2025-12-02 11:33:59.584 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:00 np0005542249 nova_compute[254900]: 2025-12-02 11:34:00.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:34:00 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1772: 321 pgs: 321 active+clean; 134 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 189 KiB/s wr, 129 op/s
Dec  2 06:34:01 np0005542249 nova_compute[254900]: 2025-12-02 11:34:01.031 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:01 np0005542249 NetworkManager[48987]: <info>  [1764675241.0333] manager: (patch-br-int-to-provnet-236defd8-cd9f-40fb-be9d-c3108d5fe981): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Dec  2 06:34:01 np0005542249 NetworkManager[48987]: <info>  [1764675241.0349] manager: (patch-provnet-236defd8-cd9f-40fb-be9d-c3108d5fe981-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Dec  2 06:34:01 np0005542249 nova_compute[254900]: 2025-12-02 11:34:01.153 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:01 np0005542249 ovn_controller[153849]: 2025-12-02T11:34:01Z|00266|binding|INFO|Releasing lport 82e6ca5f-5089-4718-9fe8-4d0d719de187 from this chassis (sb_readonly=0)
Dec  2 06:34:01 np0005542249 nova_compute[254900]: 2025-12-02 11:34:01.198 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:01 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:34:01.294 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:23:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:25:d0:a1:75:ed'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:34:01 np0005542249 nova_compute[254900]: 2025-12-02 11:34:01.294 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:01 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:34:01.295 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 06:34:01 np0005542249 nova_compute[254900]: 2025-12-02 11:34:01.310 254904 DEBUG nova.compute.manager [req-47c2979b-df57-4a5c-9be5-a5d2a0e6aad7 req-3c2776ca-fa0e-4ad0-9fa9-21521c8ea32b 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Received event network-changed-889242f4-cffc-42cc-8e15-9e4f88ce1b9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:34:01 np0005542249 nova_compute[254900]: 2025-12-02 11:34:01.311 254904 DEBUG nova.compute.manager [req-47c2979b-df57-4a5c-9be5-a5d2a0e6aad7 req-3c2776ca-fa0e-4ad0-9fa9-21521c8ea32b 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Refreshing instance network info cache due to event network-changed-889242f4-cffc-42cc-8e15-9e4f88ce1b9f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:34:01 np0005542249 nova_compute[254900]: 2025-12-02 11:34:01.312 254904 DEBUG oslo_concurrency.lockutils [req-47c2979b-df57-4a5c-9be5-a5d2a0e6aad7 req-3c2776ca-fa0e-4ad0-9fa9-21521c8ea32b 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-40ecc98a-3a9e-4cd0-8546-64b7420be45f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:34:01 np0005542249 nova_compute[254900]: 2025-12-02 11:34:01.312 254904 DEBUG oslo_concurrency.lockutils [req-47c2979b-df57-4a5c-9be5-a5d2a0e6aad7 req-3c2776ca-fa0e-4ad0-9fa9-21521c8ea32b 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-40ecc98a-3a9e-4cd0-8546-64b7420be45f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:34:01 np0005542249 nova_compute[254900]: 2025-12-02 11:34:01.313 254904 DEBUG nova.network.neutron [req-47c2979b-df57-4a5c-9be5-a5d2a0e6aad7 req-3c2776ca-fa0e-4ad0-9fa9-21521c8ea32b 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Refreshing network info cache for port 889242f4-cffc-42cc-8e15-9e4f88ce1b9f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:34:02 np0005542249 podman[295862]: 2025-12-02 11:34:02.052088977 +0000 UTC m=+0.114043886 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  2 06:34:02 np0005542249 podman[295863]: 2025-12-02 11:34:02.132951638 +0000 UTC m=+0.192089621 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:34:02 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:34:02.298 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:34:02 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1773: 321 pgs: 321 active+clean; 134 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 120 op/s
Dec  2 06:34:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e445 do_prune osdmap full prune enabled
Dec  2 06:34:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e446 e446: 3 total, 3 up, 3 in
Dec  2 06:34:02 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e446: 3 total, 3 up, 3 in
Dec  2 06:34:03 np0005542249 nova_compute[254900]: 2025-12-02 11:34:03.079 254904 DEBUG nova.network.neutron [req-47c2979b-df57-4a5c-9be5-a5d2a0e6aad7 req-3c2776ca-fa0e-4ad0-9fa9-21521c8ea32b 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Updated VIF entry in instance network info cache for port 889242f4-cffc-42cc-8e15-9e4f88ce1b9f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:34:03 np0005542249 nova_compute[254900]: 2025-12-02 11:34:03.080 254904 DEBUG nova.network.neutron [req-47c2979b-df57-4a5c-9be5-a5d2a0e6aad7 req-3c2776ca-fa0e-4ad0-9fa9-21521c8ea32b 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Updating instance_info_cache with network_info: [{"id": "889242f4-cffc-42cc-8e15-9e4f88ce1b9f", "address": "fa:16:3e:64:04:5e", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap889242f4-cf", "ovs_interfaceid": "889242f4-cffc-42cc-8e15-9e4f88ce1b9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:34:03 np0005542249 nova_compute[254900]: 2025-12-02 11:34:03.101 254904 DEBUG oslo_concurrency.lockutils [req-47c2979b-df57-4a5c-9be5-a5d2a0e6aad7 req-3c2776ca-fa0e-4ad0-9fa9-21521c8ea32b 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-40ecc98a-3a9e-4cd0-8546-64b7420be45f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:34:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e446 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:34:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e446 do_prune osdmap full prune enabled
Dec  2 06:34:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e447 e447: 3 total, 3 up, 3 in
Dec  2 06:34:03 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e447: 3 total, 3 up, 3 in
Dec  2 06:34:04 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1776: 321 pgs: 321 active+clean; 134 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 KiB/s wr, 101 op/s
Dec  2 06:34:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e447 do_prune osdmap full prune enabled
Dec  2 06:34:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e448 e448: 3 total, 3 up, 3 in
Dec  2 06:34:04 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e448: 3 total, 3 up, 3 in
Dec  2 06:34:04 np0005542249 nova_compute[254900]: 2025-12-02 11:34:04.504 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:04 np0005542249 nova_compute[254900]: 2025-12-02 11:34:04.585 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:34:05 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3714704451' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:34:05 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:34:05 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3714704451' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:34:06 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1778: 321 pgs: 321 active+clean; 134 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.0 KiB/s wr, 84 op/s
Dec  2 06:34:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:34:08 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1779: 321 pgs: 321 active+clean; 137 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 340 KiB/s wr, 97 op/s
Dec  2 06:34:09 np0005542249 nova_compute[254900]: 2025-12-02 11:34:09.507 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:09 np0005542249 nova_compute[254900]: 2025-12-02 11:34:09.587 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:10 np0005542249 ovn_controller[153849]: 2025-12-02T11:34:10Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:64:04:5e 10.100.0.9
Dec  2 06:34:10 np0005542249 ovn_controller[153849]: 2025-12-02T11:34:10Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:64:04:5e 10.100.0.9
Dec  2 06:34:10 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1780: 321 pgs: 321 active+clean; 149 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 1.4 MiB/s wr, 95 op/s
Dec  2 06:34:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:34:12 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2201861118' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:34:12 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1781: 321 pgs: 321 active+clean; 159 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 186 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Dec  2 06:34:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e448 do_prune osdmap full prune enabled
Dec  2 06:34:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e449 e449: 3 total, 3 up, 3 in
Dec  2 06:34:12 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e449: 3 total, 3 up, 3 in
Dec  2 06:34:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e449 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:34:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e449 do_prune osdmap full prune enabled
Dec  2 06:34:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e450 e450: 3 total, 3 up, 3 in
Dec  2 06:34:13 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e450: 3 total, 3 up, 3 in
Dec  2 06:34:14 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1784: 321 pgs: 321 active+clean; 167 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 647 KiB/s rd, 3.2 MiB/s wr, 153 op/s
Dec  2 06:34:14 np0005542249 nova_compute[254900]: 2025-12-02 11:34:14.515 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:14 np0005542249 nova_compute[254900]: 2025-12-02 11:34:14.590 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:34:16 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2191839847' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:34:16 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1785: 321 pgs: 321 active+clean; 167 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 602 KiB/s rd, 2.9 MiB/s wr, 113 op/s
Dec  2 06:34:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e450 do_prune osdmap full prune enabled
Dec  2 06:34:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e451 e451: 3 total, 3 up, 3 in
Dec  2 06:34:16 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e451: 3 total, 3 up, 3 in
Dec  2 06:34:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e451 do_prune osdmap full prune enabled
Dec  2 06:34:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e452 e452: 3 total, 3 up, 3 in
Dec  2 06:34:17 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e452: 3 total, 3 up, 3 in
Dec  2 06:34:17 np0005542249 nova_compute[254900]: 2025-12-02 11:34:17.973 254904 DEBUG oslo_concurrency.lockutils [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:34:17 np0005542249 nova_compute[254900]: 2025-12-02 11:34:17.974 254904 DEBUG oslo_concurrency.lockutils [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:34:18 np0005542249 nova_compute[254900]: 2025-12-02 11:34:18.003 254904 DEBUG nova.objects.instance [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lazy-loading 'flavor' on Instance uuid 40ecc98a-3a9e-4cd0-8546-64b7420be45f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:34:18 np0005542249 nova_compute[254900]: 2025-12-02 11:34:18.040 254904 DEBUG oslo_concurrency.lockutils [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:34:18 np0005542249 nova_compute[254900]: 2025-12-02 11:34:18.229 254904 DEBUG oslo_concurrency.lockutils [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:34:18 np0005542249 nova_compute[254900]: 2025-12-02 11:34:18.230 254904 DEBUG oslo_concurrency.lockutils [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:34:18 np0005542249 nova_compute[254900]: 2025-12-02 11:34:18.231 254904 INFO nova.compute.manager [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Attaching volume 46a8d69e-d779-4e06-a50f-3f6942c72711 to /dev/vdb#033[00m
Dec  2 06:34:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e452 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:34:18 np0005542249 nova_compute[254900]: 2025-12-02 11:34:18.393 254904 DEBUG os_brick.utils [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:34:18 np0005542249 nova_compute[254900]: 2025-12-02 11:34:18.397 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:34:18 np0005542249 nova_compute[254900]: 2025-12-02 11:34:18.417 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:34:18 np0005542249 nova_compute[254900]: 2025-12-02 11:34:18.417 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[647bb512-dff0-4d1e-a5e2-bc456d27646e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:34:18 np0005542249 nova_compute[254900]: 2025-12-02 11:34:18.419 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:34:18 np0005542249 nova_compute[254900]: 2025-12-02 11:34:18.434 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:34:18 np0005542249 nova_compute[254900]: 2025-12-02 11:34:18.435 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[c316d5bb-0692-467f-bd9c-490e25401b49]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:34:18 np0005542249 nova_compute[254900]: 2025-12-02 11:34:18.437 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:34:18 np0005542249 nova_compute[254900]: 2025-12-02 11:34:18.447 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:34:18 np0005542249 nova_compute[254900]: 2025-12-02 11:34:18.447 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[32c87c61-bc88-4243-9c13-f45a3fa79dfd]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:34:18 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1788: 321 pgs: 321 active+clean; 167 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 623 KiB/s rd, 1.0 MiB/s wr, 136 op/s
Dec  2 06:34:18 np0005542249 nova_compute[254900]: 2025-12-02 11:34:18.449 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[895a1f8c-b5c3-4d9f-9368-7c69a1688d33]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:34:18 np0005542249 nova_compute[254900]: 2025-12-02 11:34:18.451 254904 DEBUG oslo_concurrency.processutils [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:34:18 np0005542249 nova_compute[254900]: 2025-12-02 11:34:18.487 254904 DEBUG oslo_concurrency.processutils [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CMD "nvme version" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:34:18 np0005542249 nova_compute[254900]: 2025-12-02 11:34:18.490 254904 DEBUG os_brick.initiator.connectors.lightos [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:34:18 np0005542249 nova_compute[254900]: 2025-12-02 11:34:18.490 254904 DEBUG os_brick.initiator.connectors.lightos [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:34:18 np0005542249 nova_compute[254900]: 2025-12-02 11:34:18.491 254904 DEBUG os_brick.initiator.connectors.lightos [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:34:18 np0005542249 nova_compute[254900]: 2025-12-02 11:34:18.491 254904 DEBUG os_brick.utils [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] <== get_connector_properties: return (96ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:34:18 np0005542249 nova_compute[254900]: 2025-12-02 11:34:18.491 254904 DEBUG nova.virt.block_device [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Updating existing volume attachment record: df230cbd-735c-418f-8723-9419970c14a5 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:34:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e452 do_prune osdmap full prune enabled
Dec  2 06:34:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e453 e453: 3 total, 3 up, 3 in
Dec  2 06:34:18 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e453: 3 total, 3 up, 3 in
Dec  2 06:34:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:34:19 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2459150040' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.328 254904 DEBUG os_brick.encryptors [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Using volume encryption metadata '{'encryption_key_id': 'b070e2a3-d67d-4a05-8b9e-0a2f2c49566f', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-46a8d69e-d779-4e06-a50f-3f6942c72711', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '46a8d69e-d779-4e06-a50f-3f6942c72711', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '40ecc98a-3a9e-4cd0-8546-64b7420be45f', 'attached_at': '', 'detached_at': '', 'volume_id': '46a8d69e-d779-4e06-a50f-3f6942c72711', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.339 254904 DEBUG barbicanclient.client [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.356 254904 DEBUG barbicanclient.v1.secrets [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/b070e2a3-d67d-4a05-8b9e-0a2f2c49566f get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.357 254904 INFO barbicanclient.base [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/b070e2a3-d67d-4a05-8b9e-0a2f2c49566f#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.378 254904 DEBUG barbicanclient.client [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.379 254904 INFO barbicanclient.base [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/b070e2a3-d67d-4a05-8b9e-0a2f2c49566f#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.399 254904 DEBUG barbicanclient.client [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.400 254904 INFO barbicanclient.base [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/b070e2a3-d67d-4a05-8b9e-0a2f2c49566f#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.420 254904 DEBUG barbicanclient.client [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.420 254904 INFO barbicanclient.base [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/b070e2a3-d67d-4a05-8b9e-0a2f2c49566f#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.444 254904 DEBUG barbicanclient.client [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.444 254904 INFO barbicanclient.base [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/b070e2a3-d67d-4a05-8b9e-0a2f2c49566f#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.466 254904 DEBUG barbicanclient.client [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.467 254904 INFO barbicanclient.base [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/b070e2a3-d67d-4a05-8b9e-0a2f2c49566f#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.484 254904 DEBUG barbicanclient.client [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.485 254904 INFO barbicanclient.base [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/b070e2a3-d67d-4a05-8b9e-0a2f2c49566f#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.548 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:34:19 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1651145428' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:34:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:34:19 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1651145428' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.572 254904 DEBUG barbicanclient.client [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.573 254904 INFO barbicanclient.base [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/b070e2a3-d67d-4a05-8b9e-0a2f2c49566f#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.593 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.604 254904 DEBUG barbicanclient.client [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.604 254904 INFO barbicanclient.base [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/b070e2a3-d67d-4a05-8b9e-0a2f2c49566f#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.640 254904 DEBUG barbicanclient.client [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.641 254904 INFO barbicanclient.base [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/b070e2a3-d67d-4a05-8b9e-0a2f2c49566f#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.671 254904 DEBUG barbicanclient.client [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.672 254904 INFO barbicanclient.base [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/b070e2a3-d67d-4a05-8b9e-0a2f2c49566f#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.693 254904 DEBUG barbicanclient.client [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.694 254904 INFO barbicanclient.base [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/b070e2a3-d67d-4a05-8b9e-0a2f2c49566f#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.721 254904 DEBUG barbicanclient.client [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.722 254904 INFO barbicanclient.base [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/b070e2a3-d67d-4a05-8b9e-0a2f2c49566f#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.744 254904 DEBUG barbicanclient.client [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.746 254904 INFO barbicanclient.base [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/b070e2a3-d67d-4a05-8b9e-0a2f2c49566f#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.773 254904 DEBUG barbicanclient.client [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.774 254904 INFO barbicanclient.base [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/b070e2a3-d67d-4a05-8b9e-0a2f2c49566f#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.801 254904 DEBUG barbicanclient.client [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.802 254904 DEBUG nova.virt.libvirt.host [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Secret XML: <secret ephemeral="no" private="no">
Dec  2 06:34:19 np0005542249 nova_compute[254900]:  <usage type="volume">
Dec  2 06:34:19 np0005542249 nova_compute[254900]:    <volume>46a8d69e-d779-4e06-a50f-3f6942c72711</volume>
Dec  2 06:34:19 np0005542249 nova_compute[254900]:  </usage>
Dec  2 06:34:19 np0005542249 nova_compute[254900]: </secret>
Dec  2 06:34:19 np0005542249 nova_compute[254900]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.818 254904 DEBUG nova.objects.instance [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lazy-loading 'flavor' on Instance uuid 40ecc98a-3a9e-4cd0-8546-64b7420be45f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.840 254904 DEBUG nova.virt.libvirt.driver [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Attempting to attach volume 46a8d69e-d779-4e06-a50f-3f6942c72711 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Dec  2 06:34:19 np0005542249 nova_compute[254900]: 2025-12-02 11:34:19.843 254904 DEBUG nova.virt.libvirt.guest [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] attach device xml: <disk type="network" device="disk">
Dec  2 06:34:19 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:34:19 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-46a8d69e-d779-4e06-a50f-3f6942c72711">
Dec  2 06:34:19 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:34:19 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:34:19 np0005542249 nova_compute[254900]:  <auth username="openstack">
Dec  2 06:34:19 np0005542249 nova_compute[254900]:    <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:34:19 np0005542249 nova_compute[254900]:  </auth>
Dec  2 06:34:19 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:34:19 np0005542249 nova_compute[254900]:  <serial>46a8d69e-d779-4e06-a50f-3f6942c72711</serial>
Dec  2 06:34:19 np0005542249 nova_compute[254900]:  <encryption format="luks">
Dec  2 06:34:19 np0005542249 nova_compute[254900]:    <secret type="passphrase" uuid="1ddd9680-e26b-42fb-8a09-48f9137d583c"/>
Dec  2 06:34:19 np0005542249 nova_compute[254900]:  </encryption>
Dec  2 06:34:19 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:34:19 np0005542249 nova_compute[254900]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  2 06:34:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:34:19.848 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:34:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:34:19.849 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:34:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:34:19.850 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:34:20 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1790: 321 pgs: 321 active+clean; 167 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 27 KiB/s wr, 47 op/s
Dec  2 06:34:22 np0005542249 nova_compute[254900]: 2025-12-02 11:34:22.364 254904 DEBUG nova.virt.libvirt.driver [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:34:22 np0005542249 nova_compute[254900]: 2025-12-02 11:34:22.365 254904 DEBUG nova.virt.libvirt.driver [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:34:22 np0005542249 nova_compute[254900]: 2025-12-02 11:34:22.365 254904 DEBUG nova.virt.libvirt.driver [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:34:22 np0005542249 nova_compute[254900]: 2025-12-02 11:34:22.365 254904 DEBUG nova.virt.libvirt.driver [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] No VIF found with MAC fa:16:3e:64:04:5e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:34:22 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1791: 321 pgs: 321 active+clean; 167 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 27 KiB/s wr, 52 op/s
Dec  2 06:34:22 np0005542249 nova_compute[254900]: 2025-12-02 11:34:22.696 254904 DEBUG oslo_concurrency.lockutils [None req-e9c1a336-84d3-420f-9b57-5372e5aa3333 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:34:23 np0005542249 podman[295935]: 2025-12-02 11:34:23.029978414 +0000 UTC m=+0.108406964 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 06:34:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:34:23 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1175910955' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:34:23 np0005542249 nova_compute[254900]: 2025-12-02 11:34:23.217 254904 DEBUG oslo_concurrency.lockutils [None req-f8c19ebf-4eaa-44ad-a7b9-52968fa7042e a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:34:23 np0005542249 nova_compute[254900]: 2025-12-02 11:34:23.218 254904 DEBUG oslo_concurrency.lockutils [None req-f8c19ebf-4eaa-44ad-a7b9-52968fa7042e a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:34:23 np0005542249 nova_compute[254900]: 2025-12-02 11:34:23.233 254904 INFO nova.compute.manager [None req-f8c19ebf-4eaa-44ad-a7b9-52968fa7042e a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Detaching volume 46a8d69e-d779-4e06-a50f-3f6942c72711#033[00m
Dec  2 06:34:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e453 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:34:23 np0005542249 nova_compute[254900]: 2025-12-02 11:34:23.357 254904 INFO nova.virt.block_device [None req-f8c19ebf-4eaa-44ad-a7b9-52968fa7042e a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Attempting to driver detach volume 46a8d69e-d779-4e06-a50f-3f6942c72711 from mountpoint /dev/vdb#033[00m
Dec  2 06:34:23 np0005542249 nova_compute[254900]: 2025-12-02 11:34:23.506 254904 DEBUG os_brick.encryptors [None req-f8c19ebf-4eaa-44ad-a7b9-52968fa7042e a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Using volume encryption metadata '{'encryption_key_id': 'b070e2a3-d67d-4a05-8b9e-0a2f2c49566f', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-46a8d69e-d779-4e06-a50f-3f6942c72711', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '46a8d69e-d779-4e06-a50f-3f6942c72711', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '40ecc98a-3a9e-4cd0-8546-64b7420be45f', 'attached_at': '', 'detached_at': '', 'volume_id': '46a8d69e-d779-4e06-a50f-3f6942c72711', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Dec  2 06:34:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e453 do_prune osdmap full prune enabled
Dec  2 06:34:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e454 e454: 3 total, 3 up, 3 in
Dec  2 06:34:23 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e454: 3 total, 3 up, 3 in
Dec  2 06:34:23 np0005542249 nova_compute[254900]: 2025-12-02 11:34:23.522 254904 DEBUG nova.virt.libvirt.driver [None req-f8c19ebf-4eaa-44ad-a7b9-52968fa7042e a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Attempting to detach device vdb from instance 40ecc98a-3a9e-4cd0-8546-64b7420be45f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  2 06:34:23 np0005542249 nova_compute[254900]: 2025-12-02 11:34:23.523 254904 DEBUG nova.virt.libvirt.guest [None req-f8c19ebf-4eaa-44ad-a7b9-52968fa7042e a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] detach device xml: <disk type="network" device="disk">
Dec  2 06:34:23 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:34:23 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-46a8d69e-d779-4e06-a50f-3f6942c72711">
Dec  2 06:34:23 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:34:23 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:34:23 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:34:23 np0005542249 nova_compute[254900]:  <serial>46a8d69e-d779-4e06-a50f-3f6942c72711</serial>
Dec  2 06:34:23 np0005542249 nova_compute[254900]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  2 06:34:23 np0005542249 nova_compute[254900]:  <encryption format="luks">
Dec  2 06:34:23 np0005542249 nova_compute[254900]:    <secret type="passphrase" uuid="1ddd9680-e26b-42fb-8a09-48f9137d583c"/>
Dec  2 06:34:23 np0005542249 nova_compute[254900]:  </encryption>
Dec  2 06:34:23 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:34:23 np0005542249 nova_compute[254900]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  2 06:34:23 np0005542249 nova_compute[254900]: 2025-12-02 11:34:23.536 254904 INFO nova.virt.libvirt.driver [None req-f8c19ebf-4eaa-44ad-a7b9-52968fa7042e a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Successfully detached device vdb from instance 40ecc98a-3a9e-4cd0-8546-64b7420be45f from the persistent domain config.#033[00m
Dec  2 06:34:23 np0005542249 nova_compute[254900]: 2025-12-02 11:34:23.536 254904 DEBUG nova.virt.libvirt.driver [None req-f8c19ebf-4eaa-44ad-a7b9-52968fa7042e a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 40ecc98a-3a9e-4cd0-8546-64b7420be45f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec  2 06:34:23 np0005542249 nova_compute[254900]: 2025-12-02 11:34:23.536 254904 DEBUG nova.virt.libvirt.guest [None req-f8c19ebf-4eaa-44ad-a7b9-52968fa7042e a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] detach device xml: <disk type="network" device="disk">
Dec  2 06:34:23 np0005542249 nova_compute[254900]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:34:23 np0005542249 nova_compute[254900]:  <source protocol="rbd" name="volumes/volume-46a8d69e-d779-4e06-a50f-3f6942c72711">
Dec  2 06:34:23 np0005542249 nova_compute[254900]:    <host name="192.168.122.100" port="6789"/>
Dec  2 06:34:23 np0005542249 nova_compute[254900]:  </source>
Dec  2 06:34:23 np0005542249 nova_compute[254900]:  <target dev="vdb" bus="virtio"/>
Dec  2 06:34:23 np0005542249 nova_compute[254900]:  <serial>46a8d69e-d779-4e06-a50f-3f6942c72711</serial>
Dec  2 06:34:23 np0005542249 nova_compute[254900]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  2 06:34:23 np0005542249 nova_compute[254900]:  <encryption format="luks">
Dec  2 06:34:23 np0005542249 nova_compute[254900]:    <secret type="passphrase" uuid="1ddd9680-e26b-42fb-8a09-48f9137d583c"/>
Dec  2 06:34:23 np0005542249 nova_compute[254900]:  </encryption>
Dec  2 06:34:23 np0005542249 nova_compute[254900]: </disk>
Dec  2 06:34:23 np0005542249 nova_compute[254900]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  2 06:34:23 np0005542249 nova_compute[254900]: 2025-12-02 11:34:23.676 254904 DEBUG nova.virt.libvirt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Received event <DeviceRemovedEvent: 1764675263.6763496, 40ecc98a-3a9e-4cd0-8546-64b7420be45f => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec  2 06:34:23 np0005542249 nova_compute[254900]: 2025-12-02 11:34:23.678 254904 DEBUG nova.virt.libvirt.driver [None req-f8c19ebf-4eaa-44ad-a7b9-52968fa7042e a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 40ecc98a-3a9e-4cd0-8546-64b7420be45f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec  2 06:34:23 np0005542249 nova_compute[254900]: 2025-12-02 11:34:23.682 254904 INFO nova.virt.libvirt.driver [None req-f8c19ebf-4eaa-44ad-a7b9-52968fa7042e a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Successfully detached device vdb from instance 40ecc98a-3a9e-4cd0-8546-64b7420be45f from the live domain config.#033[00m
Dec  2 06:34:23 np0005542249 nova_compute[254900]: 2025-12-02 11:34:23.868 254904 DEBUG nova.objects.instance [None req-f8c19ebf-4eaa-44ad-a7b9-52968fa7042e a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lazy-loading 'flavor' on Instance uuid 40ecc98a-3a9e-4cd0-8546-64b7420be45f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:34:23 np0005542249 nova_compute[254900]: 2025-12-02 11:34:23.906 254904 DEBUG oslo_concurrency.lockutils [None req-f8c19ebf-4eaa-44ad-a7b9-52968fa7042e a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:34:24 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1793: 321 pgs: 321 active+clean; 167 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 32 KiB/s wr, 78 op/s
Dec  2 06:34:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e454 do_prune osdmap full prune enabled
Dec  2 06:34:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e455 e455: 3 total, 3 up, 3 in
Dec  2 06:34:24 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e455: 3 total, 3 up, 3 in
Dec  2 06:34:24 np0005542249 nova_compute[254900]: 2025-12-02 11:34:24.596 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:34:24 np0005542249 nova_compute[254900]: 2025-12-02 11:34:24.598 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:34:24 np0005542249 nova_compute[254900]: 2025-12-02 11:34:24.598 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec  2 06:34:24 np0005542249 nova_compute[254900]: 2025-12-02 11:34:24.598 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  2 06:34:24 np0005542249 nova_compute[254900]: 2025-12-02 11:34:24.600 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:24 np0005542249 nova_compute[254900]: 2025-12-02 11:34:24.601 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  2 06:34:24 np0005542249 nova_compute[254900]: 2025-12-02 11:34:24.887 254904 DEBUG oslo_concurrency.lockutils [None req-66e9acf7-287f-4a87-8593-797a2153f533 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:34:24 np0005542249 nova_compute[254900]: 2025-12-02 11:34:24.887 254904 DEBUG oslo_concurrency.lockutils [None req-66e9acf7-287f-4a87-8593-797a2153f533 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:34:24 np0005542249 nova_compute[254900]: 2025-12-02 11:34:24.888 254904 DEBUG oslo_concurrency.lockutils [None req-66e9acf7-287f-4a87-8593-797a2153f533 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:34:24 np0005542249 nova_compute[254900]: 2025-12-02 11:34:24.888 254904 DEBUG oslo_concurrency.lockutils [None req-66e9acf7-287f-4a87-8593-797a2153f533 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:34:24 np0005542249 nova_compute[254900]: 2025-12-02 11:34:24.888 254904 DEBUG oslo_concurrency.lockutils [None req-66e9acf7-287f-4a87-8593-797a2153f533 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:34:24 np0005542249 nova_compute[254900]: 2025-12-02 11:34:24.889 254904 INFO nova.compute.manager [None req-66e9acf7-287f-4a87-8593-797a2153f533 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Terminating instance#033[00m
Dec  2 06:34:24 np0005542249 nova_compute[254900]: 2025-12-02 11:34:24.891 254904 DEBUG nova.compute.manager [None req-66e9acf7-287f-4a87-8593-797a2153f533 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:34:25 np0005542249 kernel: tap889242f4-cf (unregistering): left promiscuous mode
Dec  2 06:34:25 np0005542249 NetworkManager[48987]: <info>  [1764675265.0263] device (tap889242f4-cf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:34:25 np0005542249 nova_compute[254900]: 2025-12-02 11:34:25.041 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:25 np0005542249 ovn_controller[153849]: 2025-12-02T11:34:25Z|00267|binding|INFO|Releasing lport 889242f4-cffc-42cc-8e15-9e4f88ce1b9f from this chassis (sb_readonly=0)
Dec  2 06:34:25 np0005542249 ovn_controller[153849]: 2025-12-02T11:34:25Z|00268|binding|INFO|Setting lport 889242f4-cffc-42cc-8e15-9e4f88ce1b9f down in Southbound
Dec  2 06:34:25 np0005542249 ovn_controller[153849]: 2025-12-02T11:34:25Z|00269|binding|INFO|Removing iface tap889242f4-cf ovn-installed in OVS
Dec  2 06:34:25 np0005542249 nova_compute[254900]: 2025-12-02 11:34:25.047 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:34:25.052 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:04:5e 10.100.0.9'], port_security=['fa:16:3e:64:04:5e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '40ecc98a-3a9e-4cd0-8546-64b7420be45f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28b69a92-5b45-421b-9985-afeebc6820aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58574186a4fd405e83f1a4b650ea8e8c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1ba0a22d-186b-4c46-bef4-6771a6cf2be0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.211'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81aeb855-c9bf-4f95-90d1-85f514f075e1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=889242f4-cffc-42cc-8e15-9e4f88ce1b9f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:34:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:34:25.053 163757 INFO neutron.agent.ovn.metadata.agent [-] Port 889242f4-cffc-42cc-8e15-9e4f88ce1b9f in datapath 28b69a92-5b45-421b-9985-afeebc6820aa unbound from our chassis#033[00m
Dec  2 06:34:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:34:25.055 163757 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 28b69a92-5b45-421b-9985-afeebc6820aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 06:34:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:34:25.057 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[d198ac19-4cd4-439a-93c7-8a5972adceed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:34:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:34:25.058 163757 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa namespace which is not needed anymore#033[00m
Dec  2 06:34:25 np0005542249 nova_compute[254900]: 2025-12-02 11:34:25.090 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:25 np0005542249 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Dec  2 06:34:25 np0005542249 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001c.scope: Consumed 17.091s CPU time.
Dec  2 06:34:25 np0005542249 systemd-machined[216222]: Machine qemu-28-instance-0000001c terminated.
Dec  2 06:34:25 np0005542249 neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa[295823]: [NOTICE]   (295827) : haproxy version is 2.8.14-c23fe91
Dec  2 06:34:25 np0005542249 neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa[295823]: [NOTICE]   (295827) : path to executable is /usr/sbin/haproxy
Dec  2 06:34:25 np0005542249 neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa[295823]: [WARNING]  (295827) : Exiting Master process...
Dec  2 06:34:25 np0005542249 neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa[295823]: [WARNING]  (295827) : Exiting Master process...
Dec  2 06:34:25 np0005542249 neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa[295823]: [ALERT]    (295827) : Current worker (295829) exited with code 143 (Terminated)
Dec  2 06:34:25 np0005542249 neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa[295823]: [WARNING]  (295827) : All workers exited. Exiting... (0)
Dec  2 06:34:25 np0005542249 systemd[1]: libpod-f9ca6c997bfac90061c335cc9112d9cf0919a269d6d5be9451b25dadf5296173.scope: Deactivated successfully.
Dec  2 06:34:25 np0005542249 podman[295982]: 2025-12-02 11:34:25.261681476 +0000 UTC m=+0.069628689 container died f9ca6c997bfac90061c335cc9112d9cf0919a269d6d5be9451b25dadf5296173 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:34:25 np0005542249 nova_compute[254900]: 2025-12-02 11:34:25.316 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:25 np0005542249 nova_compute[254900]: 2025-12-02 11:34:25.324 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:25 np0005542249 nova_compute[254900]: 2025-12-02 11:34:25.340 254904 INFO nova.virt.libvirt.driver [-] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Instance destroyed successfully.#033[00m
Dec  2 06:34:25 np0005542249 nova_compute[254900]: 2025-12-02 11:34:25.340 254904 DEBUG nova.objects.instance [None req-66e9acf7-287f-4a87-8593-797a2153f533 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lazy-loading 'resources' on Instance uuid 40ecc98a-3a9e-4cd0-8546-64b7420be45f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:34:25 np0005542249 nova_compute[254900]: 2025-12-02 11:34:25.362 254904 DEBUG nova.virt.libvirt.vif [None req-66e9acf7-287f-4a87-8593-797a2153f533 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:33:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-165834699',display_name='tempest-TestEncryptedCinderVolumes-server-165834699',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-165834699',id=28,image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOJaPboaOa6uJHrsO2ao3rYwxmsZvzR/4lsVKCuOmmEhTjmIU2xPM9z7bpX4IbAFcb+u8hSQH3jDbrwYw+zDE/3XLvANn6+hOVGVetf9FOHitJV2GO1PjENl+cxBSc7TQ==',key_name='tempest-keypair-1610868451',keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:33:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='58574186a4fd405e83f1a4b650ea8e8c',ramdisk_id='',reservation_id='r-oiph9p2e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5a40f66c-ab43-47dd-9880-e59f9fa2c60e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-337876243',owner_user_name='tempest-TestEncryptedCinderVolumes-337876243-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:33:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a003d9cef7684ec48ed996b22c11419e',uuid=40ecc98a-3a9e-4cd0-8546-64b7420be45f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "889242f4-cffc-42cc-8e15-9e4f88ce1b9f", "address": "fa:16:3e:64:04:5e", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap889242f4-cf", "ovs_interfaceid": "889242f4-cffc-42cc-8e15-9e4f88ce1b9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:34:25 np0005542249 nova_compute[254900]: 2025-12-02 11:34:25.363 254904 DEBUG nova.network.os_vif_util [None req-66e9acf7-287f-4a87-8593-797a2153f533 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Converting VIF {"id": "889242f4-cffc-42cc-8e15-9e4f88ce1b9f", "address": "fa:16:3e:64:04:5e", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap889242f4-cf", "ovs_interfaceid": "889242f4-cffc-42cc-8e15-9e4f88ce1b9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:34:25 np0005542249 nova_compute[254900]: 2025-12-02 11:34:25.364 254904 DEBUG nova.network.os_vif_util [None req-66e9acf7-287f-4a87-8593-797a2153f533 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:64:04:5e,bridge_name='br-int',has_traffic_filtering=True,id=889242f4-cffc-42cc-8e15-9e4f88ce1b9f,network=Network(28b69a92-5b45-421b-9985-afeebc6820aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap889242f4-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:34:25 np0005542249 nova_compute[254900]: 2025-12-02 11:34:25.364 254904 DEBUG os_vif [None req-66e9acf7-287f-4a87-8593-797a2153f533 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:04:5e,bridge_name='br-int',has_traffic_filtering=True,id=889242f4-cffc-42cc-8e15-9e4f88ce1b9f,network=Network(28b69a92-5b45-421b-9985-afeebc6820aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap889242f4-cf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:34:25 np0005542249 nova_compute[254900]: 2025-12-02 11:34:25.367 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:25 np0005542249 nova_compute[254900]: 2025-12-02 11:34:25.367 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap889242f4-cf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:34:25 np0005542249 nova_compute[254900]: 2025-12-02 11:34:25.373 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:34:25 np0005542249 nova_compute[254900]: 2025-12-02 11:34:25.377 254904 INFO os_vif [None req-66e9acf7-287f-4a87-8593-797a2153f533 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:04:5e,bridge_name='br-int',has_traffic_filtering=True,id=889242f4-cffc-42cc-8e15-9e4f88ce1b9f,network=Network(28b69a92-5b45-421b-9985-afeebc6820aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap889242f4-cf')#033[00m
Dec  2 06:34:25 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f9ca6c997bfac90061c335cc9112d9cf0919a269d6d5be9451b25dadf5296173-userdata-shm.mount: Deactivated successfully.
Dec  2 06:34:25 np0005542249 systemd[1]: var-lib-containers-storage-overlay-f5da5603a9d6b21748a887b97430345936c3d9fcef42428bba9ffbf492f024d6-merged.mount: Deactivated successfully.
Dec  2 06:34:25 np0005542249 nova_compute[254900]: 2025-12-02 11:34:25.418 254904 DEBUG nova.compute.manager [req-160087fe-1bdc-4002-ad39-5b91a585165e req-7b56d3f8-1c7d-41fc-9639-729020e060b6 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Received event network-vif-unplugged-889242f4-cffc-42cc-8e15-9e4f88ce1b9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:34:25 np0005542249 nova_compute[254900]: 2025-12-02 11:34:25.418 254904 DEBUG oslo_concurrency.lockutils [req-160087fe-1bdc-4002-ad39-5b91a585165e req-7b56d3f8-1c7d-41fc-9639-729020e060b6 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:34:25 np0005542249 nova_compute[254900]: 2025-12-02 11:34:25.419 254904 DEBUG oslo_concurrency.lockutils [req-160087fe-1bdc-4002-ad39-5b91a585165e req-7b56d3f8-1c7d-41fc-9639-729020e060b6 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:34:25 np0005542249 nova_compute[254900]: 2025-12-02 11:34:25.419 254904 DEBUG oslo_concurrency.lockutils [req-160087fe-1bdc-4002-ad39-5b91a585165e req-7b56d3f8-1c7d-41fc-9639-729020e060b6 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:34:25 np0005542249 nova_compute[254900]: 2025-12-02 11:34:25.419 254904 DEBUG nova.compute.manager [req-160087fe-1bdc-4002-ad39-5b91a585165e req-7b56d3f8-1c7d-41fc-9639-729020e060b6 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] No waiting events found dispatching network-vif-unplugged-889242f4-cffc-42cc-8e15-9e4f88ce1b9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:34:25 np0005542249 nova_compute[254900]: 2025-12-02 11:34:25.420 254904 DEBUG nova.compute.manager [req-160087fe-1bdc-4002-ad39-5b91a585165e req-7b56d3f8-1c7d-41fc-9639-729020e060b6 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Received event network-vif-unplugged-889242f4-cffc-42cc-8e15-9e4f88ce1b9f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 06:34:25 np0005542249 podman[295982]: 2025-12-02 11:34:25.430444038 +0000 UTC m=+0.238391251 container cleanup f9ca6c997bfac90061c335cc9112d9cf0919a269d6d5be9451b25dadf5296173 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  2 06:34:25 np0005542249 systemd[1]: libpod-conmon-f9ca6c997bfac90061c335cc9112d9cf0919a269d6d5be9451b25dadf5296173.scope: Deactivated successfully.
Dec  2 06:34:25 np0005542249 podman[296040]: 2025-12-02 11:34:25.517807104 +0000 UTC m=+0.058380566 container remove f9ca6c997bfac90061c335cc9112d9cf0919a269d6d5be9451b25dadf5296173 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:34:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:34:25.528 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[11ae4984-3bc8-4f72-827b-a783590b827a]: (4, ('Tue Dec  2 11:34:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa (f9ca6c997bfac90061c335cc9112d9cf0919a269d6d5be9451b25dadf5296173)\nf9ca6c997bfac90061c335cc9112d9cf0919a269d6d5be9451b25dadf5296173\nTue Dec  2 11:34:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa (f9ca6c997bfac90061c335cc9112d9cf0919a269d6d5be9451b25dadf5296173)\nf9ca6c997bfac90061c335cc9112d9cf0919a269d6d5be9451b25dadf5296173\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:34:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:34:25.531 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[41a8a991-2ff0-443f-8015-fa4d08b1155d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:34:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:34:25.532 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28b69a92-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:34:25 np0005542249 nova_compute[254900]: 2025-12-02 11:34:25.534 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:25 np0005542249 kernel: tap28b69a92-50: left promiscuous mode
Dec  2 06:34:25 np0005542249 nova_compute[254900]: 2025-12-02 11:34:25.536 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:34:25.541 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[10a2f6ee-9f6c-4682-94d0-77d1be1d4171]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:34:25 np0005542249 nova_compute[254900]: 2025-12-02 11:34:25.556 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:34:25.566 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[dcb323b8-c99d-4cf6-87dc-d43fdd022cd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:34:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:34:25.568 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[92976ad1-3146-400f-967f-c0ece54781fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:34:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:34:25.592 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[dac08887-88ef-4d8c-9ae0-c89f82a5046c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542843, 'reachable_time': 29837, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296056, 'error': None, 'target': 'ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:34:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:34:25.596 164036 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 06:34:25 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:34:25.597 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[39d0d213-a96b-41a1-87c3-c3e8efd4eca9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:34:25 np0005542249 systemd[1]: run-netns-ovnmeta\x2d28b69a92\x2d5b45\x2d421b\x2d9985\x2dafeebc6820aa.mount: Deactivated successfully.
Dec  2 06:34:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:34:26
Dec  2 06:34:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:34:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:34:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'images', 'default.rgw.log', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups']
Dec  2 06:34:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:34:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:34:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:34:26 np0005542249 nova_compute[254900]: 2025-12-02 11:34:26.392 254904 INFO nova.virt.libvirt.driver [None req-66e9acf7-287f-4a87-8593-797a2153f533 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Deleting instance files /var/lib/nova/instances/40ecc98a-3a9e-4cd0-8546-64b7420be45f_del#033[00m
Dec  2 06:34:26 np0005542249 nova_compute[254900]: 2025-12-02 11:34:26.393 254904 INFO nova.virt.libvirt.driver [None req-66e9acf7-287f-4a87-8593-797a2153f533 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Deletion of /var/lib/nova/instances/40ecc98a-3a9e-4cd0-8546-64b7420be45f_del complete#033[00m
Dec  2 06:34:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:34:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:34:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:34:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:34:26 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1795: 321 pgs: 321 active+clean; 167 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 10 KiB/s wr, 44 op/s
Dec  2 06:34:26 np0005542249 nova_compute[254900]: 2025-12-02 11:34:26.576 254904 INFO nova.compute.manager [None req-66e9acf7-287f-4a87-8593-797a2153f533 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Took 1.69 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:34:26 np0005542249 nova_compute[254900]: 2025-12-02 11:34:26.577 254904 DEBUG oslo.service.loopingcall [None req-66e9acf7-287f-4a87-8593-797a2153f533 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:34:26 np0005542249 nova_compute[254900]: 2025-12-02 11:34:26.577 254904 DEBUG nova.compute.manager [-] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:34:26 np0005542249 nova_compute[254900]: 2025-12-02 11:34:26.577 254904 DEBUG nova.network.neutron [-] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:34:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e455 do_prune osdmap full prune enabled
Dec  2 06:34:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:34:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:34:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:34:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:34:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:34:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:34:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:34:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:34:26 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e456 e456: 3 total, 3 up, 3 in
Dec  2 06:34:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:34:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:34:26 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e456: 3 total, 3 up, 3 in
Dec  2 06:34:27 np0005542249 ceph-mgr[75372]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2127781581
Dec  2 06:34:27 np0005542249 nova_compute[254900]: 2025-12-02 11:34:27.510 254904 DEBUG nova.compute.manager [req-664de792-d733-42f7-b73a-2dd4cc6869cd req-8b963d67-ffa1-4725-b82c-213351fc0573 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Received event network-vif-plugged-889242f4-cffc-42cc-8e15-9e4f88ce1b9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:34:27 np0005542249 nova_compute[254900]: 2025-12-02 11:34:27.511 254904 DEBUG oslo_concurrency.lockutils [req-664de792-d733-42f7-b73a-2dd4cc6869cd req-8b963d67-ffa1-4725-b82c-213351fc0573 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:34:27 np0005542249 nova_compute[254900]: 2025-12-02 11:34:27.511 254904 DEBUG oslo_concurrency.lockutils [req-664de792-d733-42f7-b73a-2dd4cc6869cd req-8b963d67-ffa1-4725-b82c-213351fc0573 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:34:27 np0005542249 nova_compute[254900]: 2025-12-02 11:34:27.512 254904 DEBUG oslo_concurrency.lockutils [req-664de792-d733-42f7-b73a-2dd4cc6869cd req-8b963d67-ffa1-4725-b82c-213351fc0573 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:34:27 np0005542249 nova_compute[254900]: 2025-12-02 11:34:27.512 254904 DEBUG nova.compute.manager [req-664de792-d733-42f7-b73a-2dd4cc6869cd req-8b963d67-ffa1-4725-b82c-213351fc0573 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] No waiting events found dispatching network-vif-plugged-889242f4-cffc-42cc-8e15-9e4f88ce1b9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:34:27 np0005542249 nova_compute[254900]: 2025-12-02 11:34:27.512 254904 WARNING nova.compute.manager [req-664de792-d733-42f7-b73a-2dd4cc6869cd req-8b963d67-ffa1-4725-b82c-213351fc0573 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Received unexpected event network-vif-plugged-889242f4-cffc-42cc-8e15-9e4f88ce1b9f for instance with vm_state active and task_state deleting.#033[00m
Dec  2 06:34:27 np0005542249 nova_compute[254900]: 2025-12-02 11:34:27.586 254904 DEBUG nova.network.neutron [-] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:34:27 np0005542249 nova_compute[254900]: 2025-12-02 11:34:27.708 254904 INFO nova.compute.manager [-] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Took 1.13 seconds to deallocate network for instance.#033[00m
Dec  2 06:34:27 np0005542249 nova_compute[254900]: 2025-12-02 11:34:27.918 254904 DEBUG oslo_concurrency.lockutils [None req-66e9acf7-287f-4a87-8593-797a2153f533 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:34:27 np0005542249 nova_compute[254900]: 2025-12-02 11:34:27.918 254904 DEBUG oslo_concurrency.lockutils [None req-66e9acf7-287f-4a87-8593-797a2153f533 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:34:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:34:27 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1005857750' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:34:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:34:27 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1005857750' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:34:27 np0005542249 nova_compute[254900]: 2025-12-02 11:34:27.998 254904 DEBUG oslo_concurrency.processutils [None req-66e9acf7-287f-4a87-8593-797a2153f533 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:34:28 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e456 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:34:28 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e456 do_prune osdmap full prune enabled
Dec  2 06:34:28 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e457 e457: 3 total, 3 up, 3 in
Dec  2 06:34:28 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e457: 3 total, 3 up, 3 in
Dec  2 06:34:28 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:34:28 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1314502677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:34:28 np0005542249 nova_compute[254900]: 2025-12-02 11:34:28.442 254904 DEBUG oslo_concurrency.processutils [None req-66e9acf7-287f-4a87-8593-797a2153f533 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:34:28 np0005542249 nova_compute[254900]: 2025-12-02 11:34:28.451 254904 DEBUG nova.compute.provider_tree [None req-66e9acf7-287f-4a87-8593-797a2153f533 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:34:28 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1798: 321 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 317 active+clean; 115 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 119 KiB/s rd, 6.5 KiB/s wr, 98 op/s
Dec  2 06:34:28 np0005542249 nova_compute[254900]: 2025-12-02 11:34:28.466 254904 DEBUG nova.scheduler.client.report [None req-66e9acf7-287f-4a87-8593-797a2153f533 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:34:28 np0005542249 nova_compute[254900]: 2025-12-02 11:34:28.494 254904 DEBUG oslo_concurrency.lockutils [None req-66e9acf7-287f-4a87-8593-797a2153f533 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:34:28 np0005542249 nova_compute[254900]: 2025-12-02 11:34:28.538 254904 INFO nova.scheduler.client.report [None req-66e9acf7-287f-4a87-8593-797a2153f533 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Deleted allocations for instance 40ecc98a-3a9e-4cd0-8546-64b7420be45f#033[00m
Dec  2 06:34:28 np0005542249 nova_compute[254900]: 2025-12-02 11:34:28.617 254904 DEBUG oslo_concurrency.lockutils [None req-66e9acf7-287f-4a87-8593-797a2153f533 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "40ecc98a-3a9e-4cd0-8546-64b7420be45f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:34:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e457 do_prune osdmap full prune enabled
Dec  2 06:34:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e458 e458: 3 total, 3 up, 3 in
Dec  2 06:34:29 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e458: 3 total, 3 up, 3 in
Dec  2 06:34:29 np0005542249 nova_compute[254900]: 2025-12-02 11:34:29.652 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:29 np0005542249 nova_compute[254900]: 2025-12-02 11:34:29.660 254904 DEBUG nova.compute.manager [req-f719e4ed-c97d-4c4c-822d-0f780ae58be9 req-bcb7815b-50ea-4e88-b9cb-05187a1521df 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Received event network-vif-deleted-889242f4-cffc-42cc-8e15-9e4f88ce1b9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:34:30 np0005542249 nova_compute[254900]: 2025-12-02 11:34:30.369 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:30 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1800: 321 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 317 active+clean; 88 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 5.4 KiB/s wr, 103 op/s
Dec  2 06:34:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e458 do_prune osdmap full prune enabled
Dec  2 06:34:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e459 e459: 3 total, 3 up, 3 in
Dec  2 06:34:31 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e459: 3 total, 3 up, 3 in
Dec  2 06:34:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:34:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/374473168' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:34:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:34:31 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/374473168' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:34:31 np0005542249 nova_compute[254900]: 2025-12-02 11:34:31.383 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:34:31 np0005542249 nova_compute[254900]: 2025-12-02 11:34:31.384 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  2 06:34:31 np0005542249 nova_compute[254900]: 2025-12-02 11:34:31.401 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  2 06:34:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:34:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1660876149' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:34:32 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:34:32 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1660876149' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:34:32 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1802: 321 pgs: 321 active+clean; 88 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 5.6 KiB/s wr, 134 op/s
Dec  2 06:34:33 np0005542249 podman[296080]: 2025-12-02 11:34:33.001044229 +0000 UTC m=+0.074206322 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:34:33 np0005542249 podman[296081]: 2025-12-02 11:34:33.033434623 +0000 UTC m=+0.105542549 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:34:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e459 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:34:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e459 do_prune osdmap full prune enabled
Dec  2 06:34:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e460 e460: 3 total, 3 up, 3 in
Dec  2 06:34:33 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e460: 3 total, 3 up, 3 in
Dec  2 06:34:34 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1804: 321 pgs: 321 active+clean; 88 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 152 KiB/s rd, 5.2 KiB/s wr, 200 op/s
Dec  2 06:34:34 np0005542249 nova_compute[254900]: 2025-12-02 11:34:34.655 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:35 np0005542249 nova_compute[254900]: 2025-12-02 11:34:35.372 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e460 do_prune osdmap full prune enabled
Dec  2 06:34:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e461 e461: 3 total, 3 up, 3 in
Dec  2 06:34:35 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e461: 3 total, 3 up, 3 in
Dec  2 06:34:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:34:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:34:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:34:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:34:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  2 06:34:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:34:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003516566142475744 of space, bias 1.0, pg target 0.10549698427427232 quantized to 32 (current 32)
Dec  2 06:34:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:34:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Dec  2 06:34:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:34:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Dec  2 06:34:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:34:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:34:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:34:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:34:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:34:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:34:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:34:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:34:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:34:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:34:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:34:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:34:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:34:36 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/38231188' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:34:36 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:34:36 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/38231188' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:34:36 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1806: 321 pgs: 321 active+clean; 88 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 4.0 KiB/s wr, 169 op/s
Dec  2 06:34:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e461 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:34:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e461 do_prune osdmap full prune enabled
Dec  2 06:34:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e462 e462: 3 total, 3 up, 3 in
Dec  2 06:34:38 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e462: 3 total, 3 up, 3 in
Dec  2 06:34:38 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1808: 321 pgs: 321 active+clean; 88 MiB data, 439 MiB used, 60 GiB / 60 GiB avail; 144 KiB/s rd, 5.8 KiB/s wr, 188 op/s
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:34:39.326218) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675279326270, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1247, "num_deletes": 259, "total_data_size": 1593767, "memory_usage": 1617472, "flush_reason": "Manual Compaction"}
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675279340611, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 1561626, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35642, "largest_seqno": 36888, "table_properties": {"data_size": 1555539, "index_size": 3354, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13831, "raw_average_key_size": 20, "raw_value_size": 1543042, "raw_average_value_size": 2337, "num_data_blocks": 146, "num_entries": 660, "num_filter_entries": 660, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764675209, "oldest_key_time": 1764675209, "file_creation_time": 1764675279, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 14452 microseconds, and 8441 cpu microseconds.
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:34:39.340668) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 1561626 bytes OK
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:34:39.340693) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:34:39.342443) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:34:39.342469) EVENT_LOG_v1 {"time_micros": 1764675279342459, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:34:39.342496) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 1587839, prev total WAL file size 1587839, number of live WAL files 2.
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:34:39.344873) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(1525KB)], [74(10MB)]
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675279344987, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 12089928, "oldest_snapshot_seqno": -1}
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6645 keys, 10334747 bytes, temperature: kUnknown
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675279436177, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 10334747, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10285239, "index_size": 31796, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16645, "raw_key_size": 168497, "raw_average_key_size": 25, "raw_value_size": 10160934, "raw_average_value_size": 1529, "num_data_blocks": 1265, "num_entries": 6645, "num_filter_entries": 6645, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672515, "oldest_key_time": 0, "file_creation_time": 1764675279, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:34:39.436567) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 10334747 bytes
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:34:39.438370) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.5 rd, 113.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 10.0 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(14.4) write-amplify(6.6) OK, records in: 7173, records dropped: 528 output_compression: NoCompression
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:34:39.438399) EVENT_LOG_v1 {"time_micros": 1764675279438385, "job": 42, "event": "compaction_finished", "compaction_time_micros": 91229, "compaction_time_cpu_micros": 49406, "output_level": 6, "num_output_files": 1, "total_output_size": 10334747, "num_input_records": 7173, "num_output_records": 6645, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675279439066, "job": 42, "event": "table_file_deletion", "file_number": 76}
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675279442806, "job": 42, "event": "table_file_deletion", "file_number": 74}
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:34:39.343541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:34:39.442861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:34:39.442867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:34:39.442871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:34:39.442874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:34:39 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:34:39.442877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:34:39 np0005542249 nova_compute[254900]: 2025-12-02 11:34:39.657 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:40 np0005542249 nova_compute[254900]: 2025-12-02 11:34:40.338 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764675265.3374264, 40ecc98a-3a9e-4cd0-8546-64b7420be45f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:34:40 np0005542249 nova_compute[254900]: 2025-12-02 11:34:40.339 254904 INFO nova.compute.manager [-] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] VM Stopped (Lifecycle Event)#033[00m
Dec  2 06:34:40 np0005542249 nova_compute[254900]: 2025-12-02 11:34:40.361 254904 DEBUG nova.compute.manager [None req-314d2540-53cb-432f-8f41-6d5e3951fcd6 - - - - - -] [instance: 40ecc98a-3a9e-4cd0-8546-64b7420be45f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:34:40 np0005542249 nova_compute[254900]: 2025-12-02 11:34:40.403 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:40 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1809: 321 pgs: 321 active+clean; 88 MiB data, 439 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 2.5 KiB/s wr, 86 op/s
Dec  2 06:34:42 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1810: 321 pgs: 321 active+clean; 88 MiB data, 439 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 KiB/s wr, 35 op/s
Dec  2 06:34:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e462 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:34:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e462 do_prune osdmap full prune enabled
Dec  2 06:34:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e463 e463: 3 total, 3 up, 3 in
Dec  2 06:34:43 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e463: 3 total, 3 up, 3 in
Dec  2 06:34:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:34:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:34:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:34:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:34:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:34:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:34:43 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 715b1c49-900b-4a9a-ab93-b5d3a039fc22 does not exist
Dec  2 06:34:43 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 0862a3b6-b6d8-4036-9852-e99ac376d818 does not exist
Dec  2 06:34:43 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 0fcde912-ae46-4883-b02b-35a56f15359c does not exist
Dec  2 06:34:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:34:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:34:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:34:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:34:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:34:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:34:44 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1812: 321 pgs: 321 active+clean; 88 MiB data, 439 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.6 KiB/s wr, 44 op/s
Dec  2 06:34:44 np0005542249 podman[296401]: 2025-12-02 11:34:44.546205885 +0000 UTC m=+0.042790494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:34:44 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:34:44 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:34:44 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:34:44 np0005542249 podman[296401]: 2025-12-02 11:34:44.650965311 +0000 UTC m=+0.147549870 container create 56162b7d7975c160eed00fca7fe17e2fc352c227e84431511b75c018a598767d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Dec  2 06:34:44 np0005542249 nova_compute[254900]: 2025-12-02 11:34:44.661 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:44 np0005542249 systemd[1]: Started libpod-conmon-56162b7d7975c160eed00fca7fe17e2fc352c227e84431511b75c018a598767d.scope.
Dec  2 06:34:44 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:34:44 np0005542249 podman[296401]: 2025-12-02 11:34:44.777556646 +0000 UTC m=+0.274141275 container init 56162b7d7975c160eed00fca7fe17e2fc352c227e84431511b75c018a598767d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  2 06:34:44 np0005542249 podman[296401]: 2025-12-02 11:34:44.785699954 +0000 UTC m=+0.282284493 container start 56162b7d7975c160eed00fca7fe17e2fc352c227e84431511b75c018a598767d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_zhukovsky, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:34:44 np0005542249 podman[296401]: 2025-12-02 11:34:44.789274001 +0000 UTC m=+0.285858630 container attach 56162b7d7975c160eed00fca7fe17e2fc352c227e84431511b75c018a598767d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:34:44 np0005542249 peaceful_zhukovsky[296418]: 167 167
Dec  2 06:34:44 np0005542249 systemd[1]: libpod-56162b7d7975c160eed00fca7fe17e2fc352c227e84431511b75c018a598767d.scope: Deactivated successfully.
Dec  2 06:34:44 np0005542249 podman[296401]: 2025-12-02 11:34:44.797519614 +0000 UTC m=+0.294104163 container died 56162b7d7975c160eed00fca7fe17e2fc352c227e84431511b75c018a598767d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  2 06:34:44 np0005542249 systemd[1]: var-lib-containers-storage-overlay-4bf85ecc6f809c80c887a0d25e3ec4924c496c3265e95f754f2b28ba1555d30e-merged.mount: Deactivated successfully.
Dec  2 06:34:44 np0005542249 podman[296401]: 2025-12-02 11:34:44.846819313 +0000 UTC m=+0.343403852 container remove 56162b7d7975c160eed00fca7fe17e2fc352c227e84431511b75c018a598767d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  2 06:34:44 np0005542249 systemd[1]: libpod-conmon-56162b7d7975c160eed00fca7fe17e2fc352c227e84431511b75c018a598767d.scope: Deactivated successfully.
Dec  2 06:34:45 np0005542249 podman[296442]: 2025-12-02 11:34:45.094204856 +0000 UTC m=+0.063146244 container create 995077494dcccfdd01ef59a8c1ee46d26e45e3017744c7f28a7c63454f65a44f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_perlman, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  2 06:34:45 np0005542249 systemd[1]: Started libpod-conmon-995077494dcccfdd01ef59a8c1ee46d26e45e3017744c7f28a7c63454f65a44f.scope.
Dec  2 06:34:45 np0005542249 podman[296442]: 2025-12-02 11:34:45.064904655 +0000 UTC m=+0.033846053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:34:45 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:34:45 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbf5379f9133c8eb8a8a6d1f83df4b15e66c87d6ee172fd2052de8763d9fdc0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:34:45 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbf5379f9133c8eb8a8a6d1f83df4b15e66c87d6ee172fd2052de8763d9fdc0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:34:45 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbf5379f9133c8eb8a8a6d1f83df4b15e66c87d6ee172fd2052de8763d9fdc0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:34:45 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbf5379f9133c8eb8a8a6d1f83df4b15e66c87d6ee172fd2052de8763d9fdc0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:34:45 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbf5379f9133c8eb8a8a6d1f83df4b15e66c87d6ee172fd2052de8763d9fdc0f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:34:45 np0005542249 podman[296442]: 2025-12-02 11:34:45.211593042 +0000 UTC m=+0.180534440 container init 995077494dcccfdd01ef59a8c1ee46d26e45e3017744c7f28a7c63454f65a44f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:34:45 np0005542249 podman[296442]: 2025-12-02 11:34:45.223929244 +0000 UTC m=+0.192870652 container start 995077494dcccfdd01ef59a8c1ee46d26e45e3017744c7f28a7c63454f65a44f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  2 06:34:45 np0005542249 podman[296442]: 2025-12-02 11:34:45.22972396 +0000 UTC m=+0.198665348 container attach 995077494dcccfdd01ef59a8c1ee46d26e45e3017744c7f28a7c63454f65a44f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_perlman, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Dec  2 06:34:45 np0005542249 nova_compute[254900]: 2025-12-02 11:34:45.405 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e463 do_prune osdmap full prune enabled
Dec  2 06:34:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e464 e464: 3 total, 3 up, 3 in
Dec  2 06:34:45 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e464: 3 total, 3 up, 3 in
Dec  2 06:34:46 np0005542249 nova_compute[254900]: 2025-12-02 11:34:46.402 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:34:46 np0005542249 nova_compute[254900]: 2025-12-02 11:34:46.402 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 06:34:46 np0005542249 nova_compute[254900]: 2025-12-02 11:34:46.402 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 06:34:46 np0005542249 nova_compute[254900]: 2025-12-02 11:34:46.421 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 06:34:46 np0005542249 nova_compute[254900]: 2025-12-02 11:34:46.423 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:34:46 np0005542249 nova_compute[254900]: 2025-12-02 11:34:46.423 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 06:34:46 np0005542249 youthful_perlman[296458]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:34:46 np0005542249 youthful_perlman[296458]: --> relative data size: 1.0
Dec  2 06:34:46 np0005542249 youthful_perlman[296458]: --> All data devices are unavailable
Dec  2 06:34:46 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1814: 321 pgs: 321 active+clean; 88 MiB data, 439 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 255 B/s wr, 10 op/s
Dec  2 06:34:46 np0005542249 systemd[1]: libpod-995077494dcccfdd01ef59a8c1ee46d26e45e3017744c7f28a7c63454f65a44f.scope: Deactivated successfully.
Dec  2 06:34:46 np0005542249 systemd[1]: libpod-995077494dcccfdd01ef59a8c1ee46d26e45e3017744c7f28a7c63454f65a44f.scope: Consumed 1.197s CPU time.
Dec  2 06:34:46 np0005542249 podman[296442]: 2025-12-02 11:34:46.47660495 +0000 UTC m=+1.445546318 container died 995077494dcccfdd01ef59a8c1ee46d26e45e3017744c7f28a7c63454f65a44f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:34:46 np0005542249 systemd[1]: var-lib-containers-storage-overlay-bbf5379f9133c8eb8a8a6d1f83df4b15e66c87d6ee172fd2052de8763d9fdc0f-merged.mount: Deactivated successfully.
Dec  2 06:34:46 np0005542249 podman[296442]: 2025-12-02 11:34:46.557446891 +0000 UTC m=+1.526388249 container remove 995077494dcccfdd01ef59a8c1ee46d26e45e3017744c7f28a7c63454f65a44f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_perlman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:34:46 np0005542249 systemd[1]: libpod-conmon-995077494dcccfdd01ef59a8c1ee46d26e45e3017744c7f28a7c63454f65a44f.scope: Deactivated successfully.
Dec  2 06:34:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e464 do_prune osdmap full prune enabled
Dec  2 06:34:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e465 e465: 3 total, 3 up, 3 in
Dec  2 06:34:46 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e465: 3 total, 3 up, 3 in
Dec  2 06:34:47 np0005542249 podman[296643]: 2025-12-02 11:34:47.389930443 +0000 UTC m=+0.065253040 container create f80e59b0f57e1bcf08c72020a53d76fde890b4f51b5ac8f9dbe691e544313bfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:34:47 np0005542249 systemd[1]: Started libpod-conmon-f80e59b0f57e1bcf08c72020a53d76fde890b4f51b5ac8f9dbe691e544313bfb.scope.
Dec  2 06:34:47 np0005542249 podman[296643]: 2025-12-02 11:34:47.358091925 +0000 UTC m=+0.033414562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:34:47 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:34:47 np0005542249 podman[296643]: 2025-12-02 11:34:47.485643345 +0000 UTC m=+0.160965982 container init f80e59b0f57e1bcf08c72020a53d76fde890b4f51b5ac8f9dbe691e544313bfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  2 06:34:47 np0005542249 podman[296643]: 2025-12-02 11:34:47.496155699 +0000 UTC m=+0.171478266 container start f80e59b0f57e1bcf08c72020a53d76fde890b4f51b5ac8f9dbe691e544313bfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bassi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:34:47 np0005542249 podman[296643]: 2025-12-02 11:34:47.501578025 +0000 UTC m=+0.176900612 container attach f80e59b0f57e1bcf08c72020a53d76fde890b4f51b5ac8f9dbe691e544313bfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bassi, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:34:47 np0005542249 intelligent_bassi[296659]: 167 167
Dec  2 06:34:47 np0005542249 systemd[1]: libpod-f80e59b0f57e1bcf08c72020a53d76fde890b4f51b5ac8f9dbe691e544313bfb.scope: Deactivated successfully.
Dec  2 06:34:47 np0005542249 podman[296643]: 2025-12-02 11:34:47.504329149 +0000 UTC m=+0.179651736 container died f80e59b0f57e1bcf08c72020a53d76fde890b4f51b5ac8f9dbe691e544313bfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bassi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:34:47 np0005542249 systemd[1]: var-lib-containers-storage-overlay-3ded758fe8bb8e8f74e66fc07556864db07bb8829e94ca3487d5fcc59fe50efa-merged.mount: Deactivated successfully.
Dec  2 06:34:47 np0005542249 podman[296643]: 2025-12-02 11:34:47.561804909 +0000 UTC m=+0.237127496 container remove f80e59b0f57e1bcf08c72020a53d76fde890b4f51b5ac8f9dbe691e544313bfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bassi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:34:47 np0005542249 systemd[1]: libpod-conmon-f80e59b0f57e1bcf08c72020a53d76fde890b4f51b5ac8f9dbe691e544313bfb.scope: Deactivated successfully.
Dec  2 06:34:47 np0005542249 podman[296681]: 2025-12-02 11:34:47.798152264 +0000 UTC m=+0.051948022 container create 95562e39ddf42ac602d48100d1fb8bfab8b1a91e1ff60867f844b66a7f39ec8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  2 06:34:47 np0005542249 systemd[1]: Started libpod-conmon-95562e39ddf42ac602d48100d1fb8bfab8b1a91e1ff60867f844b66a7f39ec8e.scope.
Dec  2 06:34:47 np0005542249 podman[296681]: 2025-12-02 11:34:47.779322656 +0000 UTC m=+0.033118394 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:34:47 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:34:47 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1586688eabfa1509db3a2720886ba694a85c930c76ab376c178777dbdf786507/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:34:47 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1586688eabfa1509db3a2720886ba694a85c930c76ab376c178777dbdf786507/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:34:47 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1586688eabfa1509db3a2720886ba694a85c930c76ab376c178777dbdf786507/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:34:47 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1586688eabfa1509db3a2720886ba694a85c930c76ab376c178777dbdf786507/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:34:47 np0005542249 podman[296681]: 2025-12-02 11:34:47.926068724 +0000 UTC m=+0.179864462 container init 95562e39ddf42ac602d48100d1fb8bfab8b1a91e1ff60867f844b66a7f39ec8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:34:47 np0005542249 podman[296681]: 2025-12-02 11:34:47.94149101 +0000 UTC m=+0.195286758 container start 95562e39ddf42ac602d48100d1fb8bfab8b1a91e1ff60867f844b66a7f39ec8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:34:47 np0005542249 podman[296681]: 2025-12-02 11:34:47.946551616 +0000 UTC m=+0.200347444 container attach 95562e39ddf42ac602d48100d1fb8bfab8b1a91e1ff60867f844b66a7f39ec8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hopper, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  2 06:34:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:34:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3928133089' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:34:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:34:48 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3928133089' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:34:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e465 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:34:48 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1816: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 88 MiB data, 439 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.0 KiB/s wr, 66 op/s
Dec  2 06:34:48 np0005542249 silly_hopper[296697]: {
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:    "0": [
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:        {
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "devices": [
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "/dev/loop3"
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            ],
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "lv_name": "ceph_lv0",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "lv_size": "21470642176",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "name": "ceph_lv0",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "tags": {
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.cluster_name": "ceph",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.crush_device_class": "",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.encrypted": "0",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.osd_id": "0",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.type": "block",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.vdo": "0"
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            },
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "type": "block",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "vg_name": "ceph_vg0"
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:        }
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:    ],
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:    "1": [
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:        {
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "devices": [
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "/dev/loop4"
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            ],
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "lv_name": "ceph_lv1",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "lv_size": "21470642176",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "name": "ceph_lv1",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "tags": {
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.cluster_name": "ceph",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.crush_device_class": "",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.encrypted": "0",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.osd_id": "1",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.type": "block",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.vdo": "0"
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            },
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "type": "block",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "vg_name": "ceph_vg1"
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:        }
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:    ],
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:    "2": [
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:        {
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "devices": [
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "/dev/loop5"
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            ],
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "lv_name": "ceph_lv2",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "lv_size": "21470642176",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "name": "ceph_lv2",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "tags": {
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.cluster_name": "ceph",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.crush_device_class": "",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.encrypted": "0",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.osd_id": "2",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.type": "block",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:                "ceph.vdo": "0"
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            },
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "type": "block",
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:            "vg_name": "ceph_vg2"
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:        }
Dec  2 06:34:48 np0005542249 silly_hopper[296697]:    ]
Dec  2 06:34:48 np0005542249 silly_hopper[296697]: }
Dec  2 06:34:48 np0005542249 systemd[1]: libpod-95562e39ddf42ac602d48100d1fb8bfab8b1a91e1ff60867f844b66a7f39ec8e.scope: Deactivated successfully.
Dec  2 06:34:48 np0005542249 podman[296681]: 2025-12-02 11:34:48.787438105 +0000 UTC m=+1.041233873 container died 95562e39ddf42ac602d48100d1fb8bfab8b1a91e1ff60867f844b66a7f39ec8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hopper, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:34:48 np0005542249 systemd[1]: var-lib-containers-storage-overlay-1586688eabfa1509db3a2720886ba694a85c930c76ab376c178777dbdf786507-merged.mount: Deactivated successfully.
Dec  2 06:34:48 np0005542249 podman[296681]: 2025-12-02 11:34:48.856792856 +0000 UTC m=+1.110588584 container remove 95562e39ddf42ac602d48100d1fb8bfab8b1a91e1ff60867f844b66a7f39ec8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:34:48 np0005542249 systemd[1]: libpod-conmon-95562e39ddf42ac602d48100d1fb8bfab8b1a91e1ff60867f844b66a7f39ec8e.scope: Deactivated successfully.
Dec  2 06:34:49 np0005542249 podman[296860]: 2025-12-02 11:34:49.67853272 +0000 UTC m=+0.059701382 container create 8cae536350da5b8a24a80b27a035800a10826726b86121e83825809f3c117666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wiles, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:34:49 np0005542249 nova_compute[254900]: 2025-12-02 11:34:49.698 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:49 np0005542249 systemd[1]: Started libpod-conmon-8cae536350da5b8a24a80b27a035800a10826726b86121e83825809f3c117666.scope.
Dec  2 06:34:49 np0005542249 podman[296860]: 2025-12-02 11:34:49.64963142 +0000 UTC m=+0.030800122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:34:49 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:34:49 np0005542249 podman[296860]: 2025-12-02 11:34:49.792149474 +0000 UTC m=+0.173318116 container init 8cae536350da5b8a24a80b27a035800a10826726b86121e83825809f3c117666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  2 06:34:49 np0005542249 podman[296860]: 2025-12-02 11:34:49.802663618 +0000 UTC m=+0.183832280 container start 8cae536350da5b8a24a80b27a035800a10826726b86121e83825809f3c117666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wiles, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  2 06:34:49 np0005542249 podman[296860]: 2025-12-02 11:34:49.80719467 +0000 UTC m=+0.188363312 container attach 8cae536350da5b8a24a80b27a035800a10826726b86121e83825809f3c117666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  2 06:34:49 np0005542249 boring_wiles[296876]: 167 167
Dec  2 06:34:49 np0005542249 systemd[1]: libpod-8cae536350da5b8a24a80b27a035800a10826726b86121e83825809f3c117666.scope: Deactivated successfully.
Dec  2 06:34:49 np0005542249 podman[296860]: 2025-12-02 11:34:49.811802144 +0000 UTC m=+0.192970806 container died 8cae536350da5b8a24a80b27a035800a10826726b86121e83825809f3c117666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wiles, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  2 06:34:49 np0005542249 systemd[1]: var-lib-containers-storage-overlay-bc626349bf201e81ed7266fb287371605bead0321af511785c2696f29afe484b-merged.mount: Deactivated successfully.
Dec  2 06:34:49 np0005542249 podman[296860]: 2025-12-02 11:34:49.863733875 +0000 UTC m=+0.244902507 container remove 8cae536350da5b8a24a80b27a035800a10826726b86121e83825809f3c117666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wiles, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Dec  2 06:34:49 np0005542249 systemd[1]: libpod-conmon-8cae536350da5b8a24a80b27a035800a10826726b86121e83825809f3c117666.scope: Deactivated successfully.
Dec  2 06:34:50 np0005542249 podman[296900]: 2025-12-02 11:34:50.060908713 +0000 UTC m=+0.056547856 container create 3e22330ce22c8c6225a26b24435ca7b06fc7ed4c7a6d7483b9a60f2a833669b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chaum, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:34:50 np0005542249 systemd[1]: Started libpod-conmon-3e22330ce22c8c6225a26b24435ca7b06fc7ed4c7a6d7483b9a60f2a833669b8.scope.
Dec  2 06:34:50 np0005542249 podman[296900]: 2025-12-02 11:34:50.04005535 +0000 UTC m=+0.035694503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:34:50 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:34:50 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2228c949f9334fd90614b2c0116050e159f3579aba8410f4b6ae06b7fca0a7c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:34:50 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2228c949f9334fd90614b2c0116050e159f3579aba8410f4b6ae06b7fca0a7c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:34:50 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2228c949f9334fd90614b2c0116050e159f3579aba8410f4b6ae06b7fca0a7c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:34:50 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2228c949f9334fd90614b2c0116050e159f3579aba8410f4b6ae06b7fca0a7c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:34:50 np0005542249 podman[296900]: 2025-12-02 11:34:50.167115337 +0000 UTC m=+0.162754560 container init 3e22330ce22c8c6225a26b24435ca7b06fc7ed4c7a6d7483b9a60f2a833669b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chaum, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  2 06:34:50 np0005542249 podman[296900]: 2025-12-02 11:34:50.176791479 +0000 UTC m=+0.172430612 container start 3e22330ce22c8c6225a26b24435ca7b06fc7ed4c7a6d7483b9a60f2a833669b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chaum, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:34:50 np0005542249 podman[296900]: 2025-12-02 11:34:50.180640912 +0000 UTC m=+0.176280075 container attach 3e22330ce22c8c6225a26b24435ca7b06fc7ed4c7a6d7483b9a60f2a833669b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:34:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:34:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1054508812' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:34:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:34:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1054508812' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:34:50 np0005542249 nova_compute[254900]: 2025-12-02 11:34:50.409 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:50 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1817: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 88 MiB data, 439 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.7 KiB/s wr, 58 op/s
Dec  2 06:34:51 np0005542249 kind_chaum[296916]: {
Dec  2 06:34:51 np0005542249 kind_chaum[296916]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:34:51 np0005542249 kind_chaum[296916]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:34:51 np0005542249 kind_chaum[296916]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:34:51 np0005542249 kind_chaum[296916]:        "osd_id": 0,
Dec  2 06:34:51 np0005542249 kind_chaum[296916]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:34:51 np0005542249 kind_chaum[296916]:        "type": "bluestore"
Dec  2 06:34:51 np0005542249 kind_chaum[296916]:    },
Dec  2 06:34:51 np0005542249 kind_chaum[296916]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:34:51 np0005542249 kind_chaum[296916]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:34:51 np0005542249 kind_chaum[296916]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:34:51 np0005542249 kind_chaum[296916]:        "osd_id": 2,
Dec  2 06:34:51 np0005542249 kind_chaum[296916]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:34:51 np0005542249 kind_chaum[296916]:        "type": "bluestore"
Dec  2 06:34:51 np0005542249 kind_chaum[296916]:    },
Dec  2 06:34:51 np0005542249 kind_chaum[296916]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:34:51 np0005542249 kind_chaum[296916]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:34:51 np0005542249 kind_chaum[296916]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:34:51 np0005542249 kind_chaum[296916]:        "osd_id": 1,
Dec  2 06:34:51 np0005542249 kind_chaum[296916]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:34:51 np0005542249 kind_chaum[296916]:        "type": "bluestore"
Dec  2 06:34:51 np0005542249 kind_chaum[296916]:    }
Dec  2 06:34:51 np0005542249 kind_chaum[296916]: }
Dec  2 06:34:51 np0005542249 systemd[1]: libpod-3e22330ce22c8c6225a26b24435ca7b06fc7ed4c7a6d7483b9a60f2a833669b8.scope: Deactivated successfully.
Dec  2 06:34:51 np0005542249 podman[296900]: 2025-12-02 11:34:51.329779125 +0000 UTC m=+1.325418308 container died 3e22330ce22c8c6225a26b24435ca7b06fc7ed4c7a6d7483b9a60f2a833669b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chaum, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  2 06:34:51 np0005542249 systemd[1]: libpod-3e22330ce22c8c6225a26b24435ca7b06fc7ed4c7a6d7483b9a60f2a833669b8.scope: Consumed 1.158s CPU time.
Dec  2 06:34:51 np0005542249 systemd[1]: var-lib-containers-storage-overlay-2228c949f9334fd90614b2c0116050e159f3579aba8410f4b6ae06b7fca0a7c1-merged.mount: Deactivated successfully.
Dec  2 06:34:51 np0005542249 nova_compute[254900]: 2025-12-02 11:34:51.383 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:34:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e465 do_prune osdmap full prune enabled
Dec  2 06:34:51 np0005542249 podman[296900]: 2025-12-02 11:34:51.419285769 +0000 UTC m=+1.414924892 container remove 3e22330ce22c8c6225a26b24435ca7b06fc7ed4c7a6d7483b9a60f2a833669b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chaum, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 06:34:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e466 e466: 3 total, 3 up, 3 in
Dec  2 06:34:51 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e466: 3 total, 3 up, 3 in
Dec  2 06:34:51 np0005542249 systemd[1]: libpod-conmon-3e22330ce22c8c6225a26b24435ca7b06fc7ed4c7a6d7483b9a60f2a833669b8.scope: Deactivated successfully.
Dec  2 06:34:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:34:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:34:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:34:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:34:51 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev eaabda14-8c3c-453d-8974-cd048d3ee411 does not exist
Dec  2 06:34:51 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 15e4de92-bc5b-41bd-94d8-9beecad4dab6 does not exist
Dec  2 06:34:52 np0005542249 nova_compute[254900]: 2025-12-02 11:34:52.383 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:34:52 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:34:52 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:34:52 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1819: 321 pgs: 321 active+clean; 88 MiB data, 439 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 42 KiB/s wr, 67 op/s
Dec  2 06:34:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:34:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e466 do_prune osdmap full prune enabled
Dec  2 06:34:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e467 e467: 3 total, 3 up, 3 in
Dec  2 06:34:53 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e467: 3 total, 3 up, 3 in
Dec  2 06:34:54 np0005542249 podman[297011]: 2025-12-02 11:34:54.02573204 +0000 UTC m=+0.090861482 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:34:54 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1821: 321 pgs: 321 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 37 KiB/s wr, 68 op/s
Dec  2 06:34:54 np0005542249 nova_compute[254900]: 2025-12-02 11:34:54.708 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:34:54 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1747481682' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:34:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:34:54 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1747481682' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:34:55 np0005542249 nova_compute[254900]: 2025-12-02 11:34:55.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:34:55 np0005542249 nova_compute[254900]: 2025-12-02 11:34:55.413 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:34:55 np0005542249 nova_compute[254900]: 2025-12-02 11:34:55.419 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:34:55 np0005542249 nova_compute[254900]: 2025-12-02 11:34:55.420 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:34:55 np0005542249 nova_compute[254900]: 2025-12-02 11:34:55.420 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:34:55 np0005542249 nova_compute[254900]: 2025-12-02 11:34:55.420 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:34:55 np0005542249 nova_compute[254900]: 2025-12-02 11:34:55.421 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:34:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:34:55 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1787850118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:34:55 np0005542249 nova_compute[254900]: 2025-12-02 11:34:55.914 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:34:56 np0005542249 nova_compute[254900]: 2025-12-02 11:34:56.150 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:34:56 np0005542249 nova_compute[254900]: 2025-12-02 11:34:56.152 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4356MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:34:56 np0005542249 nova_compute[254900]: 2025-12-02 11:34:56.152 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:34:56 np0005542249 nova_compute[254900]: 2025-12-02 11:34:56.153 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:34:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:34:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:34:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:34:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:34:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:34:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:34:56 np0005542249 nova_compute[254900]: 2025-12-02 11:34:56.431 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:34:56 np0005542249 nova_compute[254900]: 2025-12-02 11:34:56.431 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:34:56 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1822: 321 pgs: 321 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 34 KiB/s wr, 24 op/s
Dec  2 06:34:56 np0005542249 nova_compute[254900]: 2025-12-02 11:34:56.525 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:34:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:34:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/854068362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:34:57 np0005542249 nova_compute[254900]: 2025-12-02 11:34:57.027 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:34:57 np0005542249 nova_compute[254900]: 2025-12-02 11:34:57.034 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:34:57 np0005542249 nova_compute[254900]: 2025-12-02 11:34:57.050 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:34:57 np0005542249 nova_compute[254900]: 2025-12-02 11:34:57.072 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 06:34:57 np0005542249 nova_compute[254900]: 2025-12-02 11:34:57.073 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:34:57 np0005542249 nova_compute[254900]: 2025-12-02 11:34:57.073 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:34:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e467 do_prune osdmap full prune enabled
Dec  2 06:34:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e468 e468: 3 total, 3 up, 3 in
Dec  2 06:34:57 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e468: 3 total, 3 up, 3 in
Dec  2 06:34:57 np0005542249 ovn_controller[153849]: 2025-12-02T11:34:57Z|00270|memory_trim|INFO|Detected inactivity (last active 30019 ms ago): trimming memory
Dec  2 06:34:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e468 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:34:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e468 do_prune osdmap full prune enabled
Dec  2 06:34:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e469 e469: 3 total, 3 up, 3 in
Dec  2 06:34:58 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e469: 3 total, 3 up, 3 in
Dec  2 06:34:58 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1825: 321 pgs: 321 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.5 KiB/s wr, 54 op/s
Dec  2 06:34:59 np0005542249 nova_compute[254900]: 2025-12-02 11:34:59.077 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:34:59 np0005542249 nova_compute[254900]: 2025-12-02 11:34:59.078 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:34:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e469 do_prune osdmap full prune enabled
Dec  2 06:34:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e470 e470: 3 total, 3 up, 3 in
Dec  2 06:34:59 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e470: 3 total, 3 up, 3 in
Dec  2 06:34:59 np0005542249 nova_compute[254900]: 2025-12-02 11:34:59.710 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:00 np0005542249 nova_compute[254900]: 2025-12-02 11:35:00.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:35:00 np0005542249 nova_compute[254900]: 2025-12-02 11:35:00.453 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:00 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1827: 321 pgs: 321 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 3.7 KiB/s wr, 71 op/s
Dec  2 06:35:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:35:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2468640328' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:35:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:35:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2468640328' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:35:01 np0005542249 nova_compute[254900]: 2025-12-02 11:35:01.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:35:02 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1828: 321 pgs: 321 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 3.7 KiB/s wr, 74 op/s
Dec  2 06:35:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e470 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:35:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e470 do_prune osdmap full prune enabled
Dec  2 06:35:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e471 e471: 3 total, 3 up, 3 in
Dec  2 06:35:03 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e471: 3 total, 3 up, 3 in
Dec  2 06:35:03 np0005542249 podman[297077]: 2025-12-02 11:35:03.998073465 +0000 UTC m=+0.079317170 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, 
maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:35:04 np0005542249 podman[297078]: 2025-12-02 11:35:04.088654338 +0000 UTC m=+0.166595534 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  2 06:35:04 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1830: 321 pgs: 321 active+clean; 130 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 7.0 MiB/s wr, 115 op/s
Dec  2 06:35:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e471 do_prune osdmap full prune enabled
Dec  2 06:35:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e472 e472: 3 total, 3 up, 3 in
Dec  2 06:35:04 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e472: 3 total, 3 up, 3 in
Dec  2 06:35:04 np0005542249 nova_compute[254900]: 2025-12-02 11:35:04.713 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:05 np0005542249 nova_compute[254900]: 2025-12-02 11:35:05.383 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:35:05 np0005542249 nova_compute[254900]: 2025-12-02 11:35:05.383 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  2 06:35:05 np0005542249 nova_compute[254900]: 2025-12-02 11:35:05.455 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:06 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1832: 321 pgs: 321 active+clean; 130 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 6.0 MiB/s wr, 74 op/s
Dec  2 06:35:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e472 do_prune osdmap full prune enabled
Dec  2 06:35:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e473 e473: 3 total, 3 up, 3 in
Dec  2 06:35:06 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e473: 3 total, 3 up, 3 in
Dec  2 06:35:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:35:07 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4240481208' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:35:07 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:35:07 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4240481208' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:35:08 np0005542249 nova_compute[254900]: 2025-12-02 11:35:08.030 254904 DEBUG oslo_concurrency.lockutils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "9579cc6b-d571-46a6-80c2-f8a0fb6b2672" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:35:08 np0005542249 nova_compute[254900]: 2025-12-02 11:35:08.031 254904 DEBUG oslo_concurrency.lockutils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "9579cc6b-d571-46a6-80c2-f8a0fb6b2672" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:35:08 np0005542249 nova_compute[254900]: 2025-12-02 11:35:08.056 254904 DEBUG nova.compute.manager [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  2 06:35:08 np0005542249 nova_compute[254900]: 2025-12-02 11:35:08.189 254904 DEBUG oslo_concurrency.lockutils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:35:08 np0005542249 nova_compute[254900]: 2025-12-02 11:35:08.190 254904 DEBUG oslo_concurrency.lockutils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:35:08 np0005542249 nova_compute[254900]: 2025-12-02 11:35:08.200 254904 DEBUG nova.virt.hardware [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  2 06:35:08 np0005542249 nova_compute[254900]: 2025-12-02 11:35:08.201 254904 INFO nova.compute.claims [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  2 06:35:08 np0005542249 nova_compute[254900]: 2025-12-02 11:35:08.306 254904 DEBUG oslo_concurrency.processutils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:35:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:35:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e473 do_prune osdmap full prune enabled
Dec  2 06:35:08 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1834: 321 pgs: 321 active+clean; 202 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 19 MiB/s wr, 150 op/s
Dec  2 06:35:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e474 e474: 3 total, 3 up, 3 in
Dec  2 06:35:08 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e474: 3 total, 3 up, 3 in
Dec  2 06:35:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:35:09 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1790884388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.082 254904 DEBUG oslo_concurrency.processutils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.775s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.091 254904 DEBUG nova.compute.provider_tree [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.114 254904 DEBUG nova.scheduler.client.report [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.142 254904 DEBUG oslo_concurrency.lockutils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.144 254904 DEBUG nova.compute.manager [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.226 254904 DEBUG nova.compute.manager [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.227 254904 DEBUG nova.network.neutron [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.261 254904 INFO nova.virt.libvirt.driver [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.286 254904 DEBUG nova.compute.manager [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.337 254904 INFO nova.virt.block_device [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Booting with volume e08afcdd-5df9-46bd-a839-a3d41cb7d50a at /dev/vda#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.479 254904 DEBUG os_brick.utils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.482 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.502 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.503 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb08617-03fa-4e5c-aea2-339e092673c4]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.505 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.522 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.522 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[016b74ea-c440-42e4-b351-1b1543bc22e6]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.525 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.541 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.541 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[277f73f8-fc39-45ef-a222-96f30a576969]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.543 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[430d2bcc-759d-4adf-9373-f39a0c35def9]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.544 254904 DEBUG oslo_concurrency.processutils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.584 254904 DEBUG oslo_concurrency.processutils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CMD "nvme version" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.588 254904 DEBUG os_brick.initiator.connectors.lightos [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.589 254904 DEBUG os_brick.initiator.connectors.lightos [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.589 254904 DEBUG os_brick.initiator.connectors.lightos [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.590 254904 DEBUG os_brick.utils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] <== get_connector_properties: return (109ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.591 254904 DEBUG nova.virt.block_device [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Updating existing volume attachment record: 750ab793-d907-4746-8fc0-78ea66987006 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.619 254904 DEBUG nova.policy [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a003d9cef7684ec48ed996b22c11419e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '58574186a4fd405e83f1a4b650ea8e8c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 06:35:09 np0005542249 nova_compute[254900]: 2025-12-02 11:35:09.749 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:35:10 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/728976374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:35:10 np0005542249 nova_compute[254900]: 2025-12-02 11:35:10.458 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:10 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1836: 321 pgs: 321 active+clean; 202 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 12 MiB/s wr, 72 op/s
Dec  2 06:35:10 np0005542249 nova_compute[254900]: 2025-12-02 11:35:10.623 254904 DEBUG nova.compute.manager [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 06:35:10 np0005542249 nova_compute[254900]: 2025-12-02 11:35:10.625 254904 DEBUG nova.virt.libvirt.driver [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 06:35:10 np0005542249 nova_compute[254900]: 2025-12-02 11:35:10.625 254904 INFO nova.virt.libvirt.driver [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Creating image(s)#033[00m
Dec  2 06:35:10 np0005542249 nova_compute[254900]: 2025-12-02 11:35:10.625 254904 DEBUG nova.virt.libvirt.driver [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  2 06:35:10 np0005542249 nova_compute[254900]: 2025-12-02 11:35:10.626 254904 DEBUG nova.virt.libvirt.driver [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Ensure instance console log exists: /var/lib/nova/instances/9579cc6b-d571-46a6-80c2-f8a0fb6b2672/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 06:35:10 np0005542249 nova_compute[254900]: 2025-12-02 11:35:10.626 254904 DEBUG oslo_concurrency.lockutils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:35:10 np0005542249 nova_compute[254900]: 2025-12-02 11:35:10.627 254904 DEBUG oslo_concurrency.lockutils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:35:10 np0005542249 nova_compute[254900]: 2025-12-02 11:35:10.627 254904 DEBUG oslo_concurrency.lockutils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:35:10 np0005542249 nova_compute[254900]: 2025-12-02 11:35:10.689 254904 DEBUG nova.network.neutron [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Successfully created port: f725ebd9-55ae-40bd-ab5c-a2e8c95b7752 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 06:35:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e474 do_prune osdmap full prune enabled
Dec  2 06:35:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e475 e475: 3 total, 3 up, 3 in
Dec  2 06:35:10 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e475: 3 total, 3 up, 3 in
Dec  2 06:35:12 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1838: 321 pgs: 321 active+clean; 202 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 12 MiB/s wr, 106 op/s
Dec  2 06:35:12 np0005542249 nova_compute[254900]: 2025-12-02 11:35:12.499 254904 DEBUG nova.network.neutron [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Successfully updated port: f725ebd9-55ae-40bd-ab5c-a2e8c95b7752 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 06:35:12 np0005542249 nova_compute[254900]: 2025-12-02 11:35:12.517 254904 DEBUG oslo_concurrency.lockutils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "refresh_cache-9579cc6b-d571-46a6-80c2-f8a0fb6b2672" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:35:12 np0005542249 nova_compute[254900]: 2025-12-02 11:35:12.518 254904 DEBUG oslo_concurrency.lockutils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquired lock "refresh_cache-9579cc6b-d571-46a6-80c2-f8a0fb6b2672" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:35:12 np0005542249 nova_compute[254900]: 2025-12-02 11:35:12.518 254904 DEBUG nova.network.neutron [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 06:35:12 np0005542249 nova_compute[254900]: 2025-12-02 11:35:12.639 254904 DEBUG nova.compute.manager [req-5eb5e149-9bef-4104-bcf4-8d7f02cdce72 req-0c5a9b85-a133-4071-a343-37bea2c600e0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Received event network-changed-f725ebd9-55ae-40bd-ab5c-a2e8c95b7752 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:35:12 np0005542249 nova_compute[254900]: 2025-12-02 11:35:12.641 254904 DEBUG nova.compute.manager [req-5eb5e149-9bef-4104-bcf4-8d7f02cdce72 req-0c5a9b85-a133-4071-a343-37bea2c600e0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Refreshing instance network info cache due to event network-changed-f725ebd9-55ae-40bd-ab5c-a2e8c95b7752. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:35:12 np0005542249 nova_compute[254900]: 2025-12-02 11:35:12.641 254904 DEBUG oslo_concurrency.lockutils [req-5eb5e149-9bef-4104-bcf4-8d7f02cdce72 req-0c5a9b85-a133-4071-a343-37bea2c600e0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-9579cc6b-d571-46a6-80c2-f8a0fb6b2672" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:35:12 np0005542249 nova_compute[254900]: 2025-12-02 11:35:12.678 254904 DEBUG nova.network.neutron [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 06:35:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e475 do_prune osdmap full prune enabled
Dec  2 06:35:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e476 e476: 3 total, 3 up, 3 in
Dec  2 06:35:12 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e476: 3 total, 3 up, 3 in
Dec  2 06:35:13 np0005542249 nova_compute[254900]: 2025-12-02 11:35:13.652 254904 DEBUG nova.network.neutron [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Updating instance_info_cache with network_info: [{"id": "f725ebd9-55ae-40bd-ab5c-a2e8c95b7752", "address": "fa:16:3e:f7:37:46", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf725ebd9-55", "ovs_interfaceid": "f725ebd9-55ae-40bd-ab5c-a2e8c95b7752", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:35:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e476 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:35:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e476 do_prune osdmap full prune enabled
Dec  2 06:35:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e477 e477: 3 total, 3 up, 3 in
Dec  2 06:35:13 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e477: 3 total, 3 up, 3 in
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.284 254904 DEBUG oslo_concurrency.lockutils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Releasing lock "refresh_cache-9579cc6b-d571-46a6-80c2-f8a0fb6b2672" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.285 254904 DEBUG nova.compute.manager [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Instance network_info: |[{"id": "f725ebd9-55ae-40bd-ab5c-a2e8c95b7752", "address": "fa:16:3e:f7:37:46", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf725ebd9-55", "ovs_interfaceid": "f725ebd9-55ae-40bd-ab5c-a2e8c95b7752", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.285 254904 DEBUG oslo_concurrency.lockutils [req-5eb5e149-9bef-4104-bcf4-8d7f02cdce72 req-0c5a9b85-a133-4071-a343-37bea2c600e0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-9579cc6b-d571-46a6-80c2-f8a0fb6b2672" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.286 254904 DEBUG nova.network.neutron [req-5eb5e149-9bef-4104-bcf4-8d7f02cdce72 req-0c5a9b85-a133-4071-a343-37bea2c600e0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Refreshing network info cache for port f725ebd9-55ae-40bd-ab5c-a2e8c95b7752 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.289 254904 DEBUG nova.virt.libvirt.driver [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Start _get_guest_xml network_info=[{"id": "f725ebd9-55ae-40bd-ab5c-a2e8c95b7752", "address": "fa:16:3e:f7:37:46", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf725ebd9-55", "ovs_interfaceid": "f725ebd9-55ae-40bd-ab5c-a2e8c95b7752", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-e08afcdd-5df9-46bd-a839-a3d41cb7d50a', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'e08afcdd-5df9-46bd-a839-a3d41cb7d50a', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '9579cc6b-d571-46a6-80c2-f8a0fb6b2672', 'attached_at': '', 'detached_at': '', 'volume_id': 'e08afcdd-5df9-46bd-a839-a3d41cb7d50a', 'serial': 'e08afcdd-5df9-46bd-a839-a3d41cb7d50a'}, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'attachment_id': '750ab793-d907-4746-8fc0-78ea66987006', 'delete_on_termination': False, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.295 254904 WARNING nova.virt.libvirt.driver [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.302 254904 DEBUG nova.virt.libvirt.host [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.303 254904 DEBUG nova.virt.libvirt.host [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.307 254904 DEBUG nova.virt.libvirt.host [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.308 254904 DEBUG nova.virt.libvirt.host [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.309 254904 DEBUG nova.virt.libvirt.driver [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.309 254904 DEBUG nova.virt.hardware [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.309 254904 DEBUG nova.virt.hardware [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.310 254904 DEBUG nova.virt.hardware [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.310 254904 DEBUG nova.virt.hardware [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.310 254904 DEBUG nova.virt.hardware [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.311 254904 DEBUG nova.virt.hardware [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.311 254904 DEBUG nova.virt.hardware [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.311 254904 DEBUG nova.virt.hardware [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.311 254904 DEBUG nova.virt.hardware [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.312 254904 DEBUG nova.virt.hardware [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.312 254904 DEBUG nova.virt.hardware [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.350 254904 DEBUG nova.storage.rbd_utils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] rbd image 9579cc6b-d571-46a6-80c2-f8a0fb6b2672_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.355 254904 DEBUG oslo_concurrency.processutils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:35:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:35:14 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2204360171' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:35:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:35:14 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2204360171' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:35:14 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1841: 321 pgs: 321 active+clean; 202 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 1.4 KiB/s wr, 46 op/s
Dec  2 06:35:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:35:14 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1832366145' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.792 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.811 254904 DEBUG oslo_concurrency.processutils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.985 254904 DEBUG os_brick.encryptors [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Using volume encryption metadata '{'encryption_key_id': 'f5371b1c-a479-4797-b0e5-071534da6624', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-e08afcdd-5df9-46bd-a839-a3d41cb7d50a', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'e08afcdd-5df9-46bd-a839-a3d41cb7d50a', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '9579cc6b-d571-46a6-80c2-f8a0fb6b2672', 'attached_at': '', 'detached_at': '', 'volume_id': 'e08afcdd-5df9-46bd-a839-a3d41cb7d50a', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Dec  2 06:35:14 np0005542249 nova_compute[254900]: 2025-12-02 11:35:14.989 254904 DEBUG barbicanclient.client [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.008 254904 DEBUG barbicanclient.v1.secrets [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/f5371b1c-a479-4797-b0e5-071534da6624 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.009 254904 INFO barbicanclient.base [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/f5371b1c-a479-4797-b0e5-071534da6624#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.035 254904 DEBUG barbicanclient.client [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.036 254904 INFO barbicanclient.base [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/f5371b1c-a479-4797-b0e5-071534da6624#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.057 254904 DEBUG barbicanclient.client [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.058 254904 INFO barbicanclient.base [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/f5371b1c-a479-4797-b0e5-071534da6624#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.082 254904 DEBUG barbicanclient.client [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.083 254904 INFO barbicanclient.base [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/f5371b1c-a479-4797-b0e5-071534da6624#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.106 254904 DEBUG barbicanclient.client [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.107 254904 INFO barbicanclient.base [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/f5371b1c-a479-4797-b0e5-071534da6624#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.136 254904 DEBUG barbicanclient.client [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.137 254904 INFO barbicanclient.base [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/f5371b1c-a479-4797-b0e5-071534da6624#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.158 254904 DEBUG barbicanclient.client [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.159 254904 INFO barbicanclient.base [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/f5371b1c-a479-4797-b0e5-071534da6624#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.178 254904 DEBUG barbicanclient.client [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.178 254904 INFO barbicanclient.base [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/f5371b1c-a479-4797-b0e5-071534da6624#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.204 254904 DEBUG barbicanclient.client [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.205 254904 INFO barbicanclient.base [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/f5371b1c-a479-4797-b0e5-071534da6624#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.236 254904 DEBUG barbicanclient.client [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.237 254904 INFO barbicanclient.base [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/f5371b1c-a479-4797-b0e5-071534da6624#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.267 254904 DEBUG barbicanclient.client [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.268 254904 INFO barbicanclient.base [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/f5371b1c-a479-4797-b0e5-071534da6624#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.352 254904 DEBUG barbicanclient.client [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.353 254904 INFO barbicanclient.base [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/f5371b1c-a479-4797-b0e5-071534da6624#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.383 254904 DEBUG barbicanclient.client [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.384 254904 INFO barbicanclient.base [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/f5371b1c-a479-4797-b0e5-071534da6624#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.405 254904 DEBUG barbicanclient.client [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.406 254904 INFO barbicanclient.base [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/f5371b1c-a479-4797-b0e5-071534da6624#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.431 254904 DEBUG barbicanclient.client [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.432 254904 INFO barbicanclient.base [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/f5371b1c-a479-4797-b0e5-071534da6624#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.451 254904 DEBUG barbicanclient.client [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.452 254904 DEBUG nova.virt.libvirt.host [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Secret XML: <secret ephemeral="no" private="no">
Dec  2 06:35:15 np0005542249 nova_compute[254900]:  <usage type="volume">
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <volume>e08afcdd-5df9-46bd-a839-a3d41cb7d50a</volume>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:  </usage>
Dec  2 06:35:15 np0005542249 nova_compute[254900]: </secret>
Dec  2 06:35:15 np0005542249 nova_compute[254900]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.462 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.488 254904 DEBUG nova.virt.libvirt.vif [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:35:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-718564893',display_name='tempest-TestEncryptedCinderVolumes-server-718564893',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-718564893',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK920rwaN7hvXue9KA1NjrUwvtvK954cG7ZuCXgqFd9X1K1nkCEcuVbCcgefDelXRvQjOoRaTNZtPceNbknmWJHmkwM014+LbqRvx6BhJSogI4x+qdIBG/Zp5TIVdDeUYQ==',key_name='tempest-TestEncryptedCinderVolumes-250014646',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='58574186a4fd405e83f1a4b650ea8e8c',ramdisk_id='',reservation_id='r-x1sx0ti7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-337876243',owner_user_name='tempest-TestEncryptedCinderVolumes-337876243-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:35:09Z,user_data=None,user_id='a003d9cef7684ec48ed996b22c11419e',uuid=9579cc6b-d571-46a6-80c2-f8a0fb6b2672,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f725ebd9-55ae-40bd-ab5c-a2e8c95b7752", "address": "fa:16:3e:f7:37:46", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf725ebd9-55", "ovs_interfaceid": "f725ebd9-55ae-40bd-ab5c-a2e8c95b7752", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.489 254904 DEBUG nova.network.os_vif_util [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Converting VIF {"id": "f725ebd9-55ae-40bd-ab5c-a2e8c95b7752", "address": "fa:16:3e:f7:37:46", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf725ebd9-55", "ovs_interfaceid": "f725ebd9-55ae-40bd-ab5c-a2e8c95b7752", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.490 254904 DEBUG nova.network.os_vif_util [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:37:46,bridge_name='br-int',has_traffic_filtering=True,id=f725ebd9-55ae-40bd-ab5c-a2e8c95b7752,network=Network(28b69a92-5b45-421b-9985-afeebc6820aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf725ebd9-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.493 254904 DEBUG nova.objects.instance [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lazy-loading 'pci_devices' on Instance uuid 9579cc6b-d571-46a6-80c2-f8a0fb6b2672 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.577 254904 DEBUG nova.virt.libvirt.driver [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:35:15 np0005542249 nova_compute[254900]:  <uuid>9579cc6b-d571-46a6-80c2-f8a0fb6b2672</uuid>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:  <name>instance-0000001d</name>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-718564893</nova:name>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:35:14</nova:creationTime>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:35:15 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:        <nova:user uuid="a003d9cef7684ec48ed996b22c11419e">tempest-TestEncryptedCinderVolumes-337876243-project-member</nova:user>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:        <nova:project uuid="58574186a4fd405e83f1a4b650ea8e8c">tempest-TestEncryptedCinderVolumes-337876243</nova:project>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:        <nova:port uuid="f725ebd9-55ae-40bd-ab5c-a2e8c95b7752">
Dec  2 06:35:15 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <entry name="serial">9579cc6b-d571-46a6-80c2-f8a0fb6b2672</entry>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <entry name="uuid">9579cc6b-d571-46a6-80c2-f8a0fb6b2672</entry>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/9579cc6b-d571-46a6-80c2-f8a0fb6b2672_disk.config">
Dec  2 06:35:15 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:35:15 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="volumes/volume-e08afcdd-5df9-46bd-a839-a3d41cb7d50a">
Dec  2 06:35:15 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:35:15 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <serial>e08afcdd-5df9-46bd-a839-a3d41cb7d50a</serial>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <encryption format="luks">
Dec  2 06:35:15 np0005542249 nova_compute[254900]:        <secret type="passphrase" uuid="f2d814d0-edf2-483b-ac0d-5600279bd6e7"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      </encryption>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:f7:37:46"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <target dev="tapf725ebd9-55"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/9579cc6b-d571-46a6-80c2-f8a0fb6b2672/console.log" append="off"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:35:15 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:35:15 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:35:15 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:35:15 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.579 254904 DEBUG nova.compute.manager [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Preparing to wait for external event network-vif-plugged-f725ebd9-55ae-40bd-ab5c-a2e8c95b7752 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.580 254904 DEBUG oslo_concurrency.lockutils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "9579cc6b-d571-46a6-80c2-f8a0fb6b2672-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.581 254904 DEBUG oslo_concurrency.lockutils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "9579cc6b-d571-46a6-80c2-f8a0fb6b2672-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.581 254904 DEBUG oslo_concurrency.lockutils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "9579cc6b-d571-46a6-80c2-f8a0fb6b2672-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.582 254904 DEBUG nova.virt.libvirt.vif [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:35:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-718564893',display_name='tempest-TestEncryptedCinderVolumes-server-718564893',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-718564893',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK920rwaN7hvXue9KA1NjrUwvtvK954cG7ZuCXgqFd9X1K1nkCEcuVbCcgefDelXRvQjOoRaTNZtPceNbknmWJHmkwM014+LbqRvx6BhJSogI4x+qdIBG/Zp5TIVdDeUYQ==',key_name='tempest-TestEncryptedCinderVolumes-250014646',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='58574186a4fd405e83f1a4b650ea8e8c',ramdisk_id='',reservation_id='r-x1sx0ti7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-337876243',owner_user_name='tempest-TestEncryptedCinderVolumes-337876243-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:35:09Z,user_data=None,user_id='a003d9cef7684ec48ed996b22c11419e',uuid=9579cc6b-d571-46a6-80c2-f8a0fb6b2672,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f725ebd9-55ae-40bd-ab5c-a2e8c95b7752", "address": "fa:16:3e:f7:37:46", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf725ebd9-55", "ovs_interfaceid": "f725ebd9-55ae-40bd-ab5c-a2e8c95b7752", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.583 254904 DEBUG nova.network.os_vif_util [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Converting VIF {"id": "f725ebd9-55ae-40bd-ab5c-a2e8c95b7752", "address": "fa:16:3e:f7:37:46", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf725ebd9-55", "ovs_interfaceid": "f725ebd9-55ae-40bd-ab5c-a2e8c95b7752", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.584 254904 DEBUG nova.network.os_vif_util [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:37:46,bridge_name='br-int',has_traffic_filtering=True,id=f725ebd9-55ae-40bd-ab5c-a2e8c95b7752,network=Network(28b69a92-5b45-421b-9985-afeebc6820aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf725ebd9-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.585 254904 DEBUG os_vif [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:37:46,bridge_name='br-int',has_traffic_filtering=True,id=f725ebd9-55ae-40bd-ab5c-a2e8c95b7752,network=Network(28b69a92-5b45-421b-9985-afeebc6820aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf725ebd9-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.586 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.587 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.587 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.593 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.593 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf725ebd9-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.594 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf725ebd9-55, col_values=(('external_ids', {'iface-id': 'f725ebd9-55ae-40bd-ab5c-a2e8c95b7752', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:37:46', 'vm-uuid': '9579cc6b-d571-46a6-80c2-f8a0fb6b2672'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.597 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:15 np0005542249 NetworkManager[48987]: <info>  [1764675315.5989] manager: (tapf725ebd9-55): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/142)
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.601 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.607 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.610 254904 INFO os_vif [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:37:46,bridge_name='br-int',has_traffic_filtering=True,id=f725ebd9-55ae-40bd-ab5c-a2e8c95b7752,network=Network(28b69a92-5b45-421b-9985-afeebc6820aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf725ebd9-55')#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.678 254904 DEBUG nova.virt.libvirt.driver [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.679 254904 DEBUG nova.virt.libvirt.driver [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.679 254904 DEBUG nova.virt.libvirt.driver [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] No VIF found with MAC fa:16:3e:f7:37:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.680 254904 INFO nova.virt.libvirt.driver [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Using config drive#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.737 254904 DEBUG nova.storage.rbd_utils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] rbd image 9579cc6b-d571-46a6-80c2-f8a0fb6b2672_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.753 254904 DEBUG nova.network.neutron [req-5eb5e149-9bef-4104-bcf4-8d7f02cdce72 req-0c5a9b85-a133-4071-a343-37bea2c600e0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Updated VIF entry in instance network info cache for port f725ebd9-55ae-40bd-ab5c-a2e8c95b7752. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.754 254904 DEBUG nova.network.neutron [req-5eb5e149-9bef-4104-bcf4-8d7f02cdce72 req-0c5a9b85-a133-4071-a343-37bea2c600e0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Updating instance_info_cache with network_info: [{"id": "f725ebd9-55ae-40bd-ab5c-a2e8c95b7752", "address": "fa:16:3e:f7:37:46", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf725ebd9-55", "ovs_interfaceid": "f725ebd9-55ae-40bd-ab5c-a2e8c95b7752", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:35:15 np0005542249 nova_compute[254900]: 2025-12-02 11:35:15.863 254904 DEBUG oslo_concurrency.lockutils [req-5eb5e149-9bef-4104-bcf4-8d7f02cdce72 req-0c5a9b85-a133-4071-a343-37bea2c600e0 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-9579cc6b-d571-46a6-80c2-f8a0fb6b2672" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:35:16 np0005542249 nova_compute[254900]: 2025-12-02 11:35:16.222 254904 INFO nova.virt.libvirt.driver [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Creating config drive at /var/lib/nova/instances/9579cc6b-d571-46a6-80c2-f8a0fb6b2672/disk.config#033[00m
Dec  2 06:35:16 np0005542249 nova_compute[254900]: 2025-12-02 11:35:16.228 254904 DEBUG oslo_concurrency.processutils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9579cc6b-d571-46a6-80c2-f8a0fb6b2672/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps776jn18 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:35:16 np0005542249 nova_compute[254900]: 2025-12-02 11:35:16.376 254904 DEBUG oslo_concurrency.processutils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9579cc6b-d571-46a6-80c2-f8a0fb6b2672/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps776jn18" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:35:16 np0005542249 nova_compute[254900]: 2025-12-02 11:35:16.417 254904 DEBUG nova.storage.rbd_utils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] rbd image 9579cc6b-d571-46a6-80c2-f8a0fb6b2672_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:35:16 np0005542249 nova_compute[254900]: 2025-12-02 11:35:16.423 254904 DEBUG oslo_concurrency.processutils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9579cc6b-d571-46a6-80c2-f8a0fb6b2672/disk.config 9579cc6b-d571-46a6-80c2-f8a0fb6b2672_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:35:16 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1842: 321 pgs: 321 active+clean; 202 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.3 KiB/s wr, 39 op/s
Dec  2 06:35:16 np0005542249 nova_compute[254900]: 2025-12-02 11:35:16.611 254904 DEBUG oslo_concurrency.processutils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9579cc6b-d571-46a6-80c2-f8a0fb6b2672/disk.config 9579cc6b-d571-46a6-80c2-f8a0fb6b2672_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:35:16 np0005542249 nova_compute[254900]: 2025-12-02 11:35:16.612 254904 INFO nova.virt.libvirt.driver [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Deleting local config drive /var/lib/nova/instances/9579cc6b-d571-46a6-80c2-f8a0fb6b2672/disk.config because it was imported into RBD.#033[00m
Dec  2 06:35:16 np0005542249 kernel: tapf725ebd9-55: entered promiscuous mode
Dec  2 06:35:16 np0005542249 ovn_controller[153849]: 2025-12-02T11:35:16Z|00271|binding|INFO|Claiming lport f725ebd9-55ae-40bd-ab5c-a2e8c95b7752 for this chassis.
Dec  2 06:35:16 np0005542249 ovn_controller[153849]: 2025-12-02T11:35:16Z|00272|binding|INFO|f725ebd9-55ae-40bd-ab5c-a2e8c95b7752: Claiming fa:16:3e:f7:37:46 10.100.0.14
Dec  2 06:35:16 np0005542249 nova_compute[254900]: 2025-12-02 11:35:16.695 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:16 np0005542249 NetworkManager[48987]: <info>  [1764675316.6969] manager: (tapf725ebd9-55): new Tun device (/org/freedesktop/NetworkManager/Devices/143)
Dec  2 06:35:16 np0005542249 ovn_controller[153849]: 2025-12-02T11:35:16Z|00273|binding|INFO|Setting lport f725ebd9-55ae-40bd-ab5c-a2e8c95b7752 ovn-installed in OVS
Dec  2 06:35:16 np0005542249 nova_compute[254900]: 2025-12-02 11:35:16.719 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:16 np0005542249 nova_compute[254900]: 2025-12-02 11:35:16.723 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:16 np0005542249 systemd-udevd[297262]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:35:16 np0005542249 systemd-machined[216222]: New machine qemu-29-instance-0000001d.
Dec  2 06:35:16 np0005542249 NetworkManager[48987]: <info>  [1764675316.7574] device (tapf725ebd9-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:35:16 np0005542249 systemd[1]: Started Virtual Machine qemu-29-instance-0000001d.
Dec  2 06:35:16 np0005542249 NetworkManager[48987]: <info>  [1764675316.7589] device (tapf725ebd9-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:35:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:16.765 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:37:46 10.100.0.14'], port_security=['fa:16:3e:f7:37:46 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '9579cc6b-d571-46a6-80c2-f8a0fb6b2672', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28b69a92-5b45-421b-9985-afeebc6820aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58574186a4fd405e83f1a4b650ea8e8c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4140d653-003b-4989-8de5-3e8bad7f5c96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81aeb855-c9bf-4f95-90d1-85f514f075e1, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=f725ebd9-55ae-40bd-ab5c-a2e8c95b7752) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:35:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:16.767 163757 INFO neutron.agent.ovn.metadata.agent [-] Port f725ebd9-55ae-40bd-ab5c-a2e8c95b7752 in datapath 28b69a92-5b45-421b-9985-afeebc6820aa bound to our chassis#033[00m
Dec  2 06:35:16 np0005542249 ovn_controller[153849]: 2025-12-02T11:35:16Z|00274|binding|INFO|Setting lport f725ebd9-55ae-40bd-ab5c-a2e8c95b7752 up in Southbound
Dec  2 06:35:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:16.768 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 28b69a92-5b45-421b-9985-afeebc6820aa#033[00m
Dec  2 06:35:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:16.782 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[309e8d4b-27fa-453e-a5b2-2f3910322777]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:16.783 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap28b69a92-51 in ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 06:35:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:16.786 262398 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap28b69a92-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 06:35:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:16.787 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[4002ea7e-2f7f-423b-a0ee-9b042d5b898d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:16.788 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[da300323-1a20-4987-8783-3770ec5d4d99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:16.802 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[11e39143-54e3-4741-9cd5-45631ac37751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:16.828 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[b835afd1-3953-4929-9b26-e8fe4d000f3f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:16.874 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[f67bfa41-e626-4965-8267-93689c5be640]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:16.884 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[ff6c4fd0-dad6-46fb-8b9c-f57357c73b52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:16 np0005542249 NetworkManager[48987]: <info>  [1764675316.8855] manager: (tap28b69a92-50): new Veth device (/org/freedesktop/NetworkManager/Devices/144)
Dec  2 06:35:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:16.937 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[788f4585-516e-4661-a1ce-332d8c55b36f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:16.943 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[56fb0932-d274-47d5-8d52-c1ed50c0025a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:16 np0005542249 NetworkManager[48987]: <info>  [1764675316.9851] device (tap28b69a92-50): carrier: link connected
Dec  2 06:35:16 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:16.991 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[b69abeed-fd04-4fd9-9c07-a38a945b8d52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:17.017 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[92737760-0550-43bb-9d20-c63dc4af6dc3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28b69a92-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:96:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550980, 'reachable_time': 27177, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297296, 'error': None, 'target': 'ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:17.039 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[1a3a4729-ab54-4b28-815d-2d475ea7ee36]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1f:96b2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 550980, 'tstamp': 550980}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297297, 'error': None, 'target': 'ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:17.065 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[8f7c9b8e-3933-472e-9aae-23f576886561]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28b69a92-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:96:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550980, 'reachable_time': 27177, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297298, 'error': None, 'target': 'ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:17.112 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[428c90b8-d048-444a-b72b-2e2f8afe742a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:17.207 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[8810559c-2bf5-4a5f-ac62-a287a3fc9c8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:17.209 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28b69a92-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:17.209 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:17.210 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28b69a92-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:35:17 np0005542249 nova_compute[254900]: 2025-12-02 11:35:17.212 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:17 np0005542249 NetworkManager[48987]: <info>  [1764675317.2137] manager: (tap28b69a92-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/145)
Dec  2 06:35:17 np0005542249 kernel: tap28b69a92-50: entered promiscuous mode
Dec  2 06:35:17 np0005542249 nova_compute[254900]: 2025-12-02 11:35:17.215 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:17.220 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap28b69a92-50, col_values=(('external_ids', {'iface-id': '82e6ca5f-5089-4718-9fe8-4d0d719de187'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:35:17 np0005542249 nova_compute[254900]: 2025-12-02 11:35:17.221 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:17 np0005542249 ovn_controller[153849]: 2025-12-02T11:35:17Z|00275|binding|INFO|Releasing lport 82e6ca5f-5089-4718-9fe8-4d0d719de187 from this chassis (sb_readonly=0)
Dec  2 06:35:17 np0005542249 nova_compute[254900]: 2025-12-02 11:35:17.222 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:17.236 163757 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/28b69a92-5b45-421b-9985-afeebc6820aa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/28b69a92-5b45-421b-9985-afeebc6820aa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 06:35:17 np0005542249 nova_compute[254900]: 2025-12-02 11:35:17.237 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:17.237 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[8274fa0b-3491-4653-bcc9-450c5a1b3cb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:17.238 163757 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]: global
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]:    log         /dev/log local0 debug
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]:    log-tag     haproxy-metadata-proxy-28b69a92-5b45-421b-9985-afeebc6820aa
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]:    user        root
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]:    group       root
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]:    maxconn     1024
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]:    pidfile     /var/lib/neutron/external/pids/28b69a92-5b45-421b-9985-afeebc6820aa.pid.haproxy
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]:    daemon
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]: defaults
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]:    log global
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]:    mode http
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]:    option httplog
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]:    option dontlognull
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]:    option http-server-close
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]:    option forwardfor
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]:    retries                 3
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]:    timeout http-request    30s
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]:    timeout connect         30s
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]:    timeout client          32s
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]:    timeout server          32s
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]:    timeout http-keep-alive 30s
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]: listen listener
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]:    bind 169.254.169.254:80
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]:    http-request add-header X-OVN-Network-ID 28b69a92-5b45-421b-9985-afeebc6820aa
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 06:35:17 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:17.238 163757 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa', 'env', 'PROCESS_TAG=haproxy-28b69a92-5b45-421b-9985-afeebc6820aa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/28b69a92-5b45-421b-9985-afeebc6820aa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 06:35:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e477 do_prune osdmap full prune enabled
Dec  2 06:35:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e478 e478: 3 total, 3 up, 3 in
Dec  2 06:35:17 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e478: 3 total, 3 up, 3 in
Dec  2 06:35:17 np0005542249 podman[297354]: 2025-12-02 11:35:17.683931612 +0000 UTC m=+0.054240184 container create 7cc664f642f0295030b3d599080d4d09126932d21480e569081457a298876213 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  2 06:35:17 np0005542249 systemd[1]: Started libpod-conmon-7cc664f642f0295030b3d599080d4d09126932d21480e569081457a298876213.scope.
Dec  2 06:35:17 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:35:17 np0005542249 podman[297354]: 2025-12-02 11:35:17.662310548 +0000 UTC m=+0.032619140 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:35:17 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f766ec82b95fce8c377b7677091a6f74590fddc1ca0cb657923792f5d57fbac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 06:35:17 np0005542249 nova_compute[254900]: 2025-12-02 11:35:17.775 254904 DEBUG nova.compute.manager [req-683e586f-ac18-4c9f-baaa-ef0c63e001c2 req-9dcc435c-8ef9-419f-a86a-94d283fcffda 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Received event network-vif-plugged-f725ebd9-55ae-40bd-ab5c-a2e8c95b7752 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:35:17 np0005542249 nova_compute[254900]: 2025-12-02 11:35:17.777 254904 DEBUG oslo_concurrency.lockutils [req-683e586f-ac18-4c9f-baaa-ef0c63e001c2 req-9dcc435c-8ef9-419f-a86a-94d283fcffda 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "9579cc6b-d571-46a6-80c2-f8a0fb6b2672-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:35:17 np0005542249 nova_compute[254900]: 2025-12-02 11:35:17.778 254904 DEBUG oslo_concurrency.lockutils [req-683e586f-ac18-4c9f-baaa-ef0c63e001c2 req-9dcc435c-8ef9-419f-a86a-94d283fcffda 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "9579cc6b-d571-46a6-80c2-f8a0fb6b2672-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:35:17 np0005542249 nova_compute[254900]: 2025-12-02 11:35:17.778 254904 DEBUG oslo_concurrency.lockutils [req-683e586f-ac18-4c9f-baaa-ef0c63e001c2 req-9dcc435c-8ef9-419f-a86a-94d283fcffda 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "9579cc6b-d571-46a6-80c2-f8a0fb6b2672-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:35:17 np0005542249 nova_compute[254900]: 2025-12-02 11:35:17.779 254904 DEBUG nova.compute.manager [req-683e586f-ac18-4c9f-baaa-ef0c63e001c2 req-9dcc435c-8ef9-419f-a86a-94d283fcffda 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Processing event network-vif-plugged-f725ebd9-55ae-40bd-ab5c-a2e8c95b7752 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 06:35:17 np0005542249 podman[297354]: 2025-12-02 11:35:17.789412396 +0000 UTC m=+0.159720968 container init 7cc664f642f0295030b3d599080d4d09126932d21480e569081457a298876213 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  2 06:35:17 np0005542249 podman[297354]: 2025-12-02 11:35:17.796326183 +0000 UTC m=+0.166634755 container start 7cc664f642f0295030b3d599080d4d09126932d21480e569081457a298876213 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:35:17 np0005542249 neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa[297380]: [NOTICE]   (297384) : New worker (297386) forked
Dec  2 06:35:17 np0005542249 neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa[297380]: [NOTICE]   (297384) : Loading success.
Dec  2 06:35:18 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1844: 321 pgs: 321 active+clean; 202 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 28 KiB/s wr, 77 op/s
Dec  2 06:35:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e478 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:35:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e478 do_prune osdmap full prune enabled
Dec  2 06:35:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e479 e479: 3 total, 3 up, 3 in
Dec  2 06:35:18 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e479: 3 total, 3 up, 3 in
Dec  2 06:35:19 np0005542249 nova_compute[254900]: 2025-12-02 11:35:19.120 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:19.120 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:23:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:25:d0:a1:75:ed'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:35:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:19.122 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  2 06:35:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e479 do_prune osdmap full prune enabled
Dec  2 06:35:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e480 e480: 3 total, 3 up, 3 in
Dec  2 06:35:19 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e480: 3 total, 3 up, 3 in
Dec  2 06:35:19 np0005542249 nova_compute[254900]: 2025-12-02 11:35:19.795 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:19.850 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:35:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:19.851 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:35:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:19.852 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:35:19 np0005542249 nova_compute[254900]: 2025-12-02 11:35:19.936 254904 DEBUG nova.compute.manager [req-983d23f8-24ea-4378-ae05-67a4bfed2a61 req-de96f92a-22ea-492d-8da1-7a7db2ba537f 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Received event network-vif-plugged-f725ebd9-55ae-40bd-ab5c-a2e8c95b7752 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:35:19 np0005542249 nova_compute[254900]: 2025-12-02 11:35:19.937 254904 DEBUG oslo_concurrency.lockutils [req-983d23f8-24ea-4378-ae05-67a4bfed2a61 req-de96f92a-22ea-492d-8da1-7a7db2ba537f 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "9579cc6b-d571-46a6-80c2-f8a0fb6b2672-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:35:19 np0005542249 nova_compute[254900]: 2025-12-02 11:35:19.937 254904 DEBUG oslo_concurrency.lockutils [req-983d23f8-24ea-4378-ae05-67a4bfed2a61 req-de96f92a-22ea-492d-8da1-7a7db2ba537f 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "9579cc6b-d571-46a6-80c2-f8a0fb6b2672-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:35:19 np0005542249 nova_compute[254900]: 2025-12-02 11:35:19.937 254904 DEBUG oslo_concurrency.lockutils [req-983d23f8-24ea-4378-ae05-67a4bfed2a61 req-de96f92a-22ea-492d-8da1-7a7db2ba537f 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "9579cc6b-d571-46a6-80c2-f8a0fb6b2672-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:35:19 np0005542249 nova_compute[254900]: 2025-12-02 11:35:19.938 254904 DEBUG nova.compute.manager [req-983d23f8-24ea-4378-ae05-67a4bfed2a61 req-de96f92a-22ea-492d-8da1-7a7db2ba537f 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] No waiting events found dispatching network-vif-plugged-f725ebd9-55ae-40bd-ab5c-a2e8c95b7752 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:35:19 np0005542249 nova_compute[254900]: 2025-12-02 11:35:19.938 254904 WARNING nova.compute.manager [req-983d23f8-24ea-4378-ae05-67a4bfed2a61 req-de96f92a-22ea-492d-8da1-7a7db2ba537f 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Received unexpected event network-vif-plugged-f725ebd9-55ae-40bd-ab5c-a2e8c95b7752 for instance with vm_state building and task_state spawning.#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.128 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764675320.1279252, 9579cc6b-d571-46a6-80c2-f8a0fb6b2672 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.129 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] VM Started (Lifecycle Event)#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.132 254904 DEBUG nova.compute.manager [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.137 254904 DEBUG nova.virt.libvirt.driver [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.140 254904 INFO nova.virt.libvirt.driver [-] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Instance spawned successfully.#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.141 254904 DEBUG nova.virt.libvirt.driver [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.164 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.173 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.177 254904 DEBUG nova.virt.libvirt.driver [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.178 254904 DEBUG nova.virt.libvirt.driver [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.178 254904 DEBUG nova.virt.libvirt.driver [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.179 254904 DEBUG nova.virt.libvirt.driver [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.179 254904 DEBUG nova.virt.libvirt.driver [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.180 254904 DEBUG nova.virt.libvirt.driver [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.218 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.219 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764675320.128128, 9579cc6b-d571-46a6-80c2-f8a0fb6b2672 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.219 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] VM Paused (Lifecycle Event)#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.255 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.260 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764675320.136036, 9579cc6b-d571-46a6-80c2-f8a0fb6b2672 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.260 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] VM Resumed (Lifecycle Event)#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.272 254904 INFO nova.compute.manager [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Took 9.65 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.273 254904 DEBUG nova.compute.manager [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.420 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.426 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.463 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:35:20 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1847: 321 pgs: 321 active+clean; 202 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 28 KiB/s wr, 73 op/s
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.490 254904 INFO nova.compute.manager [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Took 12.38 seconds to build instance.#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.515 254904 DEBUG oslo_concurrency.lockutils [None req-f6f928a9-efbe-4977-8511-c25e9e4f70ed a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "9579cc6b-d571-46a6-80c2-f8a0fb6b2672" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.484s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:35:20 np0005542249 nova_compute[254900]: 2025-12-02 11:35:20.636 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:35:20 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1118494152' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:35:20 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:35:20 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1118494152' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:35:22 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:22.125 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:35:22 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1848: 321 pgs: 321 active+clean; 202 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 28 KiB/s wr, 141 op/s
Dec  2 06:35:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e480 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:35:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e480 do_prune osdmap full prune enabled
Dec  2 06:35:23 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e481 e481: 3 total, 3 up, 3 in
Dec  2 06:35:23 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e481: 3 total, 3 up, 3 in
Dec  2 06:35:24 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1850: 321 pgs: 321 active+clean; 202 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.7 KiB/s wr, 191 op/s
Dec  2 06:35:24 np0005542249 nova_compute[254900]: 2025-12-02 11:35:24.845 254904 DEBUG nova.compute.manager [req-fb7c6677-4e1a-4c0a-9cce-0329b9e63fce req-67f35471-efe7-40eb-90ad-9dc1c2a261d5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Received event network-changed-f725ebd9-55ae-40bd-ab5c-a2e8c95b7752 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:35:24 np0005542249 nova_compute[254900]: 2025-12-02 11:35:24.846 254904 DEBUG nova.compute.manager [req-fb7c6677-4e1a-4c0a-9cce-0329b9e63fce req-67f35471-efe7-40eb-90ad-9dc1c2a261d5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Refreshing instance network info cache due to event network-changed-f725ebd9-55ae-40bd-ab5c-a2e8c95b7752. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:35:24 np0005542249 nova_compute[254900]: 2025-12-02 11:35:24.847 254904 DEBUG oslo_concurrency.lockutils [req-fb7c6677-4e1a-4c0a-9cce-0329b9e63fce req-67f35471-efe7-40eb-90ad-9dc1c2a261d5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-9579cc6b-d571-46a6-80c2-f8a0fb6b2672" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:35:24 np0005542249 nova_compute[254900]: 2025-12-02 11:35:24.847 254904 DEBUG oslo_concurrency.lockutils [req-fb7c6677-4e1a-4c0a-9cce-0329b9e63fce req-67f35471-efe7-40eb-90ad-9dc1c2a261d5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-9579cc6b-d571-46a6-80c2-f8a0fb6b2672" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:35:24 np0005542249 nova_compute[254900]: 2025-12-02 11:35:24.847 254904 DEBUG nova.network.neutron [req-fb7c6677-4e1a-4c0a-9cce-0329b9e63fce req-67f35471-efe7-40eb-90ad-9dc1c2a261d5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Refreshing network info cache for port f725ebd9-55ae-40bd-ab5c-a2e8c95b7752 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:35:24 np0005542249 nova_compute[254900]: 2025-12-02 11:35:24.849 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:25 np0005542249 podman[297403]: 2025-12-02 11:35:25.017527166 +0000 UTC m=+0.082038303 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  2 06:35:25 np0005542249 nova_compute[254900]: 2025-12-02 11:35:25.639 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e481 do_prune osdmap full prune enabled
Dec  2 06:35:25 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e482 e482: 3 total, 3 up, 3 in
Dec  2 06:35:25 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e482: 3 total, 3 up, 3 in
Dec  2 06:35:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:35:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:35:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:35:26
Dec  2 06:35:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:35:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:35:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'backups', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'images', 'vms']
Dec  2 06:35:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:35:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:35:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:35:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:35:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:35:26 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1852: 321 pgs: 321 active+clean; 202 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.2 KiB/s wr, 168 op/s
Dec  2 06:35:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:35:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:35:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:35:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:35:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:35:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:35:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:35:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:35:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:35:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:35:26 np0005542249 nova_compute[254900]: 2025-12-02 11:35:26.880 254904 DEBUG nova.network.neutron [req-fb7c6677-4e1a-4c0a-9cce-0329b9e63fce req-67f35471-efe7-40eb-90ad-9dc1c2a261d5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Updated VIF entry in instance network info cache for port f725ebd9-55ae-40bd-ab5c-a2e8c95b7752. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:35:26 np0005542249 nova_compute[254900]: 2025-12-02 11:35:26.882 254904 DEBUG nova.network.neutron [req-fb7c6677-4e1a-4c0a-9cce-0329b9e63fce req-67f35471-efe7-40eb-90ad-9dc1c2a261d5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Updating instance_info_cache with network_info: [{"id": "f725ebd9-55ae-40bd-ab5c-a2e8c95b7752", "address": "fa:16:3e:f7:37:46", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf725ebd9-55", "ovs_interfaceid": "f725ebd9-55ae-40bd-ab5c-a2e8c95b7752", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:35:26 np0005542249 nova_compute[254900]: 2025-12-02 11:35:26.904 254904 DEBUG oslo_concurrency.lockutils [req-fb7c6677-4e1a-4c0a-9cce-0329b9e63fce req-67f35471-efe7-40eb-90ad-9dc1c2a261d5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-9579cc6b-d571-46a6-80c2-f8a0fb6b2672" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:35:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:35:27 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3434917701' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:35:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:35:27 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3434917701' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:35:28 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1853: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 202 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.5 KiB/s wr, 150 op/s
Dec  2 06:35:28 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e482 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:35:28 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e482 do_prune osdmap full prune enabled
Dec  2 06:35:28 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e483 e483: 3 total, 3 up, 3 in
Dec  2 06:35:28 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e483: 3 total, 3 up, 3 in
Dec  2 06:35:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e483 do_prune osdmap full prune enabled
Dec  2 06:35:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e484 e484: 3 total, 3 up, 3 in
Dec  2 06:35:29 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e484: 3 total, 3 up, 3 in
Dec  2 06:35:29 np0005542249 nova_compute[254900]: 2025-12-02 11:35:29.882 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:30 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1856: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 202 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 KiB/s wr, 47 op/s
Dec  2 06:35:30 np0005542249 nova_compute[254900]: 2025-12-02 11:35:30.642 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e484 do_prune osdmap full prune enabled
Dec  2 06:35:31 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e485 e485: 3 total, 3 up, 3 in
Dec  2 06:35:31 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e485: 3 total, 3 up, 3 in
Dec  2 06:35:31 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Dec  2 06:35:32 np0005542249 ceph-osd[88961]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Dec  2 06:35:32 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1858: 321 pgs: 321 active+clean; 202 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 2.0 KiB/s wr, 50 op/s
Dec  2 06:35:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:35:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1098130585' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:35:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:35:33 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1098130585' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:35:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:35:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e485 do_prune osdmap full prune enabled
Dec  2 06:35:33 np0005542249 ovn_controller[153849]: 2025-12-02T11:35:33Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f7:37:46 10.100.0.14
Dec  2 06:35:33 np0005542249 ovn_controller[153849]: 2025-12-02T11:35:33Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f7:37:46 10.100.0.14
Dec  2 06:35:33 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e486 e486: 3 total, 3 up, 3 in
Dec  2 06:35:33 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e486: 3 total, 3 up, 3 in
Dec  2 06:35:33 np0005542249 ceph-osd[89966]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Dec  2 06:35:34 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1860: 321 pgs: 321 active+clean; 220 MiB data, 619 MiB used, 59 GiB / 60 GiB avail; 884 KiB/s rd, 4.6 MiB/s wr, 181 op/s
Dec  2 06:35:34 np0005542249 nova_compute[254900]: 2025-12-02 11:35:34.911 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:35 np0005542249 podman[297425]: 2025-12-02 11:35:35.052983273 +0000 UTC m=+0.112571187 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec  2 06:35:35 np0005542249 podman[297426]: 2025-12-02 11:35:35.101042839 +0000 UTC m=+0.147914550 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec  2 06:35:35 np0005542249 nova_compute[254900]: 2025-12-02 11:35:35.645 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e486 do_prune osdmap full prune enabled
Dec  2 06:35:35 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e487 e487: 3 total, 3 up, 3 in
Dec  2 06:35:35 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e487: 3 total, 3 up, 3 in
Dec  2 06:35:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:35:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:35:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:35:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:35:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Dec  2 06:35:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:35:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0023867500366282477 of space, bias 1.0, pg target 0.7160250109884743 quantized to 32 (current 32)
Dec  2 06:35:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:35:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  2 06:35:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:35:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Dec  2 06:35:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:35:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:35:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:35:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:35:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:35:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:35:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:35:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:35:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:35:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:35:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:35:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:35:36 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1862: 321 pgs: 321 active+clean; 220 MiB data, 619 MiB used, 59 GiB / 60 GiB avail; 819 KiB/s rd, 4.4 MiB/s wr, 137 op/s
Dec  2 06:35:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e487 do_prune osdmap full prune enabled
Dec  2 06:35:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e488 e488: 3 total, 3 up, 3 in
Dec  2 06:35:38 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e488: 3 total, 3 up, 3 in
Dec  2 06:35:38 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1864: 321 pgs: 321 active+clean; 269 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 11 MiB/s wr, 220 op/s
Dec  2 06:35:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e488 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:35:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:35:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4069239049' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:35:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:35:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4069239049' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:35:39 np0005542249 nova_compute[254900]: 2025-12-02 11:35:39.970 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:40 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1865: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 661 KiB/s rd, 9.2 MiB/s wr, 137 op/s
Dec  2 06:35:40 np0005542249 nova_compute[254900]: 2025-12-02 11:35:40.646 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:40 np0005542249 nova_compute[254900]: 2025-12-02 11:35:40.937 254904 DEBUG oslo_concurrency.lockutils [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "9579cc6b-d571-46a6-80c2-f8a0fb6b2672" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:35:40 np0005542249 nova_compute[254900]: 2025-12-02 11:35:40.938 254904 DEBUG oslo_concurrency.lockutils [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "9579cc6b-d571-46a6-80c2-f8a0fb6b2672" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:35:40 np0005542249 nova_compute[254900]: 2025-12-02 11:35:40.939 254904 DEBUG oslo_concurrency.lockutils [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "9579cc6b-d571-46a6-80c2-f8a0fb6b2672-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:35:40 np0005542249 nova_compute[254900]: 2025-12-02 11:35:40.939 254904 DEBUG oslo_concurrency.lockutils [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "9579cc6b-d571-46a6-80c2-f8a0fb6b2672-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:35:40 np0005542249 nova_compute[254900]: 2025-12-02 11:35:40.939 254904 DEBUG oslo_concurrency.lockutils [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "9579cc6b-d571-46a6-80c2-f8a0fb6b2672-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:35:40 np0005542249 nova_compute[254900]: 2025-12-02 11:35:40.941 254904 INFO nova.compute.manager [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Terminating instance#033[00m
Dec  2 06:35:40 np0005542249 nova_compute[254900]: 2025-12-02 11:35:40.943 254904 DEBUG nova.compute.manager [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:35:40 np0005542249 kernel: tapf725ebd9-55 (unregistering): left promiscuous mode
Dec  2 06:35:41 np0005542249 NetworkManager[48987]: <info>  [1764675341.0041] device (tapf725ebd9-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:35:41 np0005542249 ovn_controller[153849]: 2025-12-02T11:35:41Z|00276|binding|INFO|Releasing lport f725ebd9-55ae-40bd-ab5c-a2e8c95b7752 from this chassis (sb_readonly=0)
Dec  2 06:35:41 np0005542249 ovn_controller[153849]: 2025-12-02T11:35:41Z|00277|binding|INFO|Setting lport f725ebd9-55ae-40bd-ab5c-a2e8c95b7752 down in Southbound
Dec  2 06:35:41 np0005542249 ovn_controller[153849]: 2025-12-02T11:35:41Z|00278|binding|INFO|Removing iface tapf725ebd9-55 ovn-installed in OVS
Dec  2 06:35:41 np0005542249 nova_compute[254900]: 2025-12-02 11:35:41.056 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:41 np0005542249 nova_compute[254900]: 2025-12-02 11:35:41.083 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:41 np0005542249 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Dec  2 06:35:41 np0005542249 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001d.scope: Consumed 17.072s CPU time.
Dec  2 06:35:41 np0005542249 systemd-machined[216222]: Machine qemu-29-instance-0000001d terminated.
Dec  2 06:35:41 np0005542249 nova_compute[254900]: 2025-12-02 11:35:41.176 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:41.185 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:37:46 10.100.0.14'], port_security=['fa:16:3e:f7:37:46 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '9579cc6b-d571-46a6-80c2-f8a0fb6b2672', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28b69a92-5b45-421b-9985-afeebc6820aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58574186a4fd405e83f1a4b650ea8e8c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4140d653-003b-4989-8de5-3e8bad7f5c96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.197'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81aeb855-c9bf-4f95-90d1-85f514f075e1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=f725ebd9-55ae-40bd-ab5c-a2e8c95b7752) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:35:41 np0005542249 nova_compute[254900]: 2025-12-02 11:35:41.188 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:41.189 163757 INFO neutron.agent.ovn.metadata.agent [-] Port f725ebd9-55ae-40bd-ab5c-a2e8c95b7752 in datapath 28b69a92-5b45-421b-9985-afeebc6820aa unbound from our chassis#033[00m
Dec  2 06:35:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:41.191 163757 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 28b69a92-5b45-421b-9985-afeebc6820aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 06:35:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:41.193 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[d4fc60a4-d880-47e2-a860-ef651e01e618]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:41.194 163757 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa namespace which is not needed anymore#033[00m
Dec  2 06:35:41 np0005542249 nova_compute[254900]: 2025-12-02 11:35:41.200 254904 INFO nova.virt.libvirt.driver [-] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Instance destroyed successfully.#033[00m
Dec  2 06:35:41 np0005542249 nova_compute[254900]: 2025-12-02 11:35:41.201 254904 DEBUG nova.objects.instance [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lazy-loading 'resources' on Instance uuid 9579cc6b-d571-46a6-80c2-f8a0fb6b2672 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:35:41 np0005542249 neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa[297380]: [NOTICE]   (297384) : haproxy version is 2.8.14-c23fe91
Dec  2 06:35:41 np0005542249 neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa[297380]: [NOTICE]   (297384) : path to executable is /usr/sbin/haproxy
Dec  2 06:35:41 np0005542249 neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa[297380]: [WARNING]  (297384) : Exiting Master process...
Dec  2 06:35:41 np0005542249 neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa[297380]: [ALERT]    (297384) : Current worker (297386) exited with code 143 (Terminated)
Dec  2 06:35:41 np0005542249 neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa[297380]: [WARNING]  (297384) : All workers exited. Exiting... (0)
Dec  2 06:35:41 np0005542249 systemd[1]: libpod-7cc664f642f0295030b3d599080d4d09126932d21480e569081457a298876213.scope: Deactivated successfully.
Dec  2 06:35:41 np0005542249 podman[297501]: 2025-12-02 11:35:41.403600117 +0000 UTC m=+0.071507511 container died 7cc664f642f0295030b3d599080d4d09126932d21480e569081457a298876213 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  2 06:35:41 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7cc664f642f0295030b3d599080d4d09126932d21480e569081457a298876213-userdata-shm.mount: Deactivated successfully.
Dec  2 06:35:41 np0005542249 systemd[1]: var-lib-containers-storage-overlay-0f766ec82b95fce8c377b7677091a6f74590fddc1ca0cb657923792f5d57fbac-merged.mount: Deactivated successfully.
Dec  2 06:35:41 np0005542249 podman[297501]: 2025-12-02 11:35:41.467785387 +0000 UTC m=+0.135692761 container cleanup 7cc664f642f0295030b3d599080d4d09126932d21480e569081457a298876213 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  2 06:35:41 np0005542249 systemd[1]: libpod-conmon-7cc664f642f0295030b3d599080d4d09126932d21480e569081457a298876213.scope: Deactivated successfully.
Dec  2 06:35:41 np0005542249 nova_compute[254900]: 2025-12-02 11:35:41.528 254904 DEBUG nova.virt.libvirt.vif [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:35:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-718564893',display_name='tempest-TestEncryptedCinderVolumes-server-718564893',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-718564893',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK920rwaN7hvXue9KA1NjrUwvtvK954cG7ZuCXgqFd9X1K1nkCEcuVbCcgefDelXRvQjOoRaTNZtPceNbknmWJHmkwM014+LbqRvx6BhJSogI4x+qdIBG/Zp5TIVdDeUYQ==',key_name='tempest-TestEncryptedCinderVolumes-250014646',keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:35:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='58574186a4fd405e83f1a4b650ea8e8c',ramdisk_id='',reservation_id='r-x1sx0ti7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestEncryptedCinderVolumes-337876243',owner_user_name='tempest-TestEncryptedCinderVolumes-337876243-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:35:20Z,user_data=None,user_id='a003d9cef7684ec48ed996b22c11419e',uuid=9579cc6b-d571-46a6-80c2-f8a0fb6b2672,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f725ebd9-55ae-40bd-ab5c-a2e8c95b7752", "address": "fa:16:3e:f7:37:46", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf725ebd9-55", "ovs_interfaceid": "f725ebd9-55ae-40bd-ab5c-a2e8c95b7752", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:35:41 np0005542249 nova_compute[254900]: 2025-12-02 11:35:41.529 254904 DEBUG nova.network.os_vif_util [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Converting VIF {"id": "f725ebd9-55ae-40bd-ab5c-a2e8c95b7752", "address": "fa:16:3e:f7:37:46", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf725ebd9-55", "ovs_interfaceid": "f725ebd9-55ae-40bd-ab5c-a2e8c95b7752", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:35:41 np0005542249 nova_compute[254900]: 2025-12-02 11:35:41.530 254904 DEBUG nova.network.os_vif_util [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f7:37:46,bridge_name='br-int',has_traffic_filtering=True,id=f725ebd9-55ae-40bd-ab5c-a2e8c95b7752,network=Network(28b69a92-5b45-421b-9985-afeebc6820aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf725ebd9-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:35:41 np0005542249 nova_compute[254900]: 2025-12-02 11:35:41.531 254904 DEBUG os_vif [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:37:46,bridge_name='br-int',has_traffic_filtering=True,id=f725ebd9-55ae-40bd-ab5c-a2e8c95b7752,network=Network(28b69a92-5b45-421b-9985-afeebc6820aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf725ebd9-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:35:41 np0005542249 nova_compute[254900]: 2025-12-02 11:35:41.533 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:41 np0005542249 nova_compute[254900]: 2025-12-02 11:35:41.534 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf725ebd9-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:35:41 np0005542249 nova_compute[254900]: 2025-12-02 11:35:41.536 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:41 np0005542249 nova_compute[254900]: 2025-12-02 11:35:41.538 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:35:41 np0005542249 nova_compute[254900]: 2025-12-02 11:35:41.541 254904 INFO os_vif [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:37:46,bridge_name='br-int',has_traffic_filtering=True,id=f725ebd9-55ae-40bd-ab5c-a2e8c95b7752,network=Network(28b69a92-5b45-421b-9985-afeebc6820aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf725ebd9-55')#033[00m
Dec  2 06:35:41 np0005542249 podman[297532]: 2025-12-02 11:35:41.564791184 +0000 UTC m=+0.060444162 container remove 7cc664f642f0295030b3d599080d4d09126932d21480e569081457a298876213 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 06:35:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:41.576 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[45123c52-8de2-43fc-ad25-8a81f95db480]: (4, ('Tue Dec  2 11:35:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa (7cc664f642f0295030b3d599080d4d09126932d21480e569081457a298876213)\n7cc664f642f0295030b3d599080d4d09126932d21480e569081457a298876213\nTue Dec  2 11:35:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa (7cc664f642f0295030b3d599080d4d09126932d21480e569081457a298876213)\n7cc664f642f0295030b3d599080d4d09126932d21480e569081457a298876213\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:41.579 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[a845fee7-ed05-400b-8e74-2174b3f10d74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:41.581 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28b69a92-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:35:41 np0005542249 nova_compute[254900]: 2025-12-02 11:35:41.583 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:41 np0005542249 kernel: tap28b69a92-50: left promiscuous mode
Dec  2 06:35:41 np0005542249 nova_compute[254900]: 2025-12-02 11:35:41.617 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:41.623 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[1239cb7a-1a25-404a-8e7a-f8d1473cf3b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:41.646 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[c24e777b-65ef-4517-ab25-0702b00c1853]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:41.648 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[ed59ac05-66ae-4e04-866c-b3de7103bca4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:41.680 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[bf2a29f7-ec3e-433c-9c09-aa81cb2467c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550968, 'reachable_time': 27878, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297565, 'error': None, 'target': 'ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:41.685 164036 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 06:35:41 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:41.685 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[b33d5c06-ab7b-4149-8a2b-d758ae4822f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:41 np0005542249 systemd[1]: run-netns-ovnmeta\x2d28b69a92\x2d5b45\x2d421b\x2d9985\x2dafeebc6820aa.mount: Deactivated successfully.
Dec  2 06:35:41 np0005542249 nova_compute[254900]: 2025-12-02 11:35:41.799 254904 INFO nova.virt.libvirt.driver [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Deleting instance files /var/lib/nova/instances/9579cc6b-d571-46a6-80c2-f8a0fb6b2672_del#033[00m
Dec  2 06:35:41 np0005542249 nova_compute[254900]: 2025-12-02 11:35:41.800 254904 INFO nova.virt.libvirt.driver [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Deletion of /var/lib/nova/instances/9579cc6b-d571-46a6-80c2-f8a0fb6b2672_del complete#033[00m
Dec  2 06:35:42 np0005542249 nova_compute[254900]: 2025-12-02 11:35:42.244 254904 INFO nova.compute.manager [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Took 1.30 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:35:42 np0005542249 nova_compute[254900]: 2025-12-02 11:35:42.245 254904 DEBUG oslo.service.loopingcall [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:35:42 np0005542249 nova_compute[254900]: 2025-12-02 11:35:42.246 254904 DEBUG nova.compute.manager [-] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:35:42 np0005542249 nova_compute[254900]: 2025-12-02 11:35:42.246 254904 DEBUG nova.network.neutron [-] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:35:42 np0005542249 nova_compute[254900]: 2025-12-02 11:35:42.384 254904 DEBUG nova.compute.manager [req-fb7d465a-72be-43b4-9a5b-dff6a66418f4 req-6395d9ea-13a9-4986-8c62-17144b812ba1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Received event network-vif-unplugged-f725ebd9-55ae-40bd-ab5c-a2e8c95b7752 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:35:42 np0005542249 nova_compute[254900]: 2025-12-02 11:35:42.385 254904 DEBUG oslo_concurrency.lockutils [req-fb7d465a-72be-43b4-9a5b-dff6a66418f4 req-6395d9ea-13a9-4986-8c62-17144b812ba1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "9579cc6b-d571-46a6-80c2-f8a0fb6b2672-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:35:42 np0005542249 nova_compute[254900]: 2025-12-02 11:35:42.385 254904 DEBUG oslo_concurrency.lockutils [req-fb7d465a-72be-43b4-9a5b-dff6a66418f4 req-6395d9ea-13a9-4986-8c62-17144b812ba1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "9579cc6b-d571-46a6-80c2-f8a0fb6b2672-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:35:42 np0005542249 nova_compute[254900]: 2025-12-02 11:35:42.386 254904 DEBUG oslo_concurrency.lockutils [req-fb7d465a-72be-43b4-9a5b-dff6a66418f4 req-6395d9ea-13a9-4986-8c62-17144b812ba1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "9579cc6b-d571-46a6-80c2-f8a0fb6b2672-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:35:42 np0005542249 nova_compute[254900]: 2025-12-02 11:35:42.386 254904 DEBUG nova.compute.manager [req-fb7d465a-72be-43b4-9a5b-dff6a66418f4 req-6395d9ea-13a9-4986-8c62-17144b812ba1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] No waiting events found dispatching network-vif-unplugged-f725ebd9-55ae-40bd-ab5c-a2e8c95b7752 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:35:42 np0005542249 nova_compute[254900]: 2025-12-02 11:35:42.387 254904 DEBUG nova.compute.manager [req-fb7d465a-72be-43b4-9a5b-dff6a66418f4 req-6395d9ea-13a9-4986-8c62-17144b812ba1 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Received event network-vif-unplugged-f725ebd9-55ae-40bd-ab5c-a2e8c95b7752 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 06:35:42 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1866: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 278 KiB/s rd, 5.4 MiB/s wr, 118 op/s
Dec  2 06:35:43 np0005542249 nova_compute[254900]: 2025-12-02 11:35:43.244 254904 DEBUG nova.network.neutron [-] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:35:43 np0005542249 nova_compute[254900]: 2025-12-02 11:35:43.272 254904 INFO nova.compute.manager [-] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Took 1.03 seconds to deallocate network for instance.#033[00m
Dec  2 06:35:43 np0005542249 nova_compute[254900]: 2025-12-02 11:35:43.407 254904 DEBUG nova.compute.manager [req-fdfaea68-ccd2-4bc0-9058-46401c1546ea req-5a033b59-0f77-4867-93bd-2e4de9513809 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Received event network-vif-deleted-f725ebd9-55ae-40bd-ab5c-a2e8c95b7752 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:35:43 np0005542249 nova_compute[254900]: 2025-12-02 11:35:43.522 254904 INFO nova.compute.manager [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Took 0.25 seconds to detach 1 volumes for instance.#033[00m
Dec  2 06:35:43 np0005542249 nova_compute[254900]: 2025-12-02 11:35:43.564 254904 DEBUG oslo_concurrency.lockutils [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:35:43 np0005542249 nova_compute[254900]: 2025-12-02 11:35:43.564 254904 DEBUG oslo_concurrency.lockutils [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:35:43 np0005542249 nova_compute[254900]: 2025-12-02 11:35:43.615 254904 DEBUG oslo_concurrency.processutils [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:35:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e488 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:35:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e488 do_prune osdmap full prune enabled
Dec  2 06:35:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e489 e489: 3 total, 3 up, 3 in
Dec  2 06:35:43 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e489: 3 total, 3 up, 3 in
Dec  2 06:35:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:35:44 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3040933905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:35:44 np0005542249 nova_compute[254900]: 2025-12-02 11:35:44.095 254904 DEBUG oslo_concurrency.processutils [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:35:44 np0005542249 nova_compute[254900]: 2025-12-02 11:35:44.107 254904 DEBUG nova.compute.provider_tree [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  2 06:35:44 np0005542249 nova_compute[254900]: 2025-12-02 11:35:44.131 254904 DEBUG nova.scheduler.client.report [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  2 06:35:44 np0005542249 nova_compute[254900]: 2025-12-02 11:35:44.164 254904 DEBUG oslo_concurrency.lockutils [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:35:44 np0005542249 nova_compute[254900]: 2025-12-02 11:35:44.192 254904 INFO nova.scheduler.client.report [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Deleted allocations for instance 9579cc6b-d571-46a6-80c2-f8a0fb6b2672
Dec  2 06:35:44 np0005542249 nova_compute[254900]: 2025-12-02 11:35:44.281 254904 DEBUG oslo_concurrency.lockutils [None req-f754de90-838e-4a18-b6a8-846f259f2d76 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "9579cc6b-d571-46a6-80c2-f8a0fb6b2672" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.343s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:35:44 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1868: 321 pgs: 321 active+clean; 271 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 5.4 MiB/s wr, 124 op/s
Dec  2 06:35:44 np0005542249 nova_compute[254900]: 2025-12-02 11:35:44.715 254904 DEBUG nova.compute.manager [req-ede1ebcc-5c80-4d1d-a1cc-497c7caf9ce5 req-1c2ac2a7-077d-4bb0-9619-867f55477997 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Received event network-vif-plugged-f725ebd9-55ae-40bd-ab5c-a2e8c95b7752 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  2 06:35:44 np0005542249 nova_compute[254900]: 2025-12-02 11:35:44.715 254904 DEBUG oslo_concurrency.lockutils [req-ede1ebcc-5c80-4d1d-a1cc-497c7caf9ce5 req-1c2ac2a7-077d-4bb0-9619-867f55477997 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "9579cc6b-d571-46a6-80c2-f8a0fb6b2672-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:35:44 np0005542249 nova_compute[254900]: 2025-12-02 11:35:44.716 254904 DEBUG oslo_concurrency.lockutils [req-ede1ebcc-5c80-4d1d-a1cc-497c7caf9ce5 req-1c2ac2a7-077d-4bb0-9619-867f55477997 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "9579cc6b-d571-46a6-80c2-f8a0fb6b2672-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:35:44 np0005542249 nova_compute[254900]: 2025-12-02 11:35:44.716 254904 DEBUG oslo_concurrency.lockutils [req-ede1ebcc-5c80-4d1d-a1cc-497c7caf9ce5 req-1c2ac2a7-077d-4bb0-9619-867f55477997 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "9579cc6b-d571-46a6-80c2-f8a0fb6b2672-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:35:44 np0005542249 nova_compute[254900]: 2025-12-02 11:35:44.716 254904 DEBUG nova.compute.manager [req-ede1ebcc-5c80-4d1d-a1cc-497c7caf9ce5 req-1c2ac2a7-077d-4bb0-9619-867f55477997 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] No waiting events found dispatching network-vif-plugged-f725ebd9-55ae-40bd-ab5c-a2e8c95b7752 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  2 06:35:44 np0005542249 nova_compute[254900]: 2025-12-02 11:35:44.716 254904 WARNING nova.compute.manager [req-ede1ebcc-5c80-4d1d-a1cc-497c7caf9ce5 req-1c2ac2a7-077d-4bb0-9619-867f55477997 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Received unexpected event network-vif-plugged-f725ebd9-55ae-40bd-ab5c-a2e8c95b7752 for instance with vm_state deleted and task_state None.
Dec  2 06:35:44 np0005542249 nova_compute[254900]: 2025-12-02 11:35:44.972 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:35:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e489 do_prune osdmap full prune enabled
Dec  2 06:35:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e490 e490: 3 total, 3 up, 3 in
Dec  2 06:35:45 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e490: 3 total, 3 up, 3 in
Dec  2 06:35:46 np0005542249 nova_compute[254900]: 2025-12-02 11:35:46.397 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 06:35:46 np0005542249 nova_compute[254900]: 2025-12-02 11:35:46.398 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  2 06:35:46 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1870: 321 pgs: 321 active+clean; 271 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 283 KiB/s wr, 60 op/s
Dec  2 06:35:46 np0005542249 nova_compute[254900]: 2025-12-02 11:35:46.537 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:35:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:35:46 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2378064398' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:35:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e490 do_prune osdmap full prune enabled
Dec  2 06:35:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e491 e491: 3 total, 3 up, 3 in
Dec  2 06:35:48 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e491: 3 total, 3 up, 3 in
Dec  2 06:35:48 np0005542249 nova_compute[254900]: 2025-12-02 11:35:48.383 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 06:35:48 np0005542249 nova_compute[254900]: 2025-12-02 11:35:48.384 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  2 06:35:48 np0005542249 nova_compute[254900]: 2025-12-02 11:35:48.385 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  2 06:35:48 np0005542249 nova_compute[254900]: 2025-12-02 11:35:48.400 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  2 06:35:48 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1872: 321 pgs: 321 active+clean; 271 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 15 KiB/s wr, 43 op/s
Dec  2 06:35:48 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e491 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:35:49 np0005542249 nova_compute[254900]: 2025-12-02 11:35:49.975 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:35:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e491 do_prune osdmap full prune enabled
Dec  2 06:35:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e492 e492: 3 total, 3 up, 3 in
Dec  2 06:35:50 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e492: 3 total, 3 up, 3 in
Dec  2 06:35:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:35:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3661029035' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:35:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:35:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3661029035' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:35:50 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1874: 321 pgs: 321 active+clean; 271 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 50 op/s
Dec  2 06:35:51 np0005542249 nova_compute[254900]: 2025-12-02 11:35:51.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 06:35:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:35:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3459589856' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:35:51 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:35:51 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3459589856' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:35:51 np0005542249 nova_compute[254900]: 2025-12-02 11:35:51.540 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:35:52 np0005542249 nova_compute[254900]: 2025-12-02 11:35:52.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  2 06:35:52 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1875: 321 pgs: 321 active+clean; 271 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 4.6 KiB/s wr, 92 op/s
Dec  2 06:35:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:35:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:35:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:35:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:35:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:35:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:35:52 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 51a3af60-f332-4f22-a37e-0e095fd601c7 does not exist
Dec  2 06:35:52 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 923637b2-4e07-4c18-a5f4-2ce60f07019f does not exist
Dec  2 06:35:52 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 4895c9e5-8aa2-4db0-bc13-aaf06e7de25d does not exist
Dec  2 06:35:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:35:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:35:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:35:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:35:52 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:35:52 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:35:52 np0005542249 nova_compute[254900]: 2025-12-02 11:35:52.924 254904 DEBUG oslo_concurrency.lockutils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "3fa2207a-fc9e-44b7-9356-3c2d1ba98e87" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:35:52 np0005542249 nova_compute[254900]: 2025-12-02 11:35:52.926 254904 DEBUG oslo_concurrency.lockutils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "3fa2207a-fc9e-44b7-9356-3c2d1ba98e87" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:35:52 np0005542249 nova_compute[254900]: 2025-12-02 11:35:52.943 254904 DEBUG nova.compute.manager [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  2 06:35:53 np0005542249 nova_compute[254900]: 2025-12-02 11:35:53.023 254904 DEBUG oslo_concurrency.lockutils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:35:53 np0005542249 nova_compute[254900]: 2025-12-02 11:35:53.024 254904 DEBUG oslo_concurrency.lockutils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:35:53 np0005542249 nova_compute[254900]: 2025-12-02 11:35:53.033 254904 DEBUG nova.virt.hardware [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  2 06:35:53 np0005542249 nova_compute[254900]: 2025-12-02 11:35:53.034 254904 INFO nova.compute.claims [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Claim successful on node compute-0.ctlplane.example.com
Dec  2 06:35:53 np0005542249 nova_compute[254900]: 2025-12-02 11:35:53.152 254904 DEBUG oslo_concurrency.processutils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  2 06:35:53 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:35:53 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:35:53 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:35:53 np0005542249 podman[297878]: 2025-12-02 11:35:53.543300587 +0000 UTC m=+0.065437957 container create 7cdea54176123ce3db5ef87e4f7c7732c8d618c456b31c3987c945060213a59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_williamson, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:35:53 np0005542249 podman[297878]: 2025-12-02 11:35:53.508398915 +0000 UTC m=+0.030536295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:35:53 np0005542249 systemd[1]: Started libpod-conmon-7cdea54176123ce3db5ef87e4f7c7732c8d618c456b31c3987c945060213a59c.scope.
Dec  2 06:35:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:35:53 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1839701796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:35:53 np0005542249 nova_compute[254900]: 2025-12-02 11:35:53.648 254904 DEBUG oslo_concurrency.processutils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  2 06:35:53 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:35:53 np0005542249 nova_compute[254900]: 2025-12-02 11:35:53.658 254904 DEBUG nova.compute.provider_tree [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  2 06:35:53 np0005542249 podman[297878]: 2025-12-02 11:35:53.681322799 +0000 UTC m=+0.203460229 container init 7cdea54176123ce3db5ef87e4f7c7732c8d618c456b31c3987c945060213a59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 06:35:53 np0005542249 podman[297878]: 2025-12-02 11:35:53.690750423 +0000 UTC m=+0.212887813 container start 7cdea54176123ce3db5ef87e4f7c7732c8d618c456b31c3987c945060213a59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:35:53 np0005542249 awesome_williamson[297893]: 167 167
Dec  2 06:35:53 np0005542249 systemd[1]: libpod-7cdea54176123ce3db5ef87e4f7c7732c8d618c456b31c3987c945060213a59c.scope: Deactivated successfully.
Dec  2 06:35:53 np0005542249 podman[297878]: 2025-12-02 11:35:53.697931827 +0000 UTC m=+0.220069207 container attach 7cdea54176123ce3db5ef87e4f7c7732c8d618c456b31c3987c945060213a59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_williamson, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:35:53 np0005542249 podman[297878]: 2025-12-02 11:35:53.698545794 +0000 UTC m=+0.220683164 container died 7cdea54176123ce3db5ef87e4f7c7732c8d618c456b31c3987c945060213a59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_williamson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  2 06:35:53 np0005542249 systemd[1]: var-lib-containers-storage-overlay-f8612ce0b5a95a6d501d860d6f81beb1cbc27b875514485ff4edb439ab5bb441-merged.mount: Deactivated successfully.
Dec  2 06:35:53 np0005542249 podman[297878]: 2025-12-02 11:35:53.760455954 +0000 UTC m=+0.282593304 container remove 7cdea54176123ce3db5ef87e4f7c7732c8d618c456b31c3987c945060213a59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:35:53 np0005542249 systemd[1]: libpod-conmon-7cdea54176123ce3db5ef87e4f7c7732c8d618c456b31c3987c945060213a59c.scope: Deactivated successfully.
Dec  2 06:35:53 np0005542249 nova_compute[254900]: 2025-12-02 11:35:53.790 254904 DEBUG nova.scheduler.client.report [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  2 06:35:53 np0005542249 nova_compute[254900]: 2025-12-02 11:35:53.821 254904 DEBUG oslo_concurrency.lockutils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:35:53 np0005542249 nova_compute[254900]: 2025-12-02 11:35:53.822 254904 DEBUG nova.compute.manager [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  2 06:35:53 np0005542249 nova_compute[254900]: 2025-12-02 11:35:53.880 254904 DEBUG nova.compute.manager [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  2 06:35:53 np0005542249 nova_compute[254900]: 2025-12-02 11:35:53.881 254904 DEBUG nova.network.neutron [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  2 06:35:53 np0005542249 nova_compute[254900]: 2025-12-02 11:35:53.911 254904 INFO nova.virt.libvirt.driver [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  2 06:35:53 np0005542249 nova_compute[254900]: 2025-12-02 11:35:53.935 254904 DEBUG nova.compute.manager [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  2 06:35:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e492 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:35:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e492 do_prune osdmap full prune enabled
Dec  2 06:35:53 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e493 e493: 3 total, 3 up, 3 in
Dec  2 06:35:53 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e493: 3 total, 3 up, 3 in
Dec  2 06:35:53 np0005542249 podman[297917]: 2025-12-02 11:35:53.993593121 +0000 UTC m=+0.071917330 container create 4fcbf222c6a2e88696906def2c046ae6c52ea088dfbd2021070a92634d1250d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mestorf, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  2 06:35:53 np0005542249 nova_compute[254900]: 2025-12-02 11:35:53.997 254904 INFO nova.virt.block_device [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Booting with volume 872da8d4-b22a-41b4-b607-ea71915c01b5 at /dev/vda
Dec  2 06:35:54 np0005542249 systemd[1]: Started libpod-conmon-4fcbf222c6a2e88696906def2c046ae6c52ea088dfbd2021070a92634d1250d4.scope.
Dec  2 06:35:54 np0005542249 podman[297917]: 2025-12-02 11:35:53.966287115 +0000 UTC m=+0.044611374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:35:54 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:35:54 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d939741a74e5b523156f5ad6a16e6709b7d145fa2e4f9a24f48250551c509a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:35:54 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d939741a74e5b523156f5ad6a16e6709b7d145fa2e4f9a24f48250551c509a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:35:54 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d939741a74e5b523156f5ad6a16e6709b7d145fa2e4f9a24f48250551c509a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:35:54 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d939741a74e5b523156f5ad6a16e6709b7d145fa2e4f9a24f48250551c509a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:35:54 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d939741a74e5b523156f5ad6a16e6709b7d145fa2e4f9a24f48250551c509a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:35:54 np0005542249 podman[297917]: 2025-12-02 11:35:54.122054816 +0000 UTC m=+0.200379015 container init 4fcbf222c6a2e88696906def2c046ae6c52ea088dfbd2021070a92634d1250d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mestorf, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  2 06:35:54 np0005542249 podman[297917]: 2025-12-02 11:35:54.135792687 +0000 UTC m=+0.214116896 container start 4fcbf222c6a2e88696906def2c046ae6c52ea088dfbd2021070a92634d1250d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  2 06:35:54 np0005542249 podman[297917]: 2025-12-02 11:35:54.142452617 +0000 UTC m=+0.220776826 container attach 4fcbf222c6a2e88696906def2c046ae6c52ea088dfbd2021070a92634d1250d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mestorf, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  2 06:35:54 np0005542249 nova_compute[254900]: 2025-12-02 11:35:54.173 254904 DEBUG os_brick.utils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  2 06:35:54 np0005542249 nova_compute[254900]: 2025-12-02 11:35:54.174 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:35:54 np0005542249 nova_compute[254900]: 2025-12-02 11:35:54.188 262759 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:35:54 np0005542249 nova_compute[254900]: 2025-12-02 11:35:54.189 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[b160d1d6-591b-4b4e-8905-5d40fd3485ea]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:54 np0005542249 nova_compute[254900]: 2025-12-02 11:35:54.190 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:35:54 np0005542249 nova_compute[254900]: 2025-12-02 11:35:54.199 262759 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:35:54 np0005542249 nova_compute[254900]: 2025-12-02 11:35:54.199 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[f65e2c86-e621-4dfd-b8a9-43011d4c34c7]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2cd459f5c5a1', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:54 np0005542249 nova_compute[254900]: 2025-12-02 11:35:54.201 262759 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:35:54 np0005542249 nova_compute[254900]: 2025-12-02 11:35:54.216 262759 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:35:54 np0005542249 nova_compute[254900]: 2025-12-02 11:35:54.217 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[cf39d33f-e909-4378-a041-e762c3d76aaf]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:54 np0005542249 nova_compute[254900]: 2025-12-02 11:35:54.218 262759 DEBUG oslo.privsep.daemon [-] privsep: reply[00591777-a210-4c9c-a5d9-39668bcaa704]: (4, 'b5d8029e-bce4-4398-9c24-ad4d219021cb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:54 np0005542249 nova_compute[254900]: 2025-12-02 11:35:54.219 254904 DEBUG oslo_concurrency.processutils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:35:54 np0005542249 nova_compute[254900]: 2025-12-02 11:35:54.259 254904 DEBUG nova.policy [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a003d9cef7684ec48ed996b22c11419e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '58574186a4fd405e83f1a4b650ea8e8c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  2 06:35:54 np0005542249 nova_compute[254900]: 2025-12-02 11:35:54.266 254904 DEBUG oslo_concurrency.processutils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CMD "nvme version" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:35:54 np0005542249 nova_compute[254900]: 2025-12-02 11:35:54.269 254904 DEBUG os_brick.initiator.connectors.lightos [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  2 06:35:54 np0005542249 nova_compute[254900]: 2025-12-02 11:35:54.270 254904 DEBUG os_brick.initiator.connectors.lightos [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  2 06:35:54 np0005542249 nova_compute[254900]: 2025-12-02 11:35:54.270 254904 DEBUG os_brick.initiator.connectors.lightos [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  2 06:35:54 np0005542249 nova_compute[254900]: 2025-12-02 11:35:54.271 254904 DEBUG os_brick.utils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] <== get_connector_properties: return (97ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2cd459f5c5a1', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'b5d8029e-bce4-4398-9c24-ad4d219021cb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  2 06:35:54 np0005542249 nova_compute[254900]: 2025-12-02 11:35:54.271 254904 DEBUG nova.virt.block_device [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Updating existing volume attachment record: 492af9f3-c6b2-4943-98eb-0e1c99b63b1f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  2 06:35:54 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1877: 321 pgs: 321 active+clean; 271 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 3.0 KiB/s wr, 69 op/s
Dec  2 06:35:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:35:54 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/413146046' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:35:54 np0005542249 nova_compute[254900]: 2025-12-02 11:35:54.932 254904 DEBUG nova.network.neutron [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Successfully created port: ac73987a-aa98-40fc-a185-3eb1a23885de _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  2 06:35:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e493 do_prune osdmap full prune enabled
Dec  2 06:35:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e494 e494: 3 total, 3 up, 3 in
Dec  2 06:35:54 np0005542249 nova_compute[254900]: 2025-12-02 11:35:54.976 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:54 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e494: 3 total, 3 up, 3 in
Dec  2 06:35:55 np0005542249 nova_compute[254900]: 2025-12-02 11:35:55.274 254904 DEBUG nova.compute.manager [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  2 06:35:55 np0005542249 nova_compute[254900]: 2025-12-02 11:35:55.276 254904 DEBUG nova.virt.libvirt.driver [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  2 06:35:55 np0005542249 nova_compute[254900]: 2025-12-02 11:35:55.277 254904 INFO nova.virt.libvirt.driver [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Creating image(s)#033[00m
Dec  2 06:35:55 np0005542249 nova_compute[254900]: 2025-12-02 11:35:55.277 254904 DEBUG nova.virt.libvirt.driver [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  2 06:35:55 np0005542249 nova_compute[254900]: 2025-12-02 11:35:55.278 254904 DEBUG nova.virt.libvirt.driver [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Ensure instance console log exists: /var/lib/nova/instances/3fa2207a-fc9e-44b7-9356-3c2d1ba98e87/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  2 06:35:55 np0005542249 nova_compute[254900]: 2025-12-02 11:35:55.279 254904 DEBUG oslo_concurrency.lockutils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:35:55 np0005542249 nova_compute[254900]: 2025-12-02 11:35:55.279 254904 DEBUG oslo_concurrency.lockutils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:35:55 np0005542249 nova_compute[254900]: 2025-12-02 11:35:55.280 254904 DEBUG oslo_concurrency.lockutils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:35:55 np0005542249 adoring_mestorf[297933]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:35:55 np0005542249 adoring_mestorf[297933]: --> relative data size: 1.0
Dec  2 06:35:55 np0005542249 adoring_mestorf[297933]: --> All data devices are unavailable
Dec  2 06:35:55 np0005542249 systemd[1]: libpod-4fcbf222c6a2e88696906def2c046ae6c52ea088dfbd2021070a92634d1250d4.scope: Deactivated successfully.
Dec  2 06:35:55 np0005542249 podman[297917]: 2025-12-02 11:35:55.339490322 +0000 UTC m=+1.417814521 container died 4fcbf222c6a2e88696906def2c046ae6c52ea088dfbd2021070a92634d1250d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  2 06:35:55 np0005542249 systemd[1]: libpod-4fcbf222c6a2e88696906def2c046ae6c52ea088dfbd2021070a92634d1250d4.scope: Consumed 1.148s CPU time.
Dec  2 06:35:55 np0005542249 systemd[1]: var-lib-containers-storage-overlay-62d939741a74e5b523156f5ad6a16e6709b7d145fa2e4f9a24f48250551c509a-merged.mount: Deactivated successfully.
Dec  2 06:35:55 np0005542249 nova_compute[254900]: 2025-12-02 11:35:55.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:35:55 np0005542249 nova_compute[254900]: 2025-12-02 11:35:55.407 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:35:55 np0005542249 nova_compute[254900]: 2025-12-02 11:35:55.408 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:35:55 np0005542249 nova_compute[254900]: 2025-12-02 11:35:55.408 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:35:55 np0005542249 nova_compute[254900]: 2025-12-02 11:35:55.408 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:35:55 np0005542249 nova_compute[254900]: 2025-12-02 11:35:55.408 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:35:55 np0005542249 podman[297917]: 2025-12-02 11:35:55.418479942 +0000 UTC m=+1.496804151 container remove 4fcbf222c6a2e88696906def2c046ae6c52ea088dfbd2021070a92634d1250d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mestorf, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  2 06:35:55 np0005542249 systemd[1]: libpod-conmon-4fcbf222c6a2e88696906def2c046ae6c52ea088dfbd2021070a92634d1250d4.scope: Deactivated successfully.
Dec  2 06:35:55 np0005542249 podman[297970]: 2025-12-02 11:35:55.480487134 +0000 UTC m=+0.095053404 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  2 06:35:55 np0005542249 nova_compute[254900]: 2025-12-02 11:35:55.634 254904 DEBUG nova.network.neutron [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Successfully updated port: ac73987a-aa98-40fc-a185-3eb1a23885de _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  2 06:35:55 np0005542249 nova_compute[254900]: 2025-12-02 11:35:55.661 254904 DEBUG oslo_concurrency.lockutils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "refresh_cache-3fa2207a-fc9e-44b7-9356-3c2d1ba98e87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:35:55 np0005542249 nova_compute[254900]: 2025-12-02 11:35:55.662 254904 DEBUG oslo_concurrency.lockutils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquired lock "refresh_cache-3fa2207a-fc9e-44b7-9356-3c2d1ba98e87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:35:55 np0005542249 nova_compute[254900]: 2025-12-02 11:35:55.662 254904 DEBUG nova.network.neutron [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  2 06:35:55 np0005542249 nova_compute[254900]: 2025-12-02 11:35:55.721 254904 DEBUG nova.compute.manager [req-5e3637eb-cdff-4163-8da3-116bcbffbdae req-0a7f6a6e-db33-4251-b9b3-7360be9c7d1b 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Received event network-changed-ac73987a-aa98-40fc-a185-3eb1a23885de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:35:55 np0005542249 nova_compute[254900]: 2025-12-02 11:35:55.722 254904 DEBUG nova.compute.manager [req-5e3637eb-cdff-4163-8da3-116bcbffbdae req-0a7f6a6e-db33-4251-b9b3-7360be9c7d1b 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Refreshing instance network info cache due to event network-changed-ac73987a-aa98-40fc-a185-3eb1a23885de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  2 06:35:55 np0005542249 nova_compute[254900]: 2025-12-02 11:35:55.723 254904 DEBUG oslo_concurrency.lockutils [req-5e3637eb-cdff-4163-8da3-116bcbffbdae req-0a7f6a6e-db33-4251-b9b3-7360be9c7d1b 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-3fa2207a-fc9e-44b7-9356-3c2d1ba98e87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  2 06:35:55 np0005542249 nova_compute[254900]: 2025-12-02 11:35:55.791 254904 DEBUG nova.network.neutron [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  2 06:35:55 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:35:55 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2316143419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:35:55 np0005542249 nova_compute[254900]: 2025-12-02 11:35:55.866 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.092 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.093 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4291MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.093 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.094 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.165 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Instance 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.166 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.166 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:35:56 np0005542249 podman[298161]: 2025-12-02 11:35:56.195469008 +0000 UTC m=+0.049098435 container create f51b1be28b0f40d3d5f73d5a37c0bc263d8839f6786ba98ccce6ae4d52839858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.196 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.225 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764675341.1953208, 9579cc6b-d571-46a6-80c2-f8a0fb6b2672 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.226 254904 INFO nova.compute.manager [-] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] VM Stopped (Lifecycle Event)#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.244 254904 DEBUG nova.compute.manager [None req-79e71b3c-b8aa-4831-8a7a-8c0a8936b5df - - - - - -] [instance: 9579cc6b-d571-46a6-80c2-f8a0fb6b2672] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:35:56 np0005542249 systemd[1]: Started libpod-conmon-f51b1be28b0f40d3d5f73d5a37c0bc263d8839f6786ba98ccce6ae4d52839858.scope.
Dec  2 06:35:56 np0005542249 podman[298161]: 2025-12-02 11:35:56.174326428 +0000 UTC m=+0.027955885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:35:56 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:35:56 np0005542249 podman[298161]: 2025-12-02 11:35:56.294161941 +0000 UTC m=+0.147791388 container init f51b1be28b0f40d3d5f73d5a37c0bc263d8839f6786ba98ccce6ae4d52839858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_benz, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  2 06:35:56 np0005542249 podman[298161]: 2025-12-02 11:35:56.302774483 +0000 UTC m=+0.156403900 container start f51b1be28b0f40d3d5f73d5a37c0bc263d8839f6786ba98ccce6ae4d52839858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_benz, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  2 06:35:56 np0005542249 podman[298161]: 2025-12-02 11:35:56.306338868 +0000 UTC m=+0.159968315 container attach f51b1be28b0f40d3d5f73d5a37c0bc263d8839f6786ba98ccce6ae4d52839858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_benz, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:35:56 np0005542249 infallible_benz[298179]: 167 167
Dec  2 06:35:56 np0005542249 systemd[1]: libpod-f51b1be28b0f40d3d5f73d5a37c0bc263d8839f6786ba98ccce6ae4d52839858.scope: Deactivated successfully.
Dec  2 06:35:56 np0005542249 podman[298161]: 2025-12-02 11:35:56.312516316 +0000 UTC m=+0.166145743 container died f51b1be28b0f40d3d5f73d5a37c0bc263d8839f6786ba98ccce6ae4d52839858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:35:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e494 do_prune osdmap full prune enabled
Dec  2 06:35:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e495 e495: 3 total, 3 up, 3 in
Dec  2 06:35:56 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e495: 3 total, 3 up, 3 in
Dec  2 06:35:56 np0005542249 systemd[1]: var-lib-containers-storage-overlay-17309111e4c99d4afa40d7039c02909f5ac3b5436b5e8aa921904a4ef4609db1-merged.mount: Deactivated successfully.
Dec  2 06:35:56 np0005542249 podman[298161]: 2025-12-02 11:35:56.359821131 +0000 UTC m=+0.213450548 container remove f51b1be28b0f40d3d5f73d5a37c0bc263d8839f6786ba98ccce6ae4d52839858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_benz, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  2 06:35:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:35:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:35:56 np0005542249 systemd[1]: libpod-conmon-f51b1be28b0f40d3d5f73d5a37c0bc263d8839f6786ba98ccce6ae4d52839858.scope: Deactivated successfully.
Dec  2 06:35:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:35:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:35:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:35:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:35:56 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1880: 321 pgs: 321 active+clean; 271 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.2 KiB/s wr, 58 op/s
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.544 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:56 np0005542249 podman[298221]: 2025-12-02 11:35:56.572740624 +0000 UTC m=+0.070934864 container create 3ca0b048f3cf2e8754082b09b22301781ae423404eac8014bbae7f1d1c1f09f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_thompson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  2 06:35:56 np0005542249 systemd[1]: Started libpod-conmon-3ca0b048f3cf2e8754082b09b22301781ae423404eac8014bbae7f1d1c1f09f5.scope.
Dec  2 06:35:56 np0005542249 podman[298221]: 2025-12-02 11:35:56.543768922 +0000 UTC m=+0.041963252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:35:56 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:35:56 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0283fef55df803c24ab5510227619690dadaad1279115f4bcf6b8edd79fc12b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:35:56 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:35:56 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/864367750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:35:56 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0283fef55df803c24ab5510227619690dadaad1279115f4bcf6b8edd79fc12b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:35:56 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0283fef55df803c24ab5510227619690dadaad1279115f4bcf6b8edd79fc12b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:35:56 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0283fef55df803c24ab5510227619690dadaad1279115f4bcf6b8edd79fc12b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:35:56 np0005542249 podman[298221]: 2025-12-02 11:35:56.683369668 +0000 UTC m=+0.181563918 container init 3ca0b048f3cf2e8754082b09b22301781ae423404eac8014bbae7f1d1c1f09f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_thompson, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.685 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.691 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:35:56 np0005542249 podman[298221]: 2025-12-02 11:35:56.69718863 +0000 UTC m=+0.195382890 container start 3ca0b048f3cf2e8754082b09b22301781ae423404eac8014bbae7f1d1c1f09f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  2 06:35:56 np0005542249 podman[298221]: 2025-12-02 11:35:56.702446372 +0000 UTC m=+0.200640602 container attach 3ca0b048f3cf2e8754082b09b22301781ae423404eac8014bbae7f1d1c1f09f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_thompson, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.723 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.751 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.751 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.801 254904 DEBUG nova.network.neutron [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Updating instance_info_cache with network_info: [{"id": "ac73987a-aa98-40fc-a185-3eb1a23885de", "address": "fa:16:3e:52:3c:ac", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac73987a-aa", "ovs_interfaceid": "ac73987a-aa98-40fc-a185-3eb1a23885de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.820 254904 DEBUG oslo_concurrency.lockutils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Releasing lock "refresh_cache-3fa2207a-fc9e-44b7-9356-3c2d1ba98e87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.820 254904 DEBUG nova.compute.manager [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Instance network_info: |[{"id": "ac73987a-aa98-40fc-a185-3eb1a23885de", "address": "fa:16:3e:52:3c:ac", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac73987a-aa", "ovs_interfaceid": "ac73987a-aa98-40fc-a185-3eb1a23885de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.820 254904 DEBUG oslo_concurrency.lockutils [req-5e3637eb-cdff-4163-8da3-116bcbffbdae req-0a7f6a6e-db33-4251-b9b3-7360be9c7d1b 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-3fa2207a-fc9e-44b7-9356-3c2d1ba98e87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.820 254904 DEBUG nova.network.neutron [req-5e3637eb-cdff-4163-8da3-116bcbffbdae req-0a7f6a6e-db33-4251-b9b3-7360be9c7d1b 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Refreshing network info cache for port ac73987a-aa98-40fc-a185-3eb1a23885de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.823 254904 DEBUG nova.virt.libvirt.driver [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Start _get_guest_xml network_info=[{"id": "ac73987a-aa98-40fc-a185-3eb1a23885de", "address": "fa:16:3e:52:3c:ac", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac73987a-aa", "ovs_interfaceid": "ac73987a-aa98-40fc-a185-3eb1a23885de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-872da8d4-b22a-41b4-b607-ea71915c01b5', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '872da8d4-b22a-41b4-b607-ea71915c01b5', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '3fa2207a-fc9e-44b7-9356-3c2d1ba98e87', 'attached_at': '', 'detached_at': '', 'volume_id': '872da8d4-b22a-41b4-b607-ea71915c01b5', 'serial': '872da8d4-b22a-41b4-b607-ea71915c01b5'}, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'attachment_id': '492af9f3-c6b2-4943-98eb-0e1c99b63b1f', 'delete_on_termination': False, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.827 254904 WARNING nova.virt.libvirt.driver [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.833 254904 DEBUG nova.virt.libvirt.host [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.833 254904 DEBUG nova.virt.libvirt.host [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.841 254904 DEBUG nova.virt.libvirt.host [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.841 254904 DEBUG nova.virt.libvirt.host [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.842 254904 DEBUG nova.virt.libvirt.driver [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.842 254904 DEBUG nova.virt.hardware [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T11:15:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='72ab1b76-57b9-4154-af5d-d44eef11ba44',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.842 254904 DEBUG nova.virt.hardware [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.843 254904 DEBUG nova.virt.hardware [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.843 254904 DEBUG nova.virt.hardware [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.843 254904 DEBUG nova.virt.hardware [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.843 254904 DEBUG nova.virt.hardware [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.843 254904 DEBUG nova.virt.hardware [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.843 254904 DEBUG nova.virt.hardware [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.844 254904 DEBUG nova.virt.hardware [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.844 254904 DEBUG nova.virt.hardware [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.844 254904 DEBUG nova.virt.hardware [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.878 254904 DEBUG nova.storage.rbd_utils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] rbd image 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:35:56 np0005542249 nova_compute[254900]: 2025-12-02 11:35:56.889 254904 DEBUG oslo_concurrency.processutils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:35:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  2 06:35:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2724305562' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.379 254904 DEBUG oslo_concurrency.processutils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]: {
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:    "0": [
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:        {
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "devices": [
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "/dev/loop3"
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            ],
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "lv_name": "ceph_lv0",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "lv_size": "21470642176",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "name": "ceph_lv0",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "tags": {
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.cluster_name": "ceph",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.crush_device_class": "",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.encrypted": "0",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.osd_id": "0",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.type": "block",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.vdo": "0"
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            },
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "type": "block",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "vg_name": "ceph_vg0"
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:        }
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:    ],
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:    "1": [
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:        {
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "devices": [
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "/dev/loop4"
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            ],
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "lv_name": "ceph_lv1",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "lv_size": "21470642176",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "name": "ceph_lv1",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "tags": {
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.cluster_name": "ceph",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.crush_device_class": "",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.encrypted": "0",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.osd_id": "1",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.type": "block",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.vdo": "0"
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            },
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "type": "block",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "vg_name": "ceph_vg1"
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:        }
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:    ],
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:    "2": [
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:        {
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "devices": [
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "/dev/loop5"
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            ],
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "lv_name": "ceph_lv2",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "lv_size": "21470642176",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "name": "ceph_lv2",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "tags": {
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.cluster_name": "ceph",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.crush_device_class": "",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.encrypted": "0",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.osd_id": "2",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.type": "block",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:                "ceph.vdo": "0"
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            },
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "type": "block",
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:            "vg_name": "ceph_vg2"
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:        }
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]:    ]
Dec  2 06:35:57 np0005542249 wonderful_thompson[298238]: }
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.532 254904 DEBUG os_brick.encryptors [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Using volume encryption metadata '{'encryption_key_id': '92eae2ca-7c27-4e36-afea-dc9a87747409', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-872da8d4-b22a-41b4-b607-ea71915c01b5', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '872da8d4-b22a-41b4-b607-ea71915c01b5', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '3fa2207a-fc9e-44b7-9356-3c2d1ba98e87', 'attached_at': '', 'detached_at': '', 'volume_id': '872da8d4-b22a-41b4-b607-ea71915c01b5', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Dec  2 06:35:57 np0005542249 systemd[1]: libpod-3ca0b048f3cf2e8754082b09b22301781ae423404eac8014bbae7f1d1c1f09f5.scope: Deactivated successfully.
Dec  2 06:35:57 np0005542249 podman[298221]: 2025-12-02 11:35:57.534520654 +0000 UTC m=+1.032714884 container died 3ca0b048f3cf2e8754082b09b22301781ae423404eac8014bbae7f1d1c1f09f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.538 254904 DEBUG barbicanclient.client [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.560 254904 DEBUG barbicanclient.v1.secrets [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/92eae2ca-7c27-4e36-afea-dc9a87747409 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.561 254904 INFO barbicanclient.base [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/92eae2ca-7c27-4e36-afea-dc9a87747409#033[00m
Dec  2 06:35:57 np0005542249 systemd[1]: var-lib-containers-storage-overlay-0283fef55df803c24ab5510227619690dadaad1279115f4bcf6b8edd79fc12b2-merged.mount: Deactivated successfully.
Dec  2 06:35:57 np0005542249 podman[298221]: 2025-12-02 11:35:57.594359108 +0000 UTC m=+1.092553378 container remove 3ca0b048f3cf2e8754082b09b22301781ae423404eac8014bbae7f1d1c1f09f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_thompson, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.605 254904 DEBUG barbicanclient.client [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.605 254904 INFO barbicanclient.base [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/92eae2ca-7c27-4e36-afea-dc9a87747409#033[00m
Dec  2 06:35:57 np0005542249 systemd[1]: libpod-conmon-3ca0b048f3cf2e8754082b09b22301781ae423404eac8014bbae7f1d1c1f09f5.scope: Deactivated successfully.
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.631 254904 DEBUG barbicanclient.client [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.632 254904 INFO barbicanclient.base [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/92eae2ca-7c27-4e36-afea-dc9a87747409#033[00m
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.668 254904 DEBUG barbicanclient.client [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.669 254904 INFO barbicanclient.base [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/92eae2ca-7c27-4e36-afea-dc9a87747409#033[00m
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.697 254904 DEBUG barbicanclient.client [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.698 254904 INFO barbicanclient.base [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/92eae2ca-7c27-4e36-afea-dc9a87747409#033[00m
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.723 254904 DEBUG barbicanclient.client [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.724 254904 INFO barbicanclient.base [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/92eae2ca-7c27-4e36-afea-dc9a87747409#033[00m
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.844 254904 DEBUG barbicanclient.client [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.845 254904 INFO barbicanclient.base [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/92eae2ca-7c27-4e36-afea-dc9a87747409#033[00m
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.870 254904 DEBUG barbicanclient.client [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.870 254904 INFO barbicanclient.base [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/92eae2ca-7c27-4e36-afea-dc9a87747409#033[00m
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.895 254904 DEBUG barbicanclient.client [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.896 254904 INFO barbicanclient.base [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/92eae2ca-7c27-4e36-afea-dc9a87747409#033[00m
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.917 254904 DEBUG barbicanclient.client [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.917 254904 INFO barbicanclient.base [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/92eae2ca-7c27-4e36-afea-dc9a87747409#033[00m
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.937 254904 DEBUG barbicanclient.client [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.937 254904 INFO barbicanclient.base [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/92eae2ca-7c27-4e36-afea-dc9a87747409#033[00m
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.960 254904 DEBUG barbicanclient.client [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.960 254904 INFO barbicanclient.base [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/92eae2ca-7c27-4e36-afea-dc9a87747409#033[00m
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.981 254904 DEBUG barbicanclient.client [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:57 np0005542249 nova_compute[254900]: 2025-12-02 11:35:57.982 254904 INFO barbicanclient.base [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/92eae2ca-7c27-4e36-afea-dc9a87747409#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.003 254904 DEBUG barbicanclient.client [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.004 254904 INFO barbicanclient.base [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/92eae2ca-7c27-4e36-afea-dc9a87747409#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.026 254904 DEBUG barbicanclient.client [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.027 254904 INFO barbicanclient.base [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Calculated Secrets uuid ref: secrets/92eae2ca-7c27-4e36-afea-dc9a87747409#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.047 254904 DEBUG barbicanclient.client [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.048 254904 DEBUG nova.virt.libvirt.host [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Secret XML: <secret ephemeral="no" private="no">
Dec  2 06:35:58 np0005542249 nova_compute[254900]:  <usage type="volume">
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <volume>872da8d4-b22a-41b4-b607-ea71915c01b5</volume>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:  </usage>
Dec  2 06:35:58 np0005542249 nova_compute[254900]: </secret>
Dec  2 06:35:58 np0005542249 nova_compute[254900]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.081 254904 DEBUG nova.virt.libvirt.vif [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:35:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-828383850',display_name='tempest-TestEncryptedCinderVolumes-server-828383850',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-828383850',id=30,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK920rwaN7hvXue9KA1NjrUwvtvK954cG7ZuCXgqFd9X1K1nkCEcuVbCcgefDelXRvQjOoRaTNZtPceNbknmWJHmkwM014+LbqRvx6BhJSogI4x+qdIBG/Zp5TIVdDeUYQ==',key_name='tempest-TestEncryptedCinderVolumes-250014646',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='58574186a4fd405e83f1a4b650ea8e8c',ramdisk_id='',reservation_id='r-xjccmtck',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-337876243',owner_user_name='tempest-TestEncryptedCinderVolumes-337876243-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:35:53Z,user_data=None,user_id='a003d9cef7684ec48ed996b22c11419e',uuid=3fa2207a-fc9e-44b7-9356-3c2d1ba98e87,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac73987a-aa98-40fc-a185-3eb1a23885de", "address": "fa:16:3e:52:3c:ac", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac73987a-aa", "ovs_interfaceid": "ac73987a-aa98-40fc-a185-3eb1a23885de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.082 254904 DEBUG nova.network.os_vif_util [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Converting VIF {"id": "ac73987a-aa98-40fc-a185-3eb1a23885de", "address": "fa:16:3e:52:3c:ac", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac73987a-aa", "ovs_interfaceid": "ac73987a-aa98-40fc-a185-3eb1a23885de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.083 254904 DEBUG nova.network.os_vif_util [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:3c:ac,bridge_name='br-int',has_traffic_filtering=True,id=ac73987a-aa98-40fc-a185-3eb1a23885de,network=Network(28b69a92-5b45-421b-9985-afeebc6820aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac73987a-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.085 254904 DEBUG nova.objects.instance [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lazy-loading 'pci_devices' on Instance uuid 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.098 254904 DEBUG nova.virt.libvirt.driver [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] End _get_guest_xml xml=<domain type="kvm">
Dec  2 06:35:58 np0005542249 nova_compute[254900]:  <uuid>3fa2207a-fc9e-44b7-9356-3c2d1ba98e87</uuid>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:  <name>instance-0000001e</name>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:  <memory>131072</memory>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:  <vcpu>1</vcpu>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:  <metadata>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-828383850</nova:name>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <nova:creationTime>2025-12-02 11:35:56</nova:creationTime>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <nova:flavor name="m1.nano">
Dec  2 06:35:58 np0005542249 nova_compute[254900]:        <nova:memory>128</nova:memory>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:        <nova:disk>1</nova:disk>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:        <nova:swap>0</nova:swap>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:        <nova:ephemeral>0</nova:ephemeral>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:        <nova:vcpus>1</nova:vcpus>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      </nova:flavor>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <nova:owner>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:        <nova:user uuid="a003d9cef7684ec48ed996b22c11419e">tempest-TestEncryptedCinderVolumes-337876243-project-member</nova:user>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:        <nova:project uuid="58574186a4fd405e83f1a4b650ea8e8c">tempest-TestEncryptedCinderVolumes-337876243</nova:project>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      </nova:owner>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <nova:ports>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:        <nova:port uuid="ac73987a-aa98-40fc-a185-3eb1a23885de">
Dec  2 06:35:58 np0005542249 nova_compute[254900]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:        </nova:port>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      </nova:ports>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    </nova:instance>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:  </metadata>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:  <sysinfo type="smbios">
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <system>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <entry name="manufacturer">RDO</entry>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <entry name="product">OpenStack Compute</entry>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <entry name="serial">3fa2207a-fc9e-44b7-9356-3c2d1ba98e87</entry>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <entry name="uuid">3fa2207a-fc9e-44b7-9356-3c2d1ba98e87</entry>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <entry name="family">Virtual Machine</entry>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    </system>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:  </sysinfo>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:  <os>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <boot dev="hd"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <smbios mode="sysinfo"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:  </os>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:  <features>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <acpi/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <apic/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <vmcoreinfo/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:  </features>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:  <clock offset="utc">
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <timer name="pit" tickpolicy="delay"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <timer name="hpet" present="no"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:  </clock>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:  <cpu mode="host-model" match="exact">
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <topology sockets="1" cores="1" threads="1"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:  </cpu>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:  <devices>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <disk type="network" device="cdrom">
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <driver type="raw" cache="none"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="vms/3fa2207a-fc9e-44b7-9356-3c2d1ba98e87_disk.config">
Dec  2 06:35:58 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:35:58 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <target dev="sda" bus="sata"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <disk type="network" device="disk">
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <source protocol="rbd" name="volumes/volume-872da8d4-b22a-41b4-b607-ea71915c01b5">
Dec  2 06:35:58 np0005542249 nova_compute[254900]:        <host name="192.168.122.100" port="6789"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      </source>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <auth username="openstack">
Dec  2 06:35:58 np0005542249 nova_compute[254900]:        <secret type="ceph" uuid="95bc4eaa-1a14-59bf-acf2-4b3da055547d"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      </auth>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <target dev="vda" bus="virtio"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <serial>872da8d4-b22a-41b4-b607-ea71915c01b5</serial>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <encryption format="luks">
Dec  2 06:35:58 np0005542249 nova_compute[254900]:        <secret type="passphrase" uuid="6362a8ab-ae0b-40f2-9d86-1b807891230b"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      </encryption>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    </disk>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <interface type="ethernet">
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <mac address="fa:16:3e:52:3c:ac"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <driver name="vhost" rx_queue_size="512"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <mtu size="1442"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <target dev="tapac73987a-aa"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    </interface>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <serial type="pty">
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <log file="/var/lib/nova/instances/3fa2207a-fc9e-44b7-9356-3c2d1ba98e87/console.log" append="off"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    </serial>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <video>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <model type="virtio"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    </video>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <input type="tablet" bus="usb"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <rng model="virtio">
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <backend model="random">/dev/urandom</backend>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    </rng>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <controller type="pci" model="pcie-root-port"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <controller type="usb" index="0"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    <memballoon model="virtio">
Dec  2 06:35:58 np0005542249 nova_compute[254900]:      <stats period="10"/>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:    </memballoon>
Dec  2 06:35:58 np0005542249 nova_compute[254900]:  </devices>
Dec  2 06:35:58 np0005542249 nova_compute[254900]: </domain>
Dec  2 06:35:58 np0005542249 nova_compute[254900]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.100 254904 DEBUG nova.compute.manager [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Preparing to wait for external event network-vif-plugged-ac73987a-aa98-40fc-a185-3eb1a23885de prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.100 254904 DEBUG oslo_concurrency.lockutils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "3fa2207a-fc9e-44b7-9356-3c2d1ba98e87-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.100 254904 DEBUG oslo_concurrency.lockutils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "3fa2207a-fc9e-44b7-9356-3c2d1ba98e87-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.101 254904 DEBUG oslo_concurrency.lockutils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "3fa2207a-fc9e-44b7-9356-3c2d1ba98e87-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.101 254904 DEBUG nova.virt.libvirt.vif [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T11:35:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-828383850',display_name='tempest-TestEncryptedCinderVolumes-server-828383850',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-828383850',id=30,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK920rwaN7hvXue9KA1NjrUwvtvK954cG7ZuCXgqFd9X1K1nkCEcuVbCcgefDelXRvQjOoRaTNZtPceNbknmWJHmkwM014+LbqRvx6BhJSogI4x+qdIBG/Zp5TIVdDeUYQ==',key_name='tempest-TestEncryptedCinderVolumes-250014646',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='58574186a4fd405e83f1a4b650ea8e8c',ramdisk_id='',reservation_id='r-xjccmtck',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-337876243',owner_user_name='tempest-TestEncryptedCinderVolumes-337876243-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T11:35:53Z,user_data=None,user_id='a003d9cef7684ec48ed996b22c11419e',uuid=3fa2207a-fc9e-44b7-9356-3c2d1ba98e87,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac73987a-aa98-40fc-a185-3eb1a23885de", "address": "fa:16:3e:52:3c:ac", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac73987a-aa", "ovs_interfaceid": "ac73987a-aa98-40fc-a185-3eb1a23885de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.102 254904 DEBUG nova.network.os_vif_util [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Converting VIF {"id": "ac73987a-aa98-40fc-a185-3eb1a23885de", "address": "fa:16:3e:52:3c:ac", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac73987a-aa", "ovs_interfaceid": "ac73987a-aa98-40fc-a185-3eb1a23885de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.102 254904 DEBUG nova.network.os_vif_util [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:3c:ac,bridge_name='br-int',has_traffic_filtering=True,id=ac73987a-aa98-40fc-a185-3eb1a23885de,network=Network(28b69a92-5b45-421b-9985-afeebc6820aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac73987a-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.103 254904 DEBUG os_vif [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:3c:ac,bridge_name='br-int',has_traffic_filtering=True,id=ac73987a-aa98-40fc-a185-3eb1a23885de,network=Network(28b69a92-5b45-421b-9985-afeebc6820aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac73987a-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.103 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.104 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.105 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.109 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.110 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac73987a-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.111 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac73987a-aa, col_values=(('external_ids', {'iface-id': 'ac73987a-aa98-40fc-a185-3eb1a23885de', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:52:3c:ac', 'vm-uuid': '3fa2207a-fc9e-44b7-9356-3c2d1ba98e87'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.113 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:58 np0005542249 NetworkManager[48987]: <info>  [1764675358.1150] manager: (tapac73987a-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/146)
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.116 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.118 254904 DEBUG nova.network.neutron [req-5e3637eb-cdff-4163-8da3-116bcbffbdae req-0a7f6a6e-db33-4251-b9b3-7360be9c7d1b 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Updated VIF entry in instance network info cache for port ac73987a-aa98-40fc-a185-3eb1a23885de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.118 254904 DEBUG nova.network.neutron [req-5e3637eb-cdff-4163-8da3-116bcbffbdae req-0a7f6a6e-db33-4251-b9b3-7360be9c7d1b 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Updating instance_info_cache with network_info: [{"id": "ac73987a-aa98-40fc-a185-3eb1a23885de", "address": "fa:16:3e:52:3c:ac", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac73987a-aa", "ovs_interfaceid": "ac73987a-aa98-40fc-a185-3eb1a23885de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.122 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.123 254904 INFO os_vif [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:3c:ac,bridge_name='br-int',has_traffic_filtering=True,id=ac73987a-aa98-40fc-a185-3eb1a23885de,network=Network(28b69a92-5b45-421b-9985-afeebc6820aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac73987a-aa')#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.133 254904 DEBUG oslo_concurrency.lockutils [req-5e3637eb-cdff-4163-8da3-116bcbffbdae req-0a7f6a6e-db33-4251-b9b3-7360be9c7d1b 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-3fa2207a-fc9e-44b7-9356-3c2d1ba98e87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  2 06:35:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:35:58 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1511798294' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:35:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:35:58 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1511798294' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.172 254904 DEBUG nova.virt.libvirt.driver [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.173 254904 DEBUG nova.virt.libvirt.driver [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.173 254904 DEBUG nova.virt.libvirt.driver [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] No VIF found with MAC fa:16:3e:52:3c:ac, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.173 254904 INFO nova.virt.libvirt.driver [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Using config drive#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.196 254904 DEBUG nova.storage.rbd_utils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] rbd image 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:35:58 np0005542249 podman[298461]: 2025-12-02 11:35:58.374251842 +0000 UTC m=+0.049900176 container create 93764c1fc2b84fd50a6b8f2ff314dcc9d236d3d7803773221876861b1c7817e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cartwright, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:35:58 np0005542249 systemd[1]: Started libpod-conmon-93764c1fc2b84fd50a6b8f2ff314dcc9d236d3d7803773221876861b1c7817e7.scope.
Dec  2 06:35:58 np0005542249 podman[298461]: 2025-12-02 11:35:58.353455571 +0000 UTC m=+0.029103955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:35:58 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:35:58 np0005542249 podman[298461]: 2025-12-02 11:35:58.465951056 +0000 UTC m=+0.141599390 container init 93764c1fc2b84fd50a6b8f2ff314dcc9d236d3d7803773221876861b1c7817e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cartwright, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec  2 06:35:58 np0005542249 podman[298461]: 2025-12-02 11:35:58.474846356 +0000 UTC m=+0.150494690 container start 93764c1fc2b84fd50a6b8f2ff314dcc9d236d3d7803773221876861b1c7817e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cartwright, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  2 06:35:58 np0005542249 podman[298461]: 2025-12-02 11:35:58.477917758 +0000 UTC m=+0.153566092 container attach 93764c1fc2b84fd50a6b8f2ff314dcc9d236d3d7803773221876861b1c7817e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  2 06:35:58 np0005542249 peaceful_cartwright[298478]: 167 167
Dec  2 06:35:58 np0005542249 systemd[1]: libpod-93764c1fc2b84fd50a6b8f2ff314dcc9d236d3d7803773221876861b1c7817e7.scope: Deactivated successfully.
Dec  2 06:35:58 np0005542249 podman[298461]: 2025-12-02 11:35:58.482982175 +0000 UTC m=+0.158630509 container died 93764c1fc2b84fd50a6b8f2ff314dcc9d236d3d7803773221876861b1c7817e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cartwright, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  2 06:35:58 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1881: 321 pgs: 321 active+clean; 271 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.8 KiB/s wr, 41 op/s
Dec  2 06:35:58 np0005542249 systemd[1]: var-lib-containers-storage-overlay-106326d4dbcafd15f8a7b48a6139b0bc5e20ac0ec8db00cf5216d98a91328ec0-merged.mount: Deactivated successfully.
Dec  2 06:35:58 np0005542249 podman[298461]: 2025-12-02 11:35:58.52653202 +0000 UTC m=+0.202180354 container remove 93764c1fc2b84fd50a6b8f2ff314dcc9d236d3d7803773221876861b1c7817e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  2 06:35:58 np0005542249 systemd[1]: libpod-conmon-93764c1fc2b84fd50a6b8f2ff314dcc9d236d3d7803773221876861b1c7817e7.scope: Deactivated successfully.
Dec  2 06:35:58 np0005542249 podman[298502]: 2025-12-02 11:35:58.714619102 +0000 UTC m=+0.052242069 container create 032739de6bea84b5079ee3b4c8b71f88e808802aa895395888a4d6dc6667f0f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galois, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  2 06:35:58 np0005542249 systemd[1]: Started libpod-conmon-032739de6bea84b5079ee3b4c8b71f88e808802aa895395888a4d6dc6667f0f5.scope.
Dec  2 06:35:58 np0005542249 podman[298502]: 2025-12-02 11:35:58.69263243 +0000 UTC m=+0.030255387 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:35:58 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:35:58 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/866a25f879cbbe360cb78bcc90c3b4492601e6f051d02984ebd3e7c413666450/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:35:58 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/866a25f879cbbe360cb78bcc90c3b4492601e6f051d02984ebd3e7c413666450/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:35:58 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/866a25f879cbbe360cb78bcc90c3b4492601e6f051d02984ebd3e7c413666450/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:35:58 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/866a25f879cbbe360cb78bcc90c3b4492601e6f051d02984ebd3e7c413666450/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:35:58 np0005542249 podman[298502]: 2025-12-02 11:35:58.81609581 +0000 UTC m=+0.153718767 container init 032739de6bea84b5079ee3b4c8b71f88e808802aa895395888a4d6dc6667f0f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galois, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  2 06:35:58 np0005542249 podman[298502]: 2025-12-02 11:35:58.82722698 +0000 UTC m=+0.164849957 container start 032739de6bea84b5079ee3b4c8b71f88e808802aa895395888a4d6dc6667f0f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:35:58 np0005542249 podman[298502]: 2025-12-02 11:35:58.831952217 +0000 UTC m=+0.169575174 container attach 032739de6bea84b5079ee3b4c8b71f88e808802aa895395888a4d6dc6667f0f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galois, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.830 254904 INFO nova.virt.libvirt.driver [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Creating config drive at /var/lib/nova/instances/3fa2207a-fc9e-44b7-9356-3c2d1ba98e87/disk.config#033[00m
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.836 254904 DEBUG oslo_concurrency.processutils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3fa2207a-fc9e-44b7-9356-3c2d1ba98e87/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpky6z3s_t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:35:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:35:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e495 do_prune osdmap full prune enabled
Dec  2 06:35:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e496 e496: 3 total, 3 up, 3 in
Dec  2 06:35:58 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e496: 3 total, 3 up, 3 in
Dec  2 06:35:58 np0005542249 nova_compute[254900]: 2025-12-02 11:35:58.979 254904 DEBUG oslo_concurrency.processutils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3fa2207a-fc9e-44b7-9356-3c2d1ba98e87/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpky6z3s_t" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:35:59 np0005542249 nova_compute[254900]: 2025-12-02 11:35:59.018 254904 DEBUG nova.storage.rbd_utils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] rbd image 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  2 06:35:59 np0005542249 nova_compute[254900]: 2025-12-02 11:35:59.024 254904 DEBUG oslo_concurrency.processutils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3fa2207a-fc9e-44b7-9356-3c2d1ba98e87/disk.config 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:35:59 np0005542249 nova_compute[254900]: 2025-12-02 11:35:59.196 254904 DEBUG oslo_concurrency.processutils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3fa2207a-fc9e-44b7-9356-3c2d1ba98e87/disk.config 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:35:59 np0005542249 nova_compute[254900]: 2025-12-02 11:35:59.198 254904 INFO nova.virt.libvirt.driver [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Deleting local config drive /var/lib/nova/instances/3fa2207a-fc9e-44b7-9356-3c2d1ba98e87/disk.config because it was imported into RBD.#033[00m
Dec  2 06:35:59 np0005542249 kernel: tapac73987a-aa: entered promiscuous mode
Dec  2 06:35:59 np0005542249 NetworkManager[48987]: <info>  [1764675359.2845] manager: (tapac73987a-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/147)
Dec  2 06:35:59 np0005542249 ovn_controller[153849]: 2025-12-02T11:35:59Z|00279|binding|INFO|Claiming lport ac73987a-aa98-40fc-a185-3eb1a23885de for this chassis.
Dec  2 06:35:59 np0005542249 ovn_controller[153849]: 2025-12-02T11:35:59Z|00280|binding|INFO|ac73987a-aa98-40fc-a185-3eb1a23885de: Claiming fa:16:3e:52:3c:ac 10.100.0.3
Dec  2 06:35:59 np0005542249 nova_compute[254900]: 2025-12-02 11:35:59.287 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.295 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:3c:ac 10.100.0.3'], port_security=['fa:16:3e:52:3c:ac 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3fa2207a-fc9e-44b7-9356-3c2d1ba98e87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28b69a92-5b45-421b-9985-afeebc6820aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58574186a4fd405e83f1a4b650ea8e8c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4140d653-003b-4989-8de5-3e8bad7f5c96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81aeb855-c9bf-4f95-90d1-85f514f075e1, chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=ac73987a-aa98-40fc-a185-3eb1a23885de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.296 163757 INFO neutron.agent.ovn.metadata.agent [-] Port ac73987a-aa98-40fc-a185-3eb1a23885de in datapath 28b69a92-5b45-421b-9985-afeebc6820aa bound to our chassis#033[00m
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.297 163757 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 28b69a92-5b45-421b-9985-afeebc6820aa#033[00m
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.317 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[06f42a90-408d-4bd2-aa05-786f00d4ac65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.318 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap28b69a92-51 in ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.322 262398 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap28b69a92-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.323 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[7f66e5d6-c690-4858-aa7a-1b0fe918e602]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:59 np0005542249 systemd-udevd[298576]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.331 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[58ac8360-f3cc-4870-8001-02f4d59408e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:59 np0005542249 ovn_controller[153849]: 2025-12-02T11:35:59Z|00281|binding|INFO|Setting lport ac73987a-aa98-40fc-a185-3eb1a23885de ovn-installed in OVS
Dec  2 06:35:59 np0005542249 ovn_controller[153849]: 2025-12-02T11:35:59Z|00282|binding|INFO|Setting lport ac73987a-aa98-40fc-a185-3eb1a23885de up in Southbound
Dec  2 06:35:59 np0005542249 systemd-machined[216222]: New machine qemu-30-instance-0000001e.
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.347 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[33cbb670-043c-4510-873a-ec70106de991]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:59 np0005542249 NetworkManager[48987]: <info>  [1764675359.3887] device (tapac73987a-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 06:35:59 np0005542249 NetworkManager[48987]: <info>  [1764675359.3899] device (tapac73987a-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  2 06:35:59 np0005542249 systemd[1]: Started Virtual Machine qemu-30-instance-0000001e.
Dec  2 06:35:59 np0005542249 nova_compute[254900]: 2025-12-02 11:35:59.395 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.413 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[930e835c-79fc-4d72-93ca-81c34d683c58]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.463 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[cb7ce2f8-e418-45a5-bb4b-b83e9661ccee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:59 np0005542249 NetworkManager[48987]: <info>  [1764675359.4749] manager: (tap28b69a92-50): new Veth device (/org/freedesktop/NetworkManager/Devices/148)
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.476 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[9b9e33dc-e2c0-4f6f-8449-c01703c520fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.509 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[824b714c-70e6-41da-9bc0-a6edbf3b6423]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.513 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[a36e2a4b-3302-4077-872c-87d7ecf2db9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:59 np0005542249 NetworkManager[48987]: <info>  [1764675359.5428] device (tap28b69a92-50): carrier: link connected
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.554 262581 DEBUG oslo.privsep.daemon [-] privsep: reply[a8b65327-fc62-4b75-bde5-90996654b2e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.582 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc5441a-7683-4a6e-a2ce-716804a25a24]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28b69a92-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:96:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555236, 'reachable_time': 16776, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298609, 'error': None, 'target': 'ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.604 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[76f37fd2-1bdf-4186-8198-003190baa49d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1f:96b2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 555236, 'tstamp': 555236}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298610, 'error': None, 'target': 'ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.628 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[b7dda275-0ded-4e2f-8ea5-8d90e1d09e4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28b69a92-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:96:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555236, 'reachable_time': 16776, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298613, 'error': None, 'target': 'ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.671 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[dcbfa3b6-e36b-4568-b6d2-22b637439f83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:59 np0005542249 nova_compute[254900]: 2025-12-02 11:35:59.747 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:35:59 np0005542249 nova_compute[254900]: 2025-12-02 11:35:59.748 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.761 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[ad4c31bd-b9cb-4e3d-ba73-47973606b779]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.763 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28b69a92-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.764 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.764 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28b69a92-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:35:59 np0005542249 nova_compute[254900]: 2025-12-02 11:35:59.766 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:59 np0005542249 NetworkManager[48987]: <info>  [1764675359.7674] manager: (tap28b69a92-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/149)
Dec  2 06:35:59 np0005542249 kernel: tap28b69a92-50: entered promiscuous mode
Dec  2 06:35:59 np0005542249 nova_compute[254900]: 2025-12-02 11:35:59.768 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:59 np0005542249 nova_compute[254900]: 2025-12-02 11:35:59.769 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.770 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap28b69a92-50, col_values=(('external_ids', {'iface-id': '82e6ca5f-5089-4718-9fe8-4d0d719de187'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:35:59 np0005542249 nova_compute[254900]: 2025-12-02 11:35:59.771 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:59 np0005542249 ovn_controller[153849]: 2025-12-02T11:35:59Z|00283|binding|INFO|Releasing lport 82e6ca5f-5089-4718-9fe8-4d0d719de187 from this chassis (sb_readonly=0)
Dec  2 06:35:59 np0005542249 nova_compute[254900]: 2025-12-02 11:35:59.792 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.793 163757 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/28b69a92-5b45-421b-9985-afeebc6820aa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/28b69a92-5b45-421b-9985-afeebc6820aa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.794 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[6eab5a55-c053-4d97-a865-81f1e43a8aa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.795 163757 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: global
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]:    log         /dev/log local0 debug
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]:    log-tag     haproxy-metadata-proxy-28b69a92-5b45-421b-9985-afeebc6820aa
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]:    user        root
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]:    group       root
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]:    maxconn     1024
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]:    pidfile     /var/lib/neutron/external/pids/28b69a92-5b45-421b-9985-afeebc6820aa.pid.haproxy
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]:    daemon
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: defaults
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]:    log global
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]:    mode http
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]:    option httplog
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]:    option dontlognull
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]:    option http-server-close
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]:    option forwardfor
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]:    retries                 3
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]:    timeout http-request    30s
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]:    timeout connect         30s
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]:    timeout client          32s
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]:    timeout server          32s
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]:    timeout http-keep-alive 30s
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: listen listener
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]:    bind 169.254.169.254:80
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]:    server metadata /var/lib/neutron/metadata_proxy
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]:    http-request add-header X-OVN-Network-ID 28b69a92-5b45-421b-9985-afeebc6820aa
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  2 06:35:59 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:35:59.797 163757 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa', 'env', 'PROCESS_TAG=haproxy-28b69a92-5b45-421b-9985-afeebc6820aa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/28b69a92-5b45-421b-9985-afeebc6820aa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  2 06:35:59 np0005542249 nova_compute[254900]: 2025-12-02 11:35:59.916 254904 DEBUG nova.compute.manager [req-1c4ada47-c380-4aab-bcd2-abee4689290b req-233dfa64-909d-4b42-8e1c-50d2865d1358 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Received event network-vif-plugged-ac73987a-aa98-40fc-a185-3eb1a23885de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:35:59 np0005542249 nova_compute[254900]: 2025-12-02 11:35:59.917 254904 DEBUG oslo_concurrency.lockutils [req-1c4ada47-c380-4aab-bcd2-abee4689290b req-233dfa64-909d-4b42-8e1c-50d2865d1358 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "3fa2207a-fc9e-44b7-9356-3c2d1ba98e87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:35:59 np0005542249 nova_compute[254900]: 2025-12-02 11:35:59.917 254904 DEBUG oslo_concurrency.lockutils [req-1c4ada47-c380-4aab-bcd2-abee4689290b req-233dfa64-909d-4b42-8e1c-50d2865d1358 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "3fa2207a-fc9e-44b7-9356-3c2d1ba98e87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:35:59 np0005542249 nova_compute[254900]: 2025-12-02 11:35:59.917 254904 DEBUG oslo_concurrency.lockutils [req-1c4ada47-c380-4aab-bcd2-abee4689290b req-233dfa64-909d-4b42-8e1c-50d2865d1358 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "3fa2207a-fc9e-44b7-9356-3c2d1ba98e87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:35:59 np0005542249 nova_compute[254900]: 2025-12-02 11:35:59.918 254904 DEBUG nova.compute.manager [req-1c4ada47-c380-4aab-bcd2-abee4689290b req-233dfa64-909d-4b42-8e1c-50d2865d1358 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Processing event network-vif-plugged-ac73987a-aa98-40fc-a185-3eb1a23885de _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  2 06:35:59 np0005542249 friendly_galois[298518]: {
Dec  2 06:35:59 np0005542249 friendly_galois[298518]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:35:59 np0005542249 friendly_galois[298518]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:35:59 np0005542249 friendly_galois[298518]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:35:59 np0005542249 friendly_galois[298518]:        "osd_id": 0,
Dec  2 06:35:59 np0005542249 friendly_galois[298518]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:35:59 np0005542249 friendly_galois[298518]:        "type": "bluestore"
Dec  2 06:35:59 np0005542249 friendly_galois[298518]:    },
Dec  2 06:35:59 np0005542249 friendly_galois[298518]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:35:59 np0005542249 friendly_galois[298518]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:35:59 np0005542249 friendly_galois[298518]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:35:59 np0005542249 friendly_galois[298518]:        "osd_id": 2,
Dec  2 06:35:59 np0005542249 friendly_galois[298518]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:35:59 np0005542249 friendly_galois[298518]:        "type": "bluestore"
Dec  2 06:35:59 np0005542249 friendly_galois[298518]:    },
Dec  2 06:35:59 np0005542249 friendly_galois[298518]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:35:59 np0005542249 friendly_galois[298518]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:35:59 np0005542249 friendly_galois[298518]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:35:59 np0005542249 friendly_galois[298518]:        "osd_id": 1,
Dec  2 06:35:59 np0005542249 friendly_galois[298518]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:35:59 np0005542249 friendly_galois[298518]:        "type": "bluestore"
Dec  2 06:35:59 np0005542249 friendly_galois[298518]:    }
Dec  2 06:35:59 np0005542249 friendly_galois[298518]: }
Dec  2 06:35:59 np0005542249 systemd[1]: libpod-032739de6bea84b5079ee3b4c8b71f88e808802aa895395888a4d6dc6667f0f5.scope: Deactivated successfully.
Dec  2 06:35:59 np0005542249 systemd[1]: libpod-032739de6bea84b5079ee3b4c8b71f88e808802aa895395888a4d6dc6667f0f5.scope: Consumed 1.097s CPU time.
Dec  2 06:35:59 np0005542249 podman[298502]: 2025-12-02 11:35:59.962150699 +0000 UTC m=+1.299773636 container died 032739de6bea84b5079ee3b4c8b71f88e808802aa895395888a4d6dc6667f0f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galois, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  2 06:35:59 np0005542249 nova_compute[254900]: 2025-12-02 11:35:59.980 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:35:59 np0005542249 systemd[1]: var-lib-containers-storage-overlay-866a25f879cbbe360cb78bcc90c3b4492601e6f051d02984ebd3e7c413666450-merged.mount: Deactivated successfully.
Dec  2 06:36:00 np0005542249 podman[298502]: 2025-12-02 11:36:00.023088653 +0000 UTC m=+1.360711590 container remove 032739de6bea84b5079ee3b4c8b71f88e808802aa895395888a4d6dc6667f0f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:36:00 np0005542249 systemd[1]: libpod-conmon-032739de6bea84b5079ee3b4c8b71f88e808802aa895395888a4d6dc6667f0f5.scope: Deactivated successfully.
Dec  2 06:36:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:36:00 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:36:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:36:00 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:36:00 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 15cda802-aa47-4505-834e-de93f2c3acc7 does not exist
Dec  2 06:36:00 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 857fe922-4107-4fb0-9097-0d184402f9ee does not exist
Dec  2 06:36:00 np0005542249 podman[298741]: 2025-12-02 11:36:00.206578912 +0000 UTC m=+0.058072617 container create 83ce492a8a5ae5af28223973e0a23ce20cb34ab4b8346c01ad14b0b9d4823ece (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  2 06:36:00 np0005542249 systemd[1]: Started libpod-conmon-83ce492a8a5ae5af28223973e0a23ce20cb34ab4b8346c01ad14b0b9d4823ece.scope.
Dec  2 06:36:00 np0005542249 podman[298741]: 2025-12-02 11:36:00.178055832 +0000 UTC m=+0.029549567 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  2 06:36:00 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:36:00 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02900975817dbd27d95979ce9a687c1687233066e425e79bca6753adbc660e16/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  2 06:36:00 np0005542249 podman[298741]: 2025-12-02 11:36:00.304285888 +0000 UTC m=+0.155779613 container init 83ce492a8a5ae5af28223973e0a23ce20cb34ab4b8346c01ad14b0b9d4823ece (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec  2 06:36:00 np0005542249 podman[298741]: 2025-12-02 11:36:00.31180129 +0000 UTC m=+0.163295005 container start 83ce492a8a5ae5af28223973e0a23ce20cb34ab4b8346c01ad14b0b9d4823ece (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec  2 06:36:00 np0005542249 neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa[298782]: [NOTICE]   (298788) : New worker (298790) forked
Dec  2 06:36:00 np0005542249 neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa[298782]: [NOTICE]   (298788) : Loading success.
Dec  2 06:36:00 np0005542249 nova_compute[254900]: 2025-12-02 11:36:00.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:36:00 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1883: 321 pgs: 321 active+clean; 271 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.7 KiB/s wr, 35 op/s
Dec  2 06:36:01 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:36:01 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:36:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e496 do_prune osdmap full prune enabled
Dec  2 06:36:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e497 e497: 3 total, 3 up, 3 in
Dec  2 06:36:01 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e497: 3 total, 3 up, 3 in
Dec  2 06:36:01 np0005542249 nova_compute[254900]: 2025-12-02 11:36:01.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.027 254904 DEBUG nova.compute.manager [req-edbe511d-4a43-4e8a-855e-13c8ca2da2dc req-6ace4106-a519-4124-b11d-18ceaff599c5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Received event network-vif-plugged-ac73987a-aa98-40fc-a185-3eb1a23885de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.027 254904 DEBUG oslo_concurrency.lockutils [req-edbe511d-4a43-4e8a-855e-13c8ca2da2dc req-6ace4106-a519-4124-b11d-18ceaff599c5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "3fa2207a-fc9e-44b7-9356-3c2d1ba98e87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.028 254904 DEBUG oslo_concurrency.lockutils [req-edbe511d-4a43-4e8a-855e-13c8ca2da2dc req-6ace4106-a519-4124-b11d-18ceaff599c5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "3fa2207a-fc9e-44b7-9356-3c2d1ba98e87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.028 254904 DEBUG oslo_concurrency.lockutils [req-edbe511d-4a43-4e8a-855e-13c8ca2da2dc req-6ace4106-a519-4124-b11d-18ceaff599c5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "3fa2207a-fc9e-44b7-9356-3c2d1ba98e87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.028 254904 DEBUG nova.compute.manager [req-edbe511d-4a43-4e8a-855e-13c8ca2da2dc req-6ace4106-a519-4124-b11d-18ceaff599c5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] No waiting events found dispatching network-vif-plugged-ac73987a-aa98-40fc-a185-3eb1a23885de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.029 254904 WARNING nova.compute.manager [req-edbe511d-4a43-4e8a-855e-13c8ca2da2dc req-6ace4106-a519-4124-b11d-18ceaff599c5 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Received unexpected event network-vif-plugged-ac73987a-aa98-40fc-a185-3eb1a23885de for instance with vm_state building and task_state spawning.#033[00m
Dec  2 06:36:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e497 do_prune osdmap full prune enabled
Dec  2 06:36:02 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e498 e498: 3 total, 3 up, 3 in
Dec  2 06:36:02 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e498: 3 total, 3 up, 3 in
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.292 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764675362.2917056, 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.294 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] VM Started (Lifecycle Event)#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.297 254904 DEBUG nova.compute.manager [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.303 254904 DEBUG nova.virt.libvirt.driver [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.307 254904 INFO nova.virt.libvirt.driver [-] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Instance spawned successfully.#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.308 254904 DEBUG nova.virt.libvirt.driver [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.328 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.333 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.368 254904 DEBUG nova.virt.libvirt.driver [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.369 254904 DEBUG nova.virt.libvirt.driver [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.369 254904 DEBUG nova.virt.libvirt.driver [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.370 254904 DEBUG nova.virt.libvirt.driver [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.370 254904 DEBUG nova.virt.libvirt.driver [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.370 254904 DEBUG nova.virt.libvirt.driver [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.406 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.407 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764675362.2922463, 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.407 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] VM Paused (Lifecycle Event)#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.486 254904 INFO nova.compute.manager [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Took 7.21 seconds to spawn the instance on the hypervisor.#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.487 254904 DEBUG nova.compute.manager [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.496 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.501 254904 DEBUG nova.virt.driver [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] Emitting event <LifecycleEvent: 1764675362.3008006, 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.501 254904 INFO nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] VM Resumed (Lifecycle Event)#033[00m
Dec  2 06:36:02 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1886: 321 pgs: 321 active+clean; 271 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 5.0 KiB/s wr, 87 op/s
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.555 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.560 254904 DEBUG nova.compute.manager [None req-80096570-d4df-4f21-bca6-b821706ca2ce - - - - - -] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.599 254904 INFO nova.compute.manager [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Took 9.61 seconds to build instance.
Dec  2 06:36:02 np0005542249 nova_compute[254900]: 2025-12-02 11:36:02.619 254904 DEBUG oslo_concurrency.lockutils [None req-c85bb902-c278-4cec-870c-430419e3a312 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "3fa2207a-fc9e-44b7-9356-3c2d1ba98e87" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:36:03 np0005542249 nova_compute[254900]: 2025-12-02 11:36:03.114 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:36:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:36:03 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/710646197' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:36:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:36:03 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/710646197' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:36:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e498 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:36:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e498 do_prune osdmap full prune enabled
Dec  2 06:36:03 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e499 e499: 3 total, 3 up, 3 in
Dec  2 06:36:03 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e499: 3 total, 3 up, 3 in
Dec  2 06:36:04 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1888: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 30 KiB/s wr, 146 op/s
Dec  2 06:36:04 np0005542249 nova_compute[254900]: 2025-12-02 11:36:04.981 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:36:05 np0005542249 podman[298805]: 2025-12-02 11:36:05.992417342 +0000 UTC m=+0.070894072 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  2 06:36:06 np0005542249 podman[298806]: 2025-12-02 11:36:06.053351446 +0000 UTC m=+0.126302978 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:36:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e499 do_prune osdmap full prune enabled
Dec  2 06:36:06 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e500 e500: 3 total, 3 up, 3 in
Dec  2 06:36:06 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e500: 3 total, 3 up, 3 in
Dec  2 06:36:06 np0005542249 nova_compute[254900]: 2025-12-02 11:36:06.267 254904 DEBUG nova.compute.manager [req-3c3cc504-e7d6-47e2-8aa4-33ed3dcd2daa req-673c64ec-7358-42c0-859e-ad20b5900d2b 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Received event network-changed-ac73987a-aa98-40fc-a185-3eb1a23885de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  2 06:36:06 np0005542249 nova_compute[254900]: 2025-12-02 11:36:06.268 254904 DEBUG nova.compute.manager [req-3c3cc504-e7d6-47e2-8aa4-33ed3dcd2daa req-673c64ec-7358-42c0-859e-ad20b5900d2b 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Refreshing instance network info cache due to event network-changed-ac73987a-aa98-40fc-a185-3eb1a23885de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  2 06:36:06 np0005542249 nova_compute[254900]: 2025-12-02 11:36:06.269 254904 DEBUG oslo_concurrency.lockutils [req-3c3cc504-e7d6-47e2-8aa4-33ed3dcd2daa req-673c64ec-7358-42c0-859e-ad20b5900d2b 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "refresh_cache-3fa2207a-fc9e-44b7-9356-3c2d1ba98e87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  2 06:36:06 np0005542249 nova_compute[254900]: 2025-12-02 11:36:06.270 254904 DEBUG oslo_concurrency.lockutils [req-3c3cc504-e7d6-47e2-8aa4-33ed3dcd2daa req-673c64ec-7358-42c0-859e-ad20b5900d2b 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquired lock "refresh_cache-3fa2207a-fc9e-44b7-9356-3c2d1ba98e87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  2 06:36:06 np0005542249 nova_compute[254900]: 2025-12-02 11:36:06.271 254904 DEBUG nova.network.neutron [req-3c3cc504-e7d6-47e2-8aa4-33ed3dcd2daa req-673c64ec-7358-42c0-859e-ad20b5900d2b 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Refreshing network info cache for port ac73987a-aa98-40fc-a185-3eb1a23885de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  2 06:36:06 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1890: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 30 KiB/s wr, 148 op/s
Dec  2 06:36:07 np0005542249 nova_compute[254900]: 2025-12-02 11:36:07.302 254904 DEBUG nova.network.neutron [req-3c3cc504-e7d6-47e2-8aa4-33ed3dcd2daa req-673c64ec-7358-42c0-859e-ad20b5900d2b 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Updated VIF entry in instance network info cache for port ac73987a-aa98-40fc-a185-3eb1a23885de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  2 06:36:07 np0005542249 nova_compute[254900]: 2025-12-02 11:36:07.303 254904 DEBUG nova.network.neutron [req-3c3cc504-e7d6-47e2-8aa4-33ed3dcd2daa req-673c64ec-7358-42c0-859e-ad20b5900d2b 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Updating instance_info_cache with network_info: [{"id": "ac73987a-aa98-40fc-a185-3eb1a23885de", "address": "fa:16:3e:52:3c:ac", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac73987a-aa", "ovs_interfaceid": "ac73987a-aa98-40fc-a185-3eb1a23885de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  2 06:36:07 np0005542249 nova_compute[254900]: 2025-12-02 11:36:07.329 254904 DEBUG oslo_concurrency.lockutils [req-3c3cc504-e7d6-47e2-8aa4-33ed3dcd2daa req-673c64ec-7358-42c0-859e-ad20b5900d2b 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Releasing lock "refresh_cache-3fa2207a-fc9e-44b7-9356-3c2d1ba98e87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  2 06:36:08 np0005542249 nova_compute[254900]: 2025-12-02 11:36:08.116 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:36:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e500 do_prune osdmap full prune enabled
Dec  2 06:36:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e501 e501: 3 total, 3 up, 3 in
Dec  2 06:36:08 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e501: 3 total, 3 up, 3 in
Dec  2 06:36:08 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1892: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 27 KiB/s wr, 200 op/s
Dec  2 06:36:08 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e501 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:36:10 np0005542249 nova_compute[254900]: 2025-12-02 11:36:10.024 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:36:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:36:10 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2702008038' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:36:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:36:10 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2702008038' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:36:10 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1893: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.0 KiB/s wr, 167 op/s
Dec  2 06:36:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e501 do_prune osdmap full prune enabled
Dec  2 06:36:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e502 e502: 3 total, 3 up, 3 in
Dec  2 06:36:12 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e502: 3 total, 3 up, 3 in
Dec  2 06:36:12 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1895: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.5 KiB/s wr, 155 op/s
Dec  2 06:36:13 np0005542249 nova_compute[254900]: 2025-12-02 11:36:13.119 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:36:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e502 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:36:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e502 do_prune osdmap full prune enabled
Dec  2 06:36:13 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e503 e503: 3 total, 3 up, 3 in
Dec  2 06:36:13 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e503: 3 total, 3 up, 3 in
Dec  2 06:36:14 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1897: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.4 KiB/s wr, 50 op/s
Dec  2 06:36:15 np0005542249 nova_compute[254900]: 2025-12-02 11:36:15.027 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:36:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:36:15 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2144646270' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:36:15 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:36:15 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2144646270' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:36:15 np0005542249 ovn_controller[153849]: 2025-12-02T11:36:15Z|00074|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.14 does not match offer 10.100.0.3
Dec  2 06:36:15 np0005542249 ovn_controller[153849]: 2025-12-02T11:36:15Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:52:3c:ac 10.100.0.3
Dec  2 06:36:16 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1898: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.9 KiB/s wr, 38 op/s
Dec  2 06:36:18 np0005542249 nova_compute[254900]: 2025-12-02 11:36:18.121 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:36:18 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1899: 321 pgs: 321 active+clean; 279 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 144 op/s
Dec  2 06:36:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:36:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e503 do_prune osdmap full prune enabled
Dec  2 06:36:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e504 e504: 3 total, 3 up, 3 in
Dec  2 06:36:19 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e504: 3 total, 3 up, 3 in
Dec  2 06:36:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:36:19.851 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  2 06:36:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:36:19.852 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  2 06:36:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:36:19.853 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  2 06:36:20 np0005542249 nova_compute[254900]: 2025-12-02 11:36:20.030 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:36:20 np0005542249 ovn_controller[153849]: 2025-12-02T11:36:20Z|00076|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.14 does not match offer 10.100.0.3
Dec  2 06:36:20 np0005542249 ovn_controller[153849]: 2025-12-02T11:36:20Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:52:3c:ac 10.100.0.3
Dec  2 06:36:20 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1901: 321 pgs: 321 active+clean; 283 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.5 MiB/s wr, 119 op/s
Dec  2 06:36:20 np0005542249 ovn_controller[153849]: 2025-12-02T11:36:20Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:52:3c:ac 10.100.0.3
Dec  2 06:36:20 np0005542249 ovn_controller[153849]: 2025-12-02T11:36:20Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:52:3c:ac 10.100.0.3
Dec  2 06:36:22 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1902: 321 pgs: 321 active+clean; 283 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 110 op/s
Dec  2 06:36:23 np0005542249 nova_compute[254900]: 2025-12-02 11:36:23.123 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:36:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e504 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:36:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e504 do_prune osdmap full prune enabled
Dec  2 06:36:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e505 e505: 3 total, 3 up, 3 in
Dec  2 06:36:24 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e505: 3 total, 3 up, 3 in
Dec  2 06:36:24 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1904: 321 pgs: 321 active+clean; 287 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.0 MiB/s wr, 117 op/s
Dec  2 06:36:25 np0005542249 nova_compute[254900]: 2025-12-02 11:36:25.033 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:36:26 np0005542249 podman[298857]: 2025-12-02 11:36:26.064466767 +0000 UTC m=+0.120328896 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec  2 06:36:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:36:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:36:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:36:26
Dec  2 06:36:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:36:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:36:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.log', 'volumes', 'images', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta']
Dec  2 06:36:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:36:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:36:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:36:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:36:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:36:26 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1905: 321 pgs: 321 active+clean; 287 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.0 MiB/s wr, 7 op/s
Dec  2 06:36:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:36:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:36:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:36:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:36:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:36:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:36:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:36:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:36:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:36:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:36:27 np0005542249 nova_compute[254900]: 2025-12-02 11:36:27.514 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:36:27 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:36:27.515 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:23:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:25:d0:a1:75:ed'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  2 06:36:27 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:36:27.516 163757 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  2 06:36:28 np0005542249 nova_compute[254900]: 2025-12-02 11:36:28.164 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:36:28 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1906: 321 pgs: 321 active+clean; 287 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 912 KiB/s rd, 890 KiB/s wr, 6 op/s
Dec  2 06:36:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e505 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:36:30 np0005542249 nova_compute[254900]: 2025-12-02 11:36:30.035 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:36:30 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1907: 321 pgs: 321 active+clean; 287 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 410 KiB/s rd, 429 KiB/s wr, 3 op/s
Dec  2 06:36:32 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1908: 321 pgs: 321 active+clean; 291 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 819 KiB/s rd, 537 KiB/s wr, 4 op/s
Dec  2 06:36:33 np0005542249 nova_compute[254900]: 2025-12-02 11:36:33.187 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:36:33 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:36:33.519 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4ecd1ad4-3ade-413e-b6d7-47ab2fad39ae, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  2 06:36:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e505 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:36:34 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1909: 321 pgs: 321 active+clean; 295 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 496 KiB/s rd, 917 KiB/s wr, 7 op/s
Dec  2 06:36:35 np0005542249 nova_compute[254900]: 2025-12-02 11:36:35.038 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:36:35 np0005542249 ovn_controller[153849]: 2025-12-02T11:36:35Z|00284|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec  2 06:36:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:36:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:36:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:36:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:36:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Dec  2 06:36:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:36:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0032368942094810186 of space, bias 1.0, pg target 0.9710682628443056 quantized to 32 (current 32)
Dec  2 06:36:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:36:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  2 06:36:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:36:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Dec  2 06:36:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:36:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:36:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:36:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:36:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:36:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:36:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:36:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:36:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:36:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:36:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:36:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:36:36 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1910: 321 pgs: 321 active+clean; 295 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 440 KiB/s wr, 4 op/s
Dec  2 06:36:36 np0005542249 podman[298877]: 2025-12-02 11:36:36.992084826 +0000 UTC m=+0.067438119 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  2 06:36:37 np0005542249 podman[298878]: 2025-12-02 11:36:37.032818766 +0000 UTC m=+0.103224875 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  2 06:36:38 np0005542249 nova_compute[254900]: 2025-12-02 11:36:38.190 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:36:38 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1911: 321 pgs: 321 active+clean; 295 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 440 KiB/s wr, 4 op/s
Dec  2 06:36:38 np0005542249 nova_compute[254900]: 2025-12-02 11:36:38.727 254904 DEBUG oslo_concurrency.lockutils [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "3fa2207a-fc9e-44b7-9356-3c2d1ba98e87" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:36:38 np0005542249 nova_compute[254900]: 2025-12-02 11:36:38.727 254904 DEBUG oslo_concurrency.lockutils [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "3fa2207a-fc9e-44b7-9356-3c2d1ba98e87" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:36:38 np0005542249 nova_compute[254900]: 2025-12-02 11:36:38.728 254904 DEBUG oslo_concurrency.lockutils [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "3fa2207a-fc9e-44b7-9356-3c2d1ba98e87-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:36:38 np0005542249 nova_compute[254900]: 2025-12-02 11:36:38.728 254904 DEBUG oslo_concurrency.lockutils [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "3fa2207a-fc9e-44b7-9356-3c2d1ba98e87-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:36:38 np0005542249 nova_compute[254900]: 2025-12-02 11:36:38.728 254904 DEBUG oslo_concurrency.lockutils [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "3fa2207a-fc9e-44b7-9356-3c2d1ba98e87-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:36:38 np0005542249 nova_compute[254900]: 2025-12-02 11:36:38.730 254904 INFO nova.compute.manager [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Terminating instance#033[00m
Dec  2 06:36:38 np0005542249 nova_compute[254900]: 2025-12-02 11:36:38.733 254904 DEBUG nova.compute.manager [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  2 06:36:38 np0005542249 kernel: tapac73987a-aa (unregistering): left promiscuous mode
Dec  2 06:36:38 np0005542249 NetworkManager[48987]: <info>  [1764675398.8038] device (tapac73987a-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  2 06:36:38 np0005542249 nova_compute[254900]: 2025-12-02 11:36:38.840 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:36:38 np0005542249 ovn_controller[153849]: 2025-12-02T11:36:38Z|00285|binding|INFO|Releasing lport ac73987a-aa98-40fc-a185-3eb1a23885de from this chassis (sb_readonly=0)
Dec  2 06:36:38 np0005542249 ovn_controller[153849]: 2025-12-02T11:36:38Z|00286|binding|INFO|Setting lport ac73987a-aa98-40fc-a185-3eb1a23885de down in Southbound
Dec  2 06:36:38 np0005542249 ovn_controller[153849]: 2025-12-02T11:36:38Z|00287|binding|INFO|Removing iface tapac73987a-aa ovn-installed in OVS
Dec  2 06:36:38 np0005542249 nova_compute[254900]: 2025-12-02 11:36:38.843 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:36:38 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:36:38.850 163757 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:3c:ac 10.100.0.3'], port_security=['fa:16:3e:52:3c:ac 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3fa2207a-fc9e-44b7-9356-3c2d1ba98e87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28b69a92-5b45-421b-9985-afeebc6820aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58574186a4fd405e83f1a4b650ea8e8c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4140d653-003b-4989-8de5-3e8bad7f5c96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81aeb855-c9bf-4f95-90d1-85f514f075e1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>], logical_port=ac73987a-aa98-40fc-a185-3eb1a23885de) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0f38eb3550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  2 06:36:38 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:36:38.852 163757 INFO neutron.agent.ovn.metadata.agent [-] Port ac73987a-aa98-40fc-a185-3eb1a23885de in datapath 28b69a92-5b45-421b-9985-afeebc6820aa unbound from our chassis#033[00m
Dec  2 06:36:38 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:36:38.853 163757 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 28b69a92-5b45-421b-9985-afeebc6820aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  2 06:36:38 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:36:38.855 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[a102b884-7212-4b71-8b82-ba91883d6a7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:36:38 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:36:38.856 163757 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa namespace which is not needed anymore#033[00m
Dec  2 06:36:38 np0005542249 nova_compute[254900]: 2025-12-02 11:36:38.863 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:36:38 np0005542249 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Dec  2 06:36:38 np0005542249 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001e.scope: Consumed 17.751s CPU time.
Dec  2 06:36:38 np0005542249 systemd-machined[216222]: Machine qemu-30-instance-0000001e terminated.
Dec  2 06:36:38 np0005542249 nova_compute[254900]: 2025-12-02 11:36:38.977 254904 INFO nova.virt.libvirt.driver [-] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Instance destroyed successfully.#033[00m
Dec  2 06:36:38 np0005542249 nova_compute[254900]: 2025-12-02 11:36:38.979 254904 DEBUG nova.objects.instance [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lazy-loading 'resources' on Instance uuid 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  2 06:36:38 np0005542249 nova_compute[254900]: 2025-12-02 11:36:38.993 254904 DEBUG nova.virt.libvirt.vif [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T11:35:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-828383850',display_name='tempest-TestEncryptedCinderVolumes-server-828383850',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-828383850',id=30,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK920rwaN7hvXue9KA1NjrUwvtvK954cG7ZuCXgqFd9X1K1nkCEcuVbCcgefDelXRvQjOoRaTNZtPceNbknmWJHmkwM014+LbqRvx6BhJSogI4x+qdIBG/Zp5TIVdDeUYQ==',key_name='tempest-TestEncryptedCinderVolumes-250014646',keypairs=<?>,launch_index=0,launched_at=2025-12-02T11:36:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='58574186a4fd405e83f1a4b650ea8e8c',ramdisk_id='',reservation_id='r-xjccmtck',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestEncryptedCinderVolumes-337876243',owner_user_name='tempest-TestEncryptedCinderVolumes-337876243-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T11:36:02Z,user_data=None,user_id='a003d9cef7684ec48ed996b22c11419e',uuid=3fa2207a-fc9e-44b7-9356-3c2d1ba98e87,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac73987a-aa98-40fc-a185-3eb1a23885de", "address": "fa:16:3e:52:3c:ac", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac73987a-aa", "ovs_interfaceid": "ac73987a-aa98-40fc-a185-3eb1a23885de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  2 06:36:38 np0005542249 nova_compute[254900]: 2025-12-02 11:36:38.994 254904 DEBUG nova.network.os_vif_util [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Converting VIF {"id": "ac73987a-aa98-40fc-a185-3eb1a23885de", "address": "fa:16:3e:52:3c:ac", "network": {"id": "28b69a92-5b45-421b-9985-afeebc6820aa", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1979389422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58574186a4fd405e83f1a4b650ea8e8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac73987a-aa", "ovs_interfaceid": "ac73987a-aa98-40fc-a185-3eb1a23885de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  2 06:36:38 np0005542249 nova_compute[254900]: 2025-12-02 11:36:38.995 254904 DEBUG nova.network.os_vif_util [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:52:3c:ac,bridge_name='br-int',has_traffic_filtering=True,id=ac73987a-aa98-40fc-a185-3eb1a23885de,network=Network(28b69a92-5b45-421b-9985-afeebc6820aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac73987a-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  2 06:36:38 np0005542249 nova_compute[254900]: 2025-12-02 11:36:38.996 254904 DEBUG os_vif [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:52:3c:ac,bridge_name='br-int',has_traffic_filtering=True,id=ac73987a-aa98-40fc-a185-3eb1a23885de,network=Network(28b69a92-5b45-421b-9985-afeebc6820aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac73987a-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  2 06:36:38 np0005542249 nova_compute[254900]: 2025-12-02 11:36:38.998 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:36:38 np0005542249 nova_compute[254900]: 2025-12-02 11:36:38.998 254904 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac73987a-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:36:39 np0005542249 nova_compute[254900]: 2025-12-02 11:36:39.000 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:36:39 np0005542249 nova_compute[254900]: 2025-12-02 11:36:39.002 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:36:39 np0005542249 nova_compute[254900]: 2025-12-02 11:36:39.006 254904 INFO os_vif [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:52:3c:ac,bridge_name='br-int',has_traffic_filtering=True,id=ac73987a-aa98-40fc-a185-3eb1a23885de,network=Network(28b69a92-5b45-421b-9985-afeebc6820aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac73987a-aa')#033[00m
Dec  2 06:36:39 np0005542249 neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa[298782]: [NOTICE]   (298788) : haproxy version is 2.8.14-c23fe91
Dec  2 06:36:39 np0005542249 neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa[298782]: [NOTICE]   (298788) : path to executable is /usr/sbin/haproxy
Dec  2 06:36:39 np0005542249 neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa[298782]: [WARNING]  (298788) : Exiting Master process...
Dec  2 06:36:39 np0005542249 neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa[298782]: [ALERT]    (298788) : Current worker (298790) exited with code 143 (Terminated)
Dec  2 06:36:39 np0005542249 neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa[298782]: [WARNING]  (298788) : All workers exited. Exiting... (0)
Dec  2 06:36:39 np0005542249 systemd[1]: libpod-83ce492a8a5ae5af28223973e0a23ce20cb34ab4b8346c01ad14b0b9d4823ece.scope: Deactivated successfully.
Dec  2 06:36:39 np0005542249 podman[298949]: 2025-12-02 11:36:39.039693363 +0000 UTC m=+0.066706941 container died 83ce492a8a5ae5af28223973e0a23ce20cb34ab4b8346c01ad14b0b9d4823ece (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:36:39 np0005542249 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-83ce492a8a5ae5af28223973e0a23ce20cb34ab4b8346c01ad14b0b9d4823ece-userdata-shm.mount: Deactivated successfully.
Dec  2 06:36:39 np0005542249 systemd[1]: var-lib-containers-storage-overlay-02900975817dbd27d95979ce9a687c1687233066e425e79bca6753adbc660e16-merged.mount: Deactivated successfully.
Dec  2 06:36:39 np0005542249 podman[298949]: 2025-12-02 11:36:39.088835298 +0000 UTC m=+0.115848866 container cleanup 83ce492a8a5ae5af28223973e0a23ce20cb34ab4b8346c01ad14b0b9d4823ece (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  2 06:36:39 np0005542249 systemd[1]: libpod-conmon-83ce492a8a5ae5af28223973e0a23ce20cb34ab4b8346c01ad14b0b9d4823ece.scope: Deactivated successfully.
Dec  2 06:36:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e505 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:36:39 np0005542249 podman[299004]: 2025-12-02 11:36:39.166351448 +0000 UTC m=+0.050700968 container remove 83ce492a8a5ae5af28223973e0a23ce20cb34ab4b8346c01ad14b0b9d4823ece (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 06:36:39 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:36:39.172 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[563b41e0-02bf-426a-8f46-6241d5a8bbda]: (4, ('Tue Dec  2 11:36:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa (83ce492a8a5ae5af28223973e0a23ce20cb34ab4b8346c01ad14b0b9d4823ece)\n83ce492a8a5ae5af28223973e0a23ce20cb34ab4b8346c01ad14b0b9d4823ece\nTue Dec  2 11:36:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa (83ce492a8a5ae5af28223973e0a23ce20cb34ab4b8346c01ad14b0b9d4823ece)\n83ce492a8a5ae5af28223973e0a23ce20cb34ab4b8346c01ad14b0b9d4823ece\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:36:39 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:36:39.175 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[bde76c34-726a-461a-95a2-b3e3aa401507]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:36:39 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:36:39.176 163757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28b69a92-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  2 06:36:39 np0005542249 nova_compute[254900]: 2025-12-02 11:36:39.178 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:36:39 np0005542249 kernel: tap28b69a92-50: left promiscuous mode
Dec  2 06:36:39 np0005542249 nova_compute[254900]: 2025-12-02 11:36:39.192 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:36:39 np0005542249 nova_compute[254900]: 2025-12-02 11:36:39.193 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:36:39 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:36:39.196 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[9d3bba40-6a18-4718-8800-1a050b35cbff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:36:39 np0005542249 nova_compute[254900]: 2025-12-02 11:36:39.208 254904 INFO nova.virt.libvirt.driver [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Deleting instance files /var/lib/nova/instances/3fa2207a-fc9e-44b7-9356-3c2d1ba98e87_del#033[00m
Dec  2 06:36:39 np0005542249 nova_compute[254900]: 2025-12-02 11:36:39.209 254904 INFO nova.virt.libvirt.driver [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Deletion of /var/lib/nova/instances/3fa2207a-fc9e-44b7-9356-3c2d1ba98e87_del complete#033[00m
Dec  2 06:36:39 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:36:39.212 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[742de33d-9a81-4d89-82c5-aba0f8cd57d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:36:39 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:36:39.213 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[c135dfd8-fa05-45c6-ae6c-49b2c726d32c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:36:39 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:36:39.229 262398 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b4b6d3-ef1f-467c-8a73-e3e271cc1724]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555227, 'reachable_time': 24433, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299020, 'error': None, 'target': 'ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:36:39 np0005542249 systemd[1]: run-netns-ovnmeta\x2d28b69a92\x2d5b45\x2d421b\x2d9985\x2dafeebc6820aa.mount: Deactivated successfully.
Dec  2 06:36:39 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:36:39.234 164036 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-28b69a92-5b45-421b-9985-afeebc6820aa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  2 06:36:39 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:36:39.234 164036 DEBUG oslo.privsep.daemon [-] privsep: reply[7970aee9-e5aa-41aa-bd77-81e8dd22c0ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  2 06:36:39 np0005542249 nova_compute[254900]: 2025-12-02 11:36:39.323 254904 INFO nova.compute.manager [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Took 0.59 seconds to destroy the instance on the hypervisor.#033[00m
Dec  2 06:36:39 np0005542249 nova_compute[254900]: 2025-12-02 11:36:39.324 254904 DEBUG oslo.service.loopingcall [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  2 06:36:39 np0005542249 nova_compute[254900]: 2025-12-02 11:36:39.324 254904 DEBUG nova.compute.manager [-] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  2 06:36:39 np0005542249 nova_compute[254900]: 2025-12-02 11:36:39.325 254904 DEBUG nova.network.neutron [-] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  2 06:36:39 np0005542249 nova_compute[254900]: 2025-12-02 11:36:39.333 254904 DEBUG nova.compute.manager [req-a50f84fa-c302-4e7e-be33-22983184c7e7 req-56131a44-feb0-4f12-96ec-663977ca260a 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Received event network-vif-unplugged-ac73987a-aa98-40fc-a185-3eb1a23885de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:36:39 np0005542249 nova_compute[254900]: 2025-12-02 11:36:39.333 254904 DEBUG oslo_concurrency.lockutils [req-a50f84fa-c302-4e7e-be33-22983184c7e7 req-56131a44-feb0-4f12-96ec-663977ca260a 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "3fa2207a-fc9e-44b7-9356-3c2d1ba98e87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:36:39 np0005542249 nova_compute[254900]: 2025-12-02 11:36:39.334 254904 DEBUG oslo_concurrency.lockutils [req-a50f84fa-c302-4e7e-be33-22983184c7e7 req-56131a44-feb0-4f12-96ec-663977ca260a 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "3fa2207a-fc9e-44b7-9356-3c2d1ba98e87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:36:39 np0005542249 nova_compute[254900]: 2025-12-02 11:36:39.334 254904 DEBUG oslo_concurrency.lockutils [req-a50f84fa-c302-4e7e-be33-22983184c7e7 req-56131a44-feb0-4f12-96ec-663977ca260a 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "3fa2207a-fc9e-44b7-9356-3c2d1ba98e87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:36:39 np0005542249 nova_compute[254900]: 2025-12-02 11:36:39.334 254904 DEBUG nova.compute.manager [req-a50f84fa-c302-4e7e-be33-22983184c7e7 req-56131a44-feb0-4f12-96ec-663977ca260a 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] No waiting events found dispatching network-vif-unplugged-ac73987a-aa98-40fc-a185-3eb1a23885de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:36:39 np0005542249 nova_compute[254900]: 2025-12-02 11:36:39.335 254904 DEBUG nova.compute.manager [req-a50f84fa-c302-4e7e-be33-22983184c7e7 req-56131a44-feb0-4f12-96ec-663977ca260a 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Received event network-vif-unplugged-ac73987a-aa98-40fc-a185-3eb1a23885de for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  2 06:36:40 np0005542249 nova_compute[254900]: 2025-12-02 11:36:40.043 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:36:40 np0005542249 nova_compute[254900]: 2025-12-02 11:36:40.379 254904 DEBUG nova.network.neutron [-] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  2 06:36:40 np0005542249 nova_compute[254900]: 2025-12-02 11:36:40.400 254904 INFO nova.compute.manager [-] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Took 1.08 seconds to deallocate network for instance.#033[00m
Dec  2 06:36:40 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1912: 321 pgs: 321 active+clean; 295 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 440 KiB/s wr, 5 op/s
Dec  2 06:36:40 np0005542249 nova_compute[254900]: 2025-12-02 11:36:40.777 254904 INFO nova.compute.manager [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Took 0.38 seconds to detach 1 volumes for instance.#033[00m
Dec  2 06:36:40 np0005542249 nova_compute[254900]: 2025-12-02 11:36:40.841 254904 DEBUG oslo_concurrency.lockutils [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:36:40 np0005542249 nova_compute[254900]: 2025-12-02 11:36:40.842 254904 DEBUG oslo_concurrency.lockutils [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:36:40 np0005542249 nova_compute[254900]: 2025-12-02 11:36:40.946 254904 DEBUG oslo_concurrency.processutils [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:36:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:36:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3417478714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:36:41 np0005542249 nova_compute[254900]: 2025-12-02 11:36:41.384 254904 DEBUG oslo_concurrency.processutils [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:36:41 np0005542249 nova_compute[254900]: 2025-12-02 11:36:41.392 254904 DEBUG nova.compute.provider_tree [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:36:41 np0005542249 nova_compute[254900]: 2025-12-02 11:36:41.423 254904 DEBUG nova.scheduler.client.report [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:36:41 np0005542249 nova_compute[254900]: 2025-12-02 11:36:41.443 254904 DEBUG oslo_concurrency.lockutils [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:36:41 np0005542249 nova_compute[254900]: 2025-12-02 11:36:41.453 254904 DEBUG nova.compute.manager [req-e54e4a45-615d-4257-aba9-36af2a898838 req-9a788ef6-cb99-4c13-b1ff-e21084ccca74 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Received event network-vif-plugged-ac73987a-aa98-40fc-a185-3eb1a23885de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:36:41 np0005542249 nova_compute[254900]: 2025-12-02 11:36:41.454 254904 DEBUG oslo_concurrency.lockutils [req-e54e4a45-615d-4257-aba9-36af2a898838 req-9a788ef6-cb99-4c13-b1ff-e21084ccca74 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Acquiring lock "3fa2207a-fc9e-44b7-9356-3c2d1ba98e87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:36:41 np0005542249 nova_compute[254900]: 2025-12-02 11:36:41.454 254904 DEBUG oslo_concurrency.lockutils [req-e54e4a45-615d-4257-aba9-36af2a898838 req-9a788ef6-cb99-4c13-b1ff-e21084ccca74 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "3fa2207a-fc9e-44b7-9356-3c2d1ba98e87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:36:41 np0005542249 nova_compute[254900]: 2025-12-02 11:36:41.454 254904 DEBUG oslo_concurrency.lockutils [req-e54e4a45-615d-4257-aba9-36af2a898838 req-9a788ef6-cb99-4c13-b1ff-e21084ccca74 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] Lock "3fa2207a-fc9e-44b7-9356-3c2d1ba98e87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:36:41 np0005542249 nova_compute[254900]: 2025-12-02 11:36:41.455 254904 DEBUG nova.compute.manager [req-e54e4a45-615d-4257-aba9-36af2a898838 req-9a788ef6-cb99-4c13-b1ff-e21084ccca74 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] No waiting events found dispatching network-vif-plugged-ac73987a-aa98-40fc-a185-3eb1a23885de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  2 06:36:41 np0005542249 nova_compute[254900]: 2025-12-02 11:36:41.455 254904 WARNING nova.compute.manager [req-e54e4a45-615d-4257-aba9-36af2a898838 req-9a788ef6-cb99-4c13-b1ff-e21084ccca74 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Received unexpected event network-vif-plugged-ac73987a-aa98-40fc-a185-3eb1a23885de for instance with vm_state deleted and task_state None.#033[00m
Dec  2 06:36:41 np0005542249 nova_compute[254900]: 2025-12-02 11:36:41.455 254904 DEBUG nova.compute.manager [req-e54e4a45-615d-4257-aba9-36af2a898838 req-9a788ef6-cb99-4c13-b1ff-e21084ccca74 26c09d728e4b4d3ca14bf365644930bb 3beeb17b40ae433686fc10a9180c8d7e - - default default] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Received event network-vif-deleted-ac73987a-aa98-40fc-a185-3eb1a23885de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  2 06:36:41 np0005542249 nova_compute[254900]: 2025-12-02 11:36:41.494 254904 INFO nova.scheduler.client.report [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Deleted allocations for instance 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87#033[00m
Dec  2 06:36:41 np0005542249 nova_compute[254900]: 2025-12-02 11:36:41.571 254904 DEBUG oslo_concurrency.lockutils [None req-98e389f5-8838-4f90-a2d4-92661ce5ba89 a003d9cef7684ec48ed996b22c11419e 58574186a4fd405e83f1a4b650ea8e8c - - default default] Lock "3fa2207a-fc9e-44b7-9356-3c2d1ba98e87" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:36:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:36:42 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1602322816' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:36:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:36:42 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1602322816' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:36:42 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1913: 321 pgs: 321 active+clean; 295 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 483 KiB/s rd, 438 KiB/s wr, 10 op/s
Dec  2 06:36:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:36:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2204448665' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:36:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:36:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2204448665' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:36:44 np0005542249 nova_compute[254900]: 2025-12-02 11:36:44.002 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:36:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e505 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:36:44 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1914: 321 pgs: 321 active+clean; 287 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 348 KiB/s wr, 43 op/s
Dec  2 06:36:45 np0005542249 nova_compute[254900]: 2025-12-02 11:36:45.045 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:36:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:36:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1981936884' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:36:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:36:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1981936884' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:36:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:36:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1006224911' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:36:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:36:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1006224911' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:36:46 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1915: 321 pgs: 321 active+clean; 287 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 237 KiB/s rd, 1.2 KiB/s wr, 40 op/s
Dec  2 06:36:47 np0005542249 nova_compute[254900]: 2025-12-02 11:36:47.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:36:47 np0005542249 nova_compute[254900]: 2025-12-02 11:36:47.383 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 06:36:48 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1916: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 2.6 KiB/s wr, 73 op/s
Dec  2 06:36:49 np0005542249 nova_compute[254900]: 2025-12-02 11:36:49.005 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:36:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e505 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:36:50 np0005542249 nova_compute[254900]: 2025-12-02 11:36:50.047 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:36:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:36:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1540148918' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:36:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:36:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1540148918' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:36:50 np0005542249 nova_compute[254900]: 2025-12-02 11:36:50.377 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:36:50 np0005542249 nova_compute[254900]: 2025-12-02 11:36:50.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:36:50 np0005542249 nova_compute[254900]: 2025-12-02 11:36:50.382 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 06:36:50 np0005542249 nova_compute[254900]: 2025-12-02 11:36:50.382 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 06:36:50 np0005542249 nova_compute[254900]: 2025-12-02 11:36:50.417 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 06:36:50 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1917: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 2.7 KiB/s wr, 74 op/s
Dec  2 06:36:50 np0005542249 nova_compute[254900]: 2025-12-02 11:36:50.533 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:36:52 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1918: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 2.7 KiB/s wr, 73 op/s
Dec  2 06:36:53 np0005542249 nova_compute[254900]: 2025-12-02 11:36:53.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:36:53 np0005542249 nova_compute[254900]: 2025-12-02 11:36:53.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:36:53 np0005542249 nova_compute[254900]: 2025-12-02 11:36:53.976 254904 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764675398.9745634, 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  2 06:36:53 np0005542249 nova_compute[254900]: 2025-12-02 11:36:53.976 254904 INFO nova.compute.manager [-] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] VM Stopped (Lifecycle Event)#033[00m
Dec  2 06:36:54 np0005542249 nova_compute[254900]: 2025-12-02 11:36:54.008 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:36:54 np0005542249 nova_compute[254900]: 2025-12-02 11:36:54.015 254904 DEBUG nova.compute.manager [None req-ba826eb9-5da5-492a-8d09-440eb17a76df - - - - - -] [instance: 3fa2207a-fc9e-44b7-9356-3c2d1ba98e87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  2 06:36:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e505 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:36:54 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1919: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 207 KiB/s rd, 1.9 KiB/s wr, 67 op/s
Dec  2 06:36:55 np0005542249 nova_compute[254900]: 2025-12-02 11:36:55.049 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:36:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:36:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:36:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:36:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:36:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:36:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:36:56 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1920: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.5 KiB/s wr, 33 op/s
Dec  2 06:36:57 np0005542249 podman[299047]: 2025-12-02 11:36:57.008985574 +0000 UTC m=+0.080069281 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  2 06:36:57 np0005542249 nova_compute[254900]: 2025-12-02 11:36:57.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:36:57 np0005542249 nova_compute[254900]: 2025-12-02 11:36:57.418 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:36:57 np0005542249 nova_compute[254900]: 2025-12-02 11:36:57.419 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:36:57 np0005542249 nova_compute[254900]: 2025-12-02 11:36:57.419 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:36:57 np0005542249 nova_compute[254900]: 2025-12-02 11:36:57.420 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:36:57 np0005542249 nova_compute[254900]: 2025-12-02 11:36:57.420 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:36:57 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:36:57 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2048268260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:36:57 np0005542249 nova_compute[254900]: 2025-12-02 11:36:57.901 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:36:58 np0005542249 nova_compute[254900]: 2025-12-02 11:36:58.100 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:36:58 np0005542249 nova_compute[254900]: 2025-12-02 11:36:58.102 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4313MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:36:58 np0005542249 nova_compute[254900]: 2025-12-02 11:36:58.102 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:36:58 np0005542249 nova_compute[254900]: 2025-12-02 11:36:58.102 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:36:58 np0005542249 nova_compute[254900]: 2025-12-02 11:36:58.226 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:36:58 np0005542249 nova_compute[254900]: 2025-12-02 11:36:58.226 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:36:58 np0005542249 nova_compute[254900]: 2025-12-02 11:36:58.241 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Refreshing inventories for resource provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  2 06:36:58 np0005542249 nova_compute[254900]: 2025-12-02 11:36:58.256 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Updating ProviderTree inventory for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  2 06:36:58 np0005542249 nova_compute[254900]: 2025-12-02 11:36:58.256 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Updating inventory in ProviderTree for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  2 06:36:58 np0005542249 nova_compute[254900]: 2025-12-02 11:36:58.275 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Refreshing aggregate associations for resource provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  2 06:36:58 np0005542249 nova_compute[254900]: 2025-12-02 11:36:58.292 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Refreshing trait associations for resource provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062, traits: HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SVM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_MMX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,HW_CPU_X86_CLMUL,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE41,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  2 06:36:58 np0005542249 nova_compute[254900]: 2025-12-02 11:36:58.305 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:36:58 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1921: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.5 KiB/s wr, 33 op/s
Dec  2 06:36:58 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:36:58 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3249932750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:36:58 np0005542249 nova_compute[254900]: 2025-12-02 11:36:58.811 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:36:58 np0005542249 nova_compute[254900]: 2025-12-02 11:36:58.819 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:36:58 np0005542249 nova_compute[254900]: 2025-12-02 11:36:58.834 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:36:58 np0005542249 nova_compute[254900]: 2025-12-02 11:36:58.863 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 06:36:58 np0005542249 nova_compute[254900]: 2025-12-02 11:36:58.863 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:36:59 np0005542249 nova_compute[254900]: 2025-12-02 11:36:59.011 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:36:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e505 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:37:00 np0005542249 nova_compute[254900]: 2025-12-02 11:37:00.070 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:37:00 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1922: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  2 06:37:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:37:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:37:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:37:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:37:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:37:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:37:01 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 1991a607-f683-4235-bfc9-d4ef529707fc does not exist
Dec  2 06:37:01 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 1a9c8deb-8648-40ba-8ce1-1918eb015f38 does not exist
Dec  2 06:37:01 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 42a17d6e-049f-40b7-a9f3-e3c8f63a6d53 does not exist
Dec  2 06:37:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:37:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:37:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:37:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:37:01 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:37:01 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:37:01 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:37:01 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:37:01 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:37:01 np0005542249 podman[299386]: 2025-12-02 11:37:01.866128007 +0000 UTC m=+0.056425713 container create 0700c309e90de0bec3c4c7a38e62966374bf35c6f610f5676a3bbe9c1d071625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  2 06:37:01 np0005542249 systemd[1]: Started libpod-conmon-0700c309e90de0bec3c4c7a38e62966374bf35c6f610f5676a3bbe9c1d071625.scope.
Dec  2 06:37:01 np0005542249 podman[299386]: 2025-12-02 11:37:01.836376235 +0000 UTC m=+0.026673991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:37:01 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:37:01 np0005542249 podman[299386]: 2025-12-02 11:37:01.971669723 +0000 UTC m=+0.161967469 container init 0700c309e90de0bec3c4c7a38e62966374bf35c6f610f5676a3bbe9c1d071625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  2 06:37:01 np0005542249 podman[299386]: 2025-12-02 11:37:01.984931211 +0000 UTC m=+0.175228877 container start 0700c309e90de0bec3c4c7a38e62966374bf35c6f610f5676a3bbe9c1d071625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_northcutt, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:37:01 np0005542249 podman[299386]: 2025-12-02 11:37:01.989175026 +0000 UTC m=+0.179472702 container attach 0700c309e90de0bec3c4c7a38e62966374bf35c6f610f5676a3bbe9c1d071625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  2 06:37:01 np0005542249 magical_northcutt[299402]: 167 167
Dec  2 06:37:01 np0005542249 systemd[1]: libpod-0700c309e90de0bec3c4c7a38e62966374bf35c6f610f5676a3bbe9c1d071625.scope: Deactivated successfully.
Dec  2 06:37:01 np0005542249 conmon[299402]: conmon 0700c309e90de0bec3c4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0700c309e90de0bec3c4c7a38e62966374bf35c6f610f5676a3bbe9c1d071625.scope/container/memory.events
Dec  2 06:37:01 np0005542249 podman[299386]: 2025-12-02 11:37:01.995090825 +0000 UTC m=+0.185388531 container died 0700c309e90de0bec3c4c7a38e62966374bf35c6f610f5676a3bbe9c1d071625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  2 06:37:02 np0005542249 systemd[1]: var-lib-containers-storage-overlay-a9f3caf94813311ab46d32a36d818426caa64ac8f04c5a27586a99a01b1c7a4e-merged.mount: Deactivated successfully.
Dec  2 06:37:02 np0005542249 podman[299386]: 2025-12-02 11:37:02.0431236 +0000 UTC m=+0.233421276 container remove 0700c309e90de0bec3c4c7a38e62966374bf35c6f610f5676a3bbe9c1d071625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_northcutt, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:37:02 np0005542249 systemd[1]: libpod-conmon-0700c309e90de0bec3c4c7a38e62966374bf35c6f610f5676a3bbe9c1d071625.scope: Deactivated successfully.
Dec  2 06:37:02 np0005542249 podman[299426]: 2025-12-02 11:37:02.207293848 +0000 UTC m=+0.046938357 container create 9443a4d1361532593b88892c0a8f34f516def3a51d8db2465d2dbc12b9b448e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  2 06:37:02 np0005542249 systemd[1]: Started libpod-conmon-9443a4d1361532593b88892c0a8f34f516def3a51d8db2465d2dbc12b9b448e3.scope.
Dec  2 06:37:02 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:37:02 np0005542249 podman[299426]: 2025-12-02 11:37:02.184558465 +0000 UTC m=+0.024203024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:37:02 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd51a582a591e5c65b839b9d972975d81a3ccd8596b8b8a0642ff9ba3b668692/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:37:02 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd51a582a591e5c65b839b9d972975d81a3ccd8596b8b8a0642ff9ba3b668692/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:37:02 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd51a582a591e5c65b839b9d972975d81a3ccd8596b8b8a0642ff9ba3b668692/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:37:02 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd51a582a591e5c65b839b9d972975d81a3ccd8596b8b8a0642ff9ba3b668692/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:37:02 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd51a582a591e5c65b839b9d972975d81a3ccd8596b8b8a0642ff9ba3b668692/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:37:02 np0005542249 podman[299426]: 2025-12-02 11:37:02.30299336 +0000 UTC m=+0.142637879 container init 9443a4d1361532593b88892c0a8f34f516def3a51d8db2465d2dbc12b9b448e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_solomon, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Dec  2 06:37:02 np0005542249 podman[299426]: 2025-12-02 11:37:02.3196734 +0000 UTC m=+0.159317919 container start 9443a4d1361532593b88892c0a8f34f516def3a51d8db2465d2dbc12b9b448e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  2 06:37:02 np0005542249 podman[299426]: 2025-12-02 11:37:02.323452581 +0000 UTC m=+0.163097090 container attach 9443a4d1361532593b88892c0a8f34f516def3a51d8db2465d2dbc12b9b448e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_solomon, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:37:02 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1923: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:37:02 np0005542249 nova_compute[254900]: 2025-12-02 11:37:02.859 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:37:02 np0005542249 nova_compute[254900]: 2025-12-02 11:37:02.861 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:37:02 np0005542249 nova_compute[254900]: 2025-12-02 11:37:02.861 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:37:02 np0005542249 nova_compute[254900]: 2025-12-02 11:37:02.862 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:37:03 np0005542249 wonderful_solomon[299443]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:37:03 np0005542249 wonderful_solomon[299443]: --> relative data size: 1.0
Dec  2 06:37:03 np0005542249 wonderful_solomon[299443]: --> All data devices are unavailable
Dec  2 06:37:03 np0005542249 systemd[1]: libpod-9443a4d1361532593b88892c0a8f34f516def3a51d8db2465d2dbc12b9b448e3.scope: Deactivated successfully.
Dec  2 06:37:03 np0005542249 podman[299426]: 2025-12-02 11:37:03.533247282 +0000 UTC m=+1.372891791 container died 9443a4d1361532593b88892c0a8f34f516def3a51d8db2465d2dbc12b9b448e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  2 06:37:03 np0005542249 systemd[1]: libpod-9443a4d1361532593b88892c0a8f34f516def3a51d8db2465d2dbc12b9b448e3.scope: Consumed 1.174s CPU time.
Dec  2 06:37:03 np0005542249 systemd[1]: var-lib-containers-storage-overlay-cd51a582a591e5c65b839b9d972975d81a3ccd8596b8b8a0642ff9ba3b668692-merged.mount: Deactivated successfully.
Dec  2 06:37:03 np0005542249 podman[299426]: 2025-12-02 11:37:03.609551109 +0000 UTC m=+1.449195618 container remove 9443a4d1361532593b88892c0a8f34f516def3a51d8db2465d2dbc12b9b448e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_solomon, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:37:03 np0005542249 systemd[1]: libpod-conmon-9443a4d1361532593b88892c0a8f34f516def3a51d8db2465d2dbc12b9b448e3.scope: Deactivated successfully.
Dec  2 06:37:04 np0005542249 nova_compute[254900]: 2025-12-02 11:37:04.016 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:37:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e505 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:37:04 np0005542249 podman[299626]: 2025-12-02 11:37:04.337674717 +0000 UTC m=+0.029009413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:37:04 np0005542249 podman[299626]: 2025-12-02 11:37:04.454801187 +0000 UTC m=+0.146135913 container create a005787caec0ce0b9efc6ba54acc98bd395bc1ad0a6624a7109a63475bf4f9e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_roentgen, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  2 06:37:04 np0005542249 systemd[1]: Started libpod-conmon-a005787caec0ce0b9efc6ba54acc98bd395bc1ad0a6624a7109a63475bf4f9e2.scope.
Dec  2 06:37:04 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1924: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:37:04 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:37:04 np0005542249 podman[299626]: 2025-12-02 11:37:04.621322228 +0000 UTC m=+0.312657004 container init a005787caec0ce0b9efc6ba54acc98bd395bc1ad0a6624a7109a63475bf4f9e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_roentgen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  2 06:37:04 np0005542249 podman[299626]: 2025-12-02 11:37:04.629646632 +0000 UTC m=+0.320981328 container start a005787caec0ce0b9efc6ba54acc98bd395bc1ad0a6624a7109a63475bf4f9e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:37:04 np0005542249 agitated_roentgen[299642]: 167 167
Dec  2 06:37:04 np0005542249 systemd[1]: libpod-a005787caec0ce0b9efc6ba54acc98bd395bc1ad0a6624a7109a63475bf4f9e2.scope: Deactivated successfully.
Dec  2 06:37:04 np0005542249 podman[299626]: 2025-12-02 11:37:04.638379718 +0000 UTC m=+0.329714464 container attach a005787caec0ce0b9efc6ba54acc98bd395bc1ad0a6624a7109a63475bf4f9e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  2 06:37:04 np0005542249 podman[299626]: 2025-12-02 11:37:04.640193816 +0000 UTC m=+0.331528532 container died a005787caec0ce0b9efc6ba54acc98bd395bc1ad0a6624a7109a63475bf4f9e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  2 06:37:04 np0005542249 systemd[1]: var-lib-containers-storage-overlay-e1b169e2a3a138f9252f964dd502721402c2511b92420566e6f8746f4a7ca538-merged.mount: Deactivated successfully.
Dec  2 06:37:04 np0005542249 podman[299626]: 2025-12-02 11:37:04.88649821 +0000 UTC m=+0.577832946 container remove a005787caec0ce0b9efc6ba54acc98bd395bc1ad0a6624a7109a63475bf4f9e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_roentgen, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:37:04 np0005542249 systemd[1]: libpod-conmon-a005787caec0ce0b9efc6ba54acc98bd395bc1ad0a6624a7109a63475bf4f9e2.scope: Deactivated successfully.
Dec  2 06:37:05 np0005542249 nova_compute[254900]: 2025-12-02 11:37:05.072 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:37:05 np0005542249 podman[299667]: 2025-12-02 11:37:05.164868878 +0000 UTC m=+0.088773586 container create db80ea9281addf70dfd3e9c8b7150f78bd20069e865a3fae900118d3d144ce69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_williams, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  2 06:37:05 np0005542249 podman[299667]: 2025-12-02 11:37:05.110507612 +0000 UTC m=+0.034412320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:37:05 np0005542249 systemd[1]: Started libpod-conmon-db80ea9281addf70dfd3e9c8b7150f78bd20069e865a3fae900118d3d144ce69.scope.
Dec  2 06:37:05 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:37:05 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c47e7eb13198b60bd7bb370bbd9daf929ab01bd79c42cbc48de55e890066d9f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:37:05 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c47e7eb13198b60bd7bb370bbd9daf929ab01bd79c42cbc48de55e890066d9f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:37:05 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c47e7eb13198b60bd7bb370bbd9daf929ab01bd79c42cbc48de55e890066d9f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:37:05 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c47e7eb13198b60bd7bb370bbd9daf929ab01bd79c42cbc48de55e890066d9f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:37:05 np0005542249 podman[299667]: 2025-12-02 11:37:05.267830014 +0000 UTC m=+0.191734732 container init db80ea9281addf70dfd3e9c8b7150f78bd20069e865a3fae900118d3d144ce69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Dec  2 06:37:05 np0005542249 podman[299667]: 2025-12-02 11:37:05.273865208 +0000 UTC m=+0.197769906 container start db80ea9281addf70dfd3e9c8b7150f78bd20069e865a3fae900118d3d144ce69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_williams, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Dec  2 06:37:05 np0005542249 podman[299667]: 2025-12-02 11:37:05.336157567 +0000 UTC m=+0.260062285 container attach db80ea9281addf70dfd3e9c8b7150f78bd20069e865a3fae900118d3d144ce69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_williams, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  2 06:37:06 np0005542249 laughing_williams[299683]: {
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:    "0": [
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:        {
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "devices": [
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "/dev/loop3"
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            ],
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "lv_name": "ceph_lv0",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "lv_size": "21470642176",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "name": "ceph_lv0",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "tags": {
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.cluster_name": "ceph",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.crush_device_class": "",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.encrypted": "0",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.osd_id": "0",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.type": "block",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.vdo": "0"
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            },
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "type": "block",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "vg_name": "ceph_vg0"
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:        }
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:    ],
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:    "1": [
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:        {
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "devices": [
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "/dev/loop4"
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            ],
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "lv_name": "ceph_lv1",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "lv_size": "21470642176",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "name": "ceph_lv1",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "tags": {
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.cluster_name": "ceph",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.crush_device_class": "",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.encrypted": "0",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.osd_id": "1",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.type": "block",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.vdo": "0"
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            },
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "type": "block",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "vg_name": "ceph_vg1"
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:        }
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:    ],
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:    "2": [
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:        {
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "devices": [
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "/dev/loop5"
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            ],
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "lv_name": "ceph_lv2",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "lv_size": "21470642176",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "name": "ceph_lv2",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "tags": {
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.cluster_name": "ceph",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.crush_device_class": "",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.encrypted": "0",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.osd_id": "2",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.type": "block",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:                "ceph.vdo": "0"
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            },
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "type": "block",
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:            "vg_name": "ceph_vg2"
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:        }
Dec  2 06:37:06 np0005542249 laughing_williams[299683]:    ]
Dec  2 06:37:06 np0005542249 laughing_williams[299683]: }
Dec  2 06:37:06 np0005542249 systemd[1]: libpod-db80ea9281addf70dfd3e9c8b7150f78bd20069e865a3fae900118d3d144ce69.scope: Deactivated successfully.
Dec  2 06:37:06 np0005542249 podman[299667]: 2025-12-02 11:37:06.111648283 +0000 UTC m=+1.035553001 container died db80ea9281addf70dfd3e9c8b7150f78bd20069e865a3fae900118d3d144ce69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_williams, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  2 06:37:06 np0005542249 systemd[1]: var-lib-containers-storage-overlay-c47e7eb13198b60bd7bb370bbd9daf929ab01bd79c42cbc48de55e890066d9f5-merged.mount: Deactivated successfully.
Dec  2 06:37:06 np0005542249 podman[299667]: 2025-12-02 11:37:06.252638636 +0000 UTC m=+1.176543354 container remove db80ea9281addf70dfd3e9c8b7150f78bd20069e865a3fae900118d3d144ce69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_williams, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:37:06 np0005542249 systemd[1]: libpod-conmon-db80ea9281addf70dfd3e9c8b7150f78bd20069e865a3fae900118d3d144ce69.scope: Deactivated successfully.
Dec  2 06:37:06 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1925: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:37:07 np0005542249 podman[299846]: 2025-12-02 11:37:07.105384386 +0000 UTC m=+0.043390752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:37:07 np0005542249 podman[299846]: 2025-12-02 11:37:07.274536839 +0000 UTC m=+0.212543145 container create 78c435722f6df9ea4f220b41c97fabc7a03de3ac40bd520c2bf07175c585f324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hodgkin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  2 06:37:07 np0005542249 systemd[1]: Started libpod-conmon-78c435722f6df9ea4f220b41c97fabc7a03de3ac40bd520c2bf07175c585f324.scope.
Dec  2 06:37:07 np0005542249 podman[299860]: 2025-12-02 11:37:07.422624442 +0000 UTC m=+0.091385616 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:37:07 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:37:07 np0005542249 podman[299846]: 2025-12-02 11:37:07.551903749 +0000 UTC m=+0.489910055 container init 78c435722f6df9ea4f220b41c97fabc7a03de3ac40bd520c2bf07175c585f324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  2 06:37:07 np0005542249 podman[299846]: 2025-12-02 11:37:07.566506503 +0000 UTC m=+0.504512839 container start 78c435722f6df9ea4f220b41c97fabc7a03de3ac40bd520c2bf07175c585f324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  2 06:37:07 np0005542249 podman[299846]: 2025-12-02 11:37:07.571750854 +0000 UTC m=+0.509757140 container attach 78c435722f6df9ea4f220b41c97fabc7a03de3ac40bd520c2bf07175c585f324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hodgkin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  2 06:37:07 np0005542249 competent_hodgkin[299892]: 167 167
Dec  2 06:37:07 np0005542249 systemd[1]: libpod-78c435722f6df9ea4f220b41c97fabc7a03de3ac40bd520c2bf07175c585f324.scope: Deactivated successfully.
Dec  2 06:37:07 np0005542249 podman[299846]: 2025-12-02 11:37:07.573215414 +0000 UTC m=+0.511221720 container died 78c435722f6df9ea4f220b41c97fabc7a03de3ac40bd520c2bf07175c585f324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:37:07 np0005542249 podman[299861]: 2025-12-02 11:37:07.58420696 +0000 UTC m=+0.253882799 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  2 06:37:07 np0005542249 systemd[1]: var-lib-containers-storage-overlay-c7f875eb34d78def4739a073b7cf2d480da411c1012d32a7e3636e585b07a521-merged.mount: Deactivated successfully.
Dec  2 06:37:07 np0005542249 podman[299846]: 2025-12-02 11:37:07.664224739 +0000 UTC m=+0.602231035 container remove 78c435722f6df9ea4f220b41c97fabc7a03de3ac40bd520c2bf07175c585f324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  2 06:37:07 np0005542249 systemd[1]: libpod-conmon-78c435722f6df9ea4f220b41c97fabc7a03de3ac40bd520c2bf07175c585f324.scope: Deactivated successfully.
Dec  2 06:37:07 np0005542249 podman[299929]: 2025-12-02 11:37:07.94383707 +0000 UTC m=+0.087064330 container create 03652560a6259a2379a4821f8e29005347b6ba60a37c40f1c01dc13ecc1c149f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shaw, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:37:07 np0005542249 podman[299929]: 2025-12-02 11:37:07.881077467 +0000 UTC m=+0.024304757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:37:07 np0005542249 systemd[1]: Started libpod-conmon-03652560a6259a2379a4821f8e29005347b6ba60a37c40f1c01dc13ecc1c149f.scope.
Dec  2 06:37:08 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:37:08 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/540b09d28ac04783ab12a5f5bbfec64a52490ca14109805caee31e01868f5332/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:37:08 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/540b09d28ac04783ab12a5f5bbfec64a52490ca14109805caee31e01868f5332/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:37:08 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/540b09d28ac04783ab12a5f5bbfec64a52490ca14109805caee31e01868f5332/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:37:08 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/540b09d28ac04783ab12a5f5bbfec64a52490ca14109805caee31e01868f5332/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:37:08 np0005542249 podman[299929]: 2025-12-02 11:37:08.060922177 +0000 UTC m=+0.204149517 container init 03652560a6259a2379a4821f8e29005347b6ba60a37c40f1c01dc13ecc1c149f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shaw, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:37:08 np0005542249 podman[299929]: 2025-12-02 11:37:08.070551657 +0000 UTC m=+0.213778917 container start 03652560a6259a2379a4821f8e29005347b6ba60a37c40f1c01dc13ecc1c149f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  2 06:37:08 np0005542249 podman[299929]: 2025-12-02 11:37:08.084120854 +0000 UTC m=+0.227348154 container attach 03652560a6259a2379a4821f8e29005347b6ba60a37c40f1c01dc13ecc1c149f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shaw, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Dec  2 06:37:08 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1926: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:37:09 np0005542249 nova_compute[254900]: 2025-12-02 11:37:09.020 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:37:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e505 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:37:09 np0005542249 ecstatic_shaw[299946]: {
Dec  2 06:37:09 np0005542249 ecstatic_shaw[299946]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:37:09 np0005542249 ecstatic_shaw[299946]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:37:09 np0005542249 ecstatic_shaw[299946]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:37:09 np0005542249 ecstatic_shaw[299946]:        "osd_id": 0,
Dec  2 06:37:09 np0005542249 ecstatic_shaw[299946]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:37:09 np0005542249 ecstatic_shaw[299946]:        "type": "bluestore"
Dec  2 06:37:09 np0005542249 ecstatic_shaw[299946]:    },
Dec  2 06:37:09 np0005542249 ecstatic_shaw[299946]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:37:09 np0005542249 ecstatic_shaw[299946]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:37:09 np0005542249 ecstatic_shaw[299946]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:37:09 np0005542249 ecstatic_shaw[299946]:        "osd_id": 2,
Dec  2 06:37:09 np0005542249 ecstatic_shaw[299946]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:37:09 np0005542249 ecstatic_shaw[299946]:        "type": "bluestore"
Dec  2 06:37:09 np0005542249 ecstatic_shaw[299946]:    },
Dec  2 06:37:09 np0005542249 ecstatic_shaw[299946]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:37:09 np0005542249 ecstatic_shaw[299946]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:37:09 np0005542249 ecstatic_shaw[299946]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:37:09 np0005542249 ecstatic_shaw[299946]:        "osd_id": 1,
Dec  2 06:37:09 np0005542249 ecstatic_shaw[299946]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:37:09 np0005542249 ecstatic_shaw[299946]:        "type": "bluestore"
Dec  2 06:37:09 np0005542249 ecstatic_shaw[299946]:    }
Dec  2 06:37:09 np0005542249 ecstatic_shaw[299946]: }
Dec  2 06:37:09 np0005542249 systemd[1]: libpod-03652560a6259a2379a4821f8e29005347b6ba60a37c40f1c01dc13ecc1c149f.scope: Deactivated successfully.
Dec  2 06:37:09 np0005542249 podman[299929]: 2025-12-02 11:37:09.221597752 +0000 UTC m=+1.364825022 container died 03652560a6259a2379a4821f8e29005347b6ba60a37c40f1c01dc13ecc1c149f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shaw, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  2 06:37:09 np0005542249 systemd[1]: libpod-03652560a6259a2379a4821f8e29005347b6ba60a37c40f1c01dc13ecc1c149f.scope: Consumed 1.158s CPU time.
Dec  2 06:37:09 np0005542249 systemd[1]: var-lib-containers-storage-overlay-540b09d28ac04783ab12a5f5bbfec64a52490ca14109805caee31e01868f5332-merged.mount: Deactivated successfully.
Dec  2 06:37:09 np0005542249 podman[299929]: 2025-12-02 11:37:09.424065353 +0000 UTC m=+1.567292643 container remove 03652560a6259a2379a4821f8e29005347b6ba60a37c40f1c01dc13ecc1c149f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shaw, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:37:09 np0005542249 systemd[1]: libpod-conmon-03652560a6259a2379a4821f8e29005347b6ba60a37c40f1c01dc13ecc1c149f.scope: Deactivated successfully.
Dec  2 06:37:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:37:09 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:37:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:37:09 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:37:09 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 1a79a710-c1e6-41a4-912f-284ed0d41865 does not exist
Dec  2 06:37:09 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 51cefcb0-c5ad-463a-b797-3c2a4d850028 does not exist
Dec  2 06:37:10 np0005542249 nova_compute[254900]: 2025-12-02 11:37:10.074 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:37:10 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1927: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:37:10 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:37:10 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:37:12 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1928: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Dec  2 06:37:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e505 do_prune osdmap full prune enabled
Dec  2 06:37:12 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e506 e506: 3 total, 3 up, 3 in
Dec  2 06:37:12 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e506: 3 total, 3 up, 3 in
Dec  2 06:37:14 np0005542249 nova_compute[254900]: 2025-12-02 11:37:14.024 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:37:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e506 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:37:14 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1930: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.3 KiB/s wr, 16 op/s
Dec  2 06:37:15 np0005542249 nova_compute[254900]: 2025-12-02 11:37:15.077 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:37:16 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1931: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.3 KiB/s wr, 16 op/s
Dec  2 06:37:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e506 do_prune osdmap full prune enabled
Dec  2 06:37:16 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e507 e507: 3 total, 3 up, 3 in
Dec  2 06:37:16 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e507: 3 total, 3 up, 3 in
Dec  2 06:37:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:37:17 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3447288980' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:37:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:37:17 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3447288980' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:37:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:37:17 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/351794226' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:37:17 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:37:17 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/351794226' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:37:18 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1933: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 3.2 KiB/s wr, 43 op/s
Dec  2 06:37:19 np0005542249 nova_compute[254900]: 2025-12-02 11:37:19.028 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:37:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e507 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:37:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:37:19.852 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:37:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:37:19.853 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:37:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:37:19.853 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:37:20 np0005542249 nova_compute[254900]: 2025-12-02 11:37:20.080 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:37:20 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1934: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 3.9 KiB/s wr, 94 op/s
Dec  2 06:37:22 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1935: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 3.1 KiB/s wr, 88 op/s
Dec  2 06:37:24 np0005542249 nova_compute[254900]: 2025-12-02 11:37:24.031 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:37:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e507 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:37:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e507 do_prune osdmap full prune enabled
Dec  2 06:37:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e508 e508: 3 total, 3 up, 3 in
Dec  2 06:37:24 np0005542249 ceph-mon[75081]: log_channel(cluster) log [DBG] : osdmap e508: 3 total, 3 up, 3 in
Dec  2 06:37:24 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1937: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 2.5 KiB/s wr, 91 op/s
Dec  2 06:37:25 np0005542249 nova_compute[254900]: 2025-12-02 11:37:25.082 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:37:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:37:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:37:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:37:26
Dec  2 06:37:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:37:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:37:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['volumes', 'backups', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'images', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr']
Dec  2 06:37:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:37:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:37:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:37:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:37:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:37:26 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1938: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 2.0 KiB/s wr, 74 op/s
Dec  2 06:37:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:37:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:37:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:37:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:37:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:37:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:37:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:37:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:37:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:37:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:37:28 np0005542249 podman[300047]: 2025-12-02 11:37:28.063815491 +0000 UTC m=+0.127095639 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  2 06:37:28 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1939: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 716 B/s wr, 54 op/s
Dec  2 06:37:29 np0005542249 nova_compute[254900]: 2025-12-02 11:37:29.035 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e508 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:37:29.643344) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675449643393, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 2334, "num_deletes": 267, "total_data_size": 3353024, "memory_usage": 3400000, "flush_reason": "Manual Compaction"}
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675449669911, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 3287259, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36889, "largest_seqno": 39222, "table_properties": {"data_size": 3276366, "index_size": 7067, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 22946, "raw_average_key_size": 21, "raw_value_size": 3254509, "raw_average_value_size": 3021, "num_data_blocks": 307, "num_entries": 1077, "num_filter_entries": 1077, "num_deletions": 267, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764675280, "oldest_key_time": 1764675280, "file_creation_time": 1764675449, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 26713 microseconds, and 13863 cpu microseconds.
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:37:29.669996) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 3287259 bytes OK
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:37:29.670088) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:37:29.672214) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:37:29.672258) EVENT_LOG_v1 {"time_micros": 1764675449672247, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:37:29.672294) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 3342995, prev total WAL file size 3342995, number of live WAL files 2.
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:37:29.674253) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(3210KB)], [77(10092KB)]
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675449674332, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 13622006, "oldest_snapshot_seqno": -1}
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 7181 keys, 11964230 bytes, temperature: kUnknown
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675449744861, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 11964230, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11908864, "index_size": 36310, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17989, "raw_key_size": 180895, "raw_average_key_size": 25, "raw_value_size": 11772822, "raw_average_value_size": 1639, "num_data_blocks": 1448, "num_entries": 7181, "num_filter_entries": 7181, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764672515, "oldest_key_time": 0, "file_creation_time": 1764675449, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e48d9b43-c5ab-4d63-a013-45c19571f3aa", "db_session_id": "FJAG8GF4HHVLV7YXGWEG", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:37:29.745198) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 11964230 bytes
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:37:29.746275) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.9 rd, 169.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 9.9 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 7722, records dropped: 541 output_compression: NoCompression
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:37:29.746290) EVENT_LOG_v1 {"time_micros": 1764675449746282, "job": 44, "event": "compaction_finished", "compaction_time_micros": 70609, "compaction_time_cpu_micros": 27344, "output_level": 6, "num_output_files": 1, "total_output_size": 11964230, "num_input_records": 7722, "num_output_records": 7181, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675449747044, "job": 44, "event": "table_file_deletion", "file_number": 79}
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764675449748832, "job": 44, "event": "table_file_deletion", "file_number": 77}
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:37:29.673883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:37:29.748933) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:37:29.748940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:37:29.748943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:37:29.748945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:37:29 np0005542249 ceph-mon[75081]: rocksdb: (Original Log Time 2025/12/02-11:37:29.748947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  2 06:37:30 np0005542249 nova_compute[254900]: 2025-12-02 11:37:30.085 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:37:30 np0005542249 ovn_controller[153849]: 2025-12-02T11:37:30Z|00288|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec  2 06:37:30 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1940: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 12 op/s
Dec  2 06:37:32 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1941: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:37:34 np0005542249 nova_compute[254900]: 2025-12-02 11:37:34.038 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:37:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e508 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:37:34 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1942: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:37:35 np0005542249 nova_compute[254900]: 2025-12-02 11:37:35.134 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:37:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:37:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:37:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:37:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:37:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  2 06:37:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:37:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894458247867422 of space, bias 1.0, pg target 0.8683374743602266 quantized to 32 (current 32)
Dec  2 06:37:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:37:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  2 06:37:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:37:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Dec  2 06:37:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:37:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:37:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:37:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:37:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:37:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:37:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:37:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:37:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:37:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:37:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:37:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:37:36 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1943: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:37:38 np0005542249 podman[300067]: 2025-12-02 11:37:38.000275007 +0000 UTC m=+0.069931727 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Dec  2 06:37:38 np0005542249 podman[300068]: 2025-12-02 11:37:38.034732097 +0000 UTC m=+0.096495814 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 06:37:38 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1944: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:37:39 np0005542249 nova_compute[254900]: 2025-12-02 11:37:39.041 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:37:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e508 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:37:40 np0005542249 nova_compute[254900]: 2025-12-02 11:37:40.134 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:37:40 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1945: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:37:42 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1946: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:37:44 np0005542249 nova_compute[254900]: 2025-12-02 11:37:44.045 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:37:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e508 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:37:44 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1947: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:37:45 np0005542249 nova_compute[254900]: 2025-12-02 11:37:45.169 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:37:46 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1948: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:37:47 np0005542249 nova_compute[254900]: 2025-12-02 11:37:47.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:37:47 np0005542249 nova_compute[254900]: 2025-12-02 11:37:47.383 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  2 06:37:48 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1949: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:37:49 np0005542249 nova_compute[254900]: 2025-12-02 11:37:49.048 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:37:49 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e508 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:37:50 np0005542249 nova_compute[254900]: 2025-12-02 11:37:50.171 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:37:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  2 06:37:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3929084716' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  2 06:37:50 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  2 06:37:50 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3929084716' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  2 06:37:50 np0005542249 nova_compute[254900]: 2025-12-02 11:37:50.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:37:50 np0005542249 nova_compute[254900]: 2025-12-02 11:37:50.383 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  2 06:37:50 np0005542249 nova_compute[254900]: 2025-12-02 11:37:50.383 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  2 06:37:50 np0005542249 nova_compute[254900]: 2025-12-02 11:37:50.402 254904 DEBUG nova.compute.manager [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  2 06:37:50 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1950: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:37:52 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1951: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:37:54 np0005542249 nova_compute[254900]: 2025-12-02 11:37:54.051 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:37:54 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e508 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:37:54 np0005542249 nova_compute[254900]: 2025-12-02 11:37:54.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:37:54 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1952: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:37:55 np0005542249 nova_compute[254900]: 2025-12-02 11:37:55.208 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:37:55 np0005542249 nova_compute[254900]: 2025-12-02 11:37:55.382 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:37:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:37:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:37:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:37:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:37:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:37:56 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:37:56 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1953: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:37:58 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1954: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:37:58 np0005542249 podman[300118]: 2025-12-02 11:37:58.985464312 +0000 UTC m=+0.066475393 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  2 06:37:59 np0005542249 nova_compute[254900]: 2025-12-02 11:37:59.054 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:37:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e508 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:37:59 np0005542249 nova_compute[254900]: 2025-12-02 11:37:59.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:37:59 np0005542249 nova_compute[254900]: 2025-12-02 11:37:59.413 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:37:59 np0005542249 nova_compute[254900]: 2025-12-02 11:37:59.413 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:37:59 np0005542249 nova_compute[254900]: 2025-12-02 11:37:59.413 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:37:59 np0005542249 nova_compute[254900]: 2025-12-02 11:37:59.414 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  2 06:37:59 np0005542249 nova_compute[254900]: 2025-12-02 11:37:59.415 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:37:59 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:37:59 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/155745507' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:37:59 np0005542249 nova_compute[254900]: 2025-12-02 11:37:59.921 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:38:00 np0005542249 nova_compute[254900]: 2025-12-02 11:38:00.144 254904 WARNING nova.virt.libvirt.driver [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  2 06:38:00 np0005542249 nova_compute[254900]: 2025-12-02 11:38:00.146 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4358MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  2 06:38:00 np0005542249 nova_compute[254900]: 2025-12-02 11:38:00.146 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:38:00 np0005542249 nova_compute[254900]: 2025-12-02 11:38:00.146 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:38:00 np0005542249 nova_compute[254900]: 2025-12-02 11:38:00.210 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:38:00 np0005542249 nova_compute[254900]: 2025-12-02 11:38:00.235 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  2 06:38:00 np0005542249 nova_compute[254900]: 2025-12-02 11:38:00.236 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  2 06:38:00 np0005542249 nova_compute[254900]: 2025-12-02 11:38:00.277 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  2 06:38:00 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1955: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:38:00 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  2 06:38:00 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2996662668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  2 06:38:00 np0005542249 nova_compute[254900]: 2025-12-02 11:38:00.757 254904 DEBUG oslo_concurrency.processutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  2 06:38:00 np0005542249 nova_compute[254900]: 2025-12-02 11:38:00.765 254904 DEBUG nova.compute.provider_tree [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed in ProviderTree for provider: 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  2 06:38:00 np0005542249 nova_compute[254900]: 2025-12-02 11:38:00.800 254904 DEBUG nova.scheduler.client.report [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Inventory has not changed for provider 02b9b0a3-ac9d-4426-baf4-5ebd782a4062 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  2 06:38:00 np0005542249 nova_compute[254900]: 2025-12-02 11:38:00.802 254904 DEBUG nova.compute.resource_tracker [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  2 06:38:00 np0005542249 nova_compute[254900]: 2025-12-02 11:38:00.802 254904 DEBUG oslo_concurrency.lockutils [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:38:02 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1956: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:38:02 np0005542249 nova_compute[254900]: 2025-12-02 11:38:02.798 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:38:02 np0005542249 nova_compute[254900]: 2025-12-02 11:38:02.799 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:38:02 np0005542249 nova_compute[254900]: 2025-12-02 11:38:02.823 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:38:02 np0005542249 nova_compute[254900]: 2025-12-02 11:38:02.823 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:38:03 np0005542249 nova_compute[254900]: 2025-12-02 11:38:03.381 254904 DEBUG oslo_service.periodic_task [None req-ace0d83d-922f-4e7c-880c-577b02d195a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  2 06:38:04 np0005542249 nova_compute[254900]: 2025-12-02 11:38:04.058 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:38:04 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e508 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:38:04 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1957: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:38:05 np0005542249 nova_compute[254900]: 2025-12-02 11:38:05.213 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:38:06 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1958: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:38:08 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1959: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:38:08 np0005542249 podman[300186]: 2025-12-02 11:38:08.985207016 +0000 UTC m=+0.059464094 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  2 06:38:09 np0005542249 podman[300187]: 2025-12-02 11:38:09.049906121 +0000 UTC m=+0.113007759 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  2 06:38:09 np0005542249 nova_compute[254900]: 2025-12-02 11:38:09.061 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:38:09 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e508 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:38:10 np0005542249 nova_compute[254900]: 2025-12-02 11:38:10.214 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:38:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:38:10 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:38:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  2 06:38:10 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:38:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  2 06:38:10 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:38:10 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 0c22b8a1-551d-4ba9-a87b-224337c5adf8 does not exist
Dec  2 06:38:10 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev e273f089-3f99-4db1-b22f-43be15638618 does not exist
Dec  2 06:38:10 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev 1adbf561-bb02-465f-920c-0ed042a448ec does not exist
Dec  2 06:38:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  2 06:38:10 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  2 06:38:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  2 06:38:10 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:38:10 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:38:10 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:38:10 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1960: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:38:10 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  2 06:38:10 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:38:10 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  2 06:38:11 np0005542249 podman[300501]: 2025-12-02 11:38:11.320206384 +0000 UTC m=+0.061676435 container create 3252478b63e159c0716054e371244c5aca2001a29a9f88c6014f3705aedc9312 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_fermat, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:38:11 np0005542249 systemd[1]: Started libpod-conmon-3252478b63e159c0716054e371244c5aca2001a29a9f88c6014f3705aedc9312.scope.
Dec  2 06:38:11 np0005542249 podman[300501]: 2025-12-02 11:38:11.290158703 +0000 UTC m=+0.031628794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:38:11 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:38:11 np0005542249 podman[300501]: 2025-12-02 11:38:11.448298098 +0000 UTC m=+0.189768189 container init 3252478b63e159c0716054e371244c5aca2001a29a9f88c6014f3705aedc9312 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_fermat, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:38:11 np0005542249 podman[300501]: 2025-12-02 11:38:11.461494644 +0000 UTC m=+0.202964695 container start 3252478b63e159c0716054e371244c5aca2001a29a9f88c6014f3705aedc9312 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_fermat, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  2 06:38:11 np0005542249 podman[300501]: 2025-12-02 11:38:11.466942721 +0000 UTC m=+0.208412822 container attach 3252478b63e159c0716054e371244c5aca2001a29a9f88c6014f3705aedc9312 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_fermat, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:38:11 np0005542249 vigilant_fermat[300518]: 167 167
Dec  2 06:38:11 np0005542249 systemd[1]: libpod-3252478b63e159c0716054e371244c5aca2001a29a9f88c6014f3705aedc9312.scope: Deactivated successfully.
Dec  2 06:38:11 np0005542249 podman[300501]: 2025-12-02 11:38:11.471801152 +0000 UTC m=+0.213271233 container died 3252478b63e159c0716054e371244c5aca2001a29a9f88c6014f3705aedc9312 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_fermat, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  2 06:38:11 np0005542249 systemd[1]: var-lib-containers-storage-overlay-29f1f3d9282cad324f1466c103d843e5a5546711f4ebf84b31521c30d208897f-merged.mount: Deactivated successfully.
Dec  2 06:38:11 np0005542249 podman[300501]: 2025-12-02 11:38:11.522846088 +0000 UTC m=+0.264316129 container remove 3252478b63e159c0716054e371244c5aca2001a29a9f88c6014f3705aedc9312 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_fermat, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  2 06:38:11 np0005542249 systemd[1]: libpod-conmon-3252478b63e159c0716054e371244c5aca2001a29a9f88c6014f3705aedc9312.scope: Deactivated successfully.
Dec  2 06:38:11 np0005542249 podman[300543]: 2025-12-02 11:38:11.766648005 +0000 UTC m=+0.050303378 container create 0d2a574ed82e85bd8f90f7101bba544bd4c4f9cf052a5daebab9f894f795a31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:38:11 np0005542249 systemd[1]: Started libpod-conmon-0d2a574ed82e85bd8f90f7101bba544bd4c4f9cf052a5daebab9f894f795a31c.scope.
Dec  2 06:38:11 np0005542249 podman[300543]: 2025-12-02 11:38:11.74867891 +0000 UTC m=+0.032334293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:38:11 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:38:11 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db090eee80df3acb8007404f5b7c5485854226f7fc78d3852efac986ce3cc4cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:38:11 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db090eee80df3acb8007404f5b7c5485854226f7fc78d3852efac986ce3cc4cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:38:11 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db090eee80df3acb8007404f5b7c5485854226f7fc78d3852efac986ce3cc4cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:38:11 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db090eee80df3acb8007404f5b7c5485854226f7fc78d3852efac986ce3cc4cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:38:11 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db090eee80df3acb8007404f5b7c5485854226f7fc78d3852efac986ce3cc4cc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  2 06:38:11 np0005542249 podman[300543]: 2025-12-02 11:38:11.876783015 +0000 UTC m=+0.160438388 container init 0d2a574ed82e85bd8f90f7101bba544bd4c4f9cf052a5daebab9f894f795a31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_elion, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:38:11 np0005542249 podman[300543]: 2025-12-02 11:38:11.904708278 +0000 UTC m=+0.188363671 container start 0d2a574ed82e85bd8f90f7101bba544bd4c4f9cf052a5daebab9f894f795a31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_elion, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:38:11 np0005542249 podman[300543]: 2025-12-02 11:38:11.984848769 +0000 UTC m=+0.268504222 container attach 0d2a574ed82e85bd8f90f7101bba544bd4c4f9cf052a5daebab9f894f795a31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_elion, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  2 06:38:12 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1961: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:38:13 np0005542249 adoring_elion[300559]: --> passed data devices: 0 physical, 3 LVM
Dec  2 06:38:13 np0005542249 adoring_elion[300559]: --> relative data size: 1.0
Dec  2 06:38:13 np0005542249 adoring_elion[300559]: --> All data devices are unavailable
Dec  2 06:38:13 np0005542249 systemd[1]: libpod-0d2a574ed82e85bd8f90f7101bba544bd4c4f9cf052a5daebab9f894f795a31c.scope: Deactivated successfully.
Dec  2 06:38:13 np0005542249 systemd[1]: libpod-0d2a574ed82e85bd8f90f7101bba544bd4c4f9cf052a5daebab9f894f795a31c.scope: Consumed 1.155s CPU time.
Dec  2 06:38:13 np0005542249 podman[300588]: 2025-12-02 11:38:13.17649179 +0000 UTC m=+0.030113444 container died 0d2a574ed82e85bd8f90f7101bba544bd4c4f9cf052a5daebab9f894f795a31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:38:13 np0005542249 systemd[1]: var-lib-containers-storage-overlay-db090eee80df3acb8007404f5b7c5485854226f7fc78d3852efac986ce3cc4cc-merged.mount: Deactivated successfully.
Dec  2 06:38:13 np0005542249 podman[300588]: 2025-12-02 11:38:13.245347107 +0000 UTC m=+0.098968731 container remove 0d2a574ed82e85bd8f90f7101bba544bd4c4f9cf052a5daebab9f894f795a31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_elion, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:38:13 np0005542249 systemd[1]: libpod-conmon-0d2a574ed82e85bd8f90f7101bba544bd4c4f9cf052a5daebab9f894f795a31c.scope: Deactivated successfully.
Dec  2 06:38:14 np0005542249 nova_compute[254900]: 2025-12-02 11:38:14.063 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:38:14 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e508 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:38:14 np0005542249 podman[300741]: 2025-12-02 11:38:14.151492986 +0000 UTC m=+0.069924967 container create f2302bbca543472ad4417e5f3843f1620baff7346994877c0a88bb2f5b15dae8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_jemison, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:38:14 np0005542249 systemd[1]: Started libpod-conmon-f2302bbca543472ad4417e5f3843f1620baff7346994877c0a88bb2f5b15dae8.scope.
Dec  2 06:38:14 np0005542249 podman[300741]: 2025-12-02 11:38:14.123932793 +0000 UTC m=+0.042364834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:38:14 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:38:14 np0005542249 podman[300741]: 2025-12-02 11:38:14.23801476 +0000 UTC m=+0.156446731 container init f2302bbca543472ad4417e5f3843f1620baff7346994877c0a88bb2f5b15dae8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  2 06:38:14 np0005542249 podman[300741]: 2025-12-02 11:38:14.244528455 +0000 UTC m=+0.162960406 container start f2302bbca543472ad4417e5f3843f1620baff7346994877c0a88bb2f5b15dae8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_jemison, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:38:14 np0005542249 podman[300741]: 2025-12-02 11:38:14.248240496 +0000 UTC m=+0.166672467 container attach f2302bbca543472ad4417e5f3843f1620baff7346994877c0a88bb2f5b15dae8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_jemison, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:38:14 np0005542249 beautiful_jemison[300757]: 167 167
Dec  2 06:38:14 np0005542249 systemd[1]: libpod-f2302bbca543472ad4417e5f3843f1620baff7346994877c0a88bb2f5b15dae8.scope: Deactivated successfully.
Dec  2 06:38:14 np0005542249 conmon[300757]: conmon f2302bbca543472ad441 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f2302bbca543472ad4417e5f3843f1620baff7346994877c0a88bb2f5b15dae8.scope/container/memory.events
Dec  2 06:38:14 np0005542249 podman[300741]: 2025-12-02 11:38:14.254075733 +0000 UTC m=+0.172507694 container died f2302bbca543472ad4417e5f3843f1620baff7346994877c0a88bb2f5b15dae8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:38:14 np0005542249 systemd[1]: var-lib-containers-storage-overlay-bf121b10047bf10f8f9a0630c1971553c18cc41ceae755c3c1d72fe561537bd0-merged.mount: Deactivated successfully.
Dec  2 06:38:14 np0005542249 podman[300741]: 2025-12-02 11:38:14.290652499 +0000 UTC m=+0.209084450 container remove f2302bbca543472ad4417e5f3843f1620baff7346994877c0a88bb2f5b15dae8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_jemison, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:38:14 np0005542249 systemd[1]: libpod-conmon-f2302bbca543472ad4417e5f3843f1620baff7346994877c0a88bb2f5b15dae8.scope: Deactivated successfully.
Dec  2 06:38:14 np0005542249 podman[300781]: 2025-12-02 11:38:14.439914095 +0000 UTC m=+0.041333966 container create 1a2d4aa4a75921c35d06928a1eeae170b0db7a15e805c6c37bfa1f0c782d7d6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_taussig, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  2 06:38:14 np0005542249 systemd[1]: Started libpod-conmon-1a2d4aa4a75921c35d06928a1eeae170b0db7a15e805c6c37bfa1f0c782d7d6a.scope.
Dec  2 06:38:14 np0005542249 podman[300781]: 2025-12-02 11:38:14.42380402 +0000 UTC m=+0.025223911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:38:14 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:38:14 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03ee9d2c25010535c0b19bdcd796a6a2c2f11257063f58c0b7c625f743d114f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:38:14 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03ee9d2c25010535c0b19bdcd796a6a2c2f11257063f58c0b7c625f743d114f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:38:14 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03ee9d2c25010535c0b19bdcd796a6a2c2f11257063f58c0b7c625f743d114f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:38:14 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03ee9d2c25010535c0b19bdcd796a6a2c2f11257063f58c0b7c625f743d114f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:38:14 np0005542249 podman[300781]: 2025-12-02 11:38:14.553866168 +0000 UTC m=+0.155286079 container init 1a2d4aa4a75921c35d06928a1eeae170b0db7a15e805c6c37bfa1f0c782d7d6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_taussig, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  2 06:38:14 np0005542249 podman[300781]: 2025-12-02 11:38:14.562350307 +0000 UTC m=+0.163770178 container start 1a2d4aa4a75921c35d06928a1eeae170b0db7a15e805c6c37bfa1f0c782d7d6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_taussig, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:38:14 np0005542249 podman[300781]: 2025-12-02 11:38:14.571894975 +0000 UTC m=+0.173314896 container attach 1a2d4aa4a75921c35d06928a1eeae170b0db7a15e805c6c37bfa1f0c782d7d6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:38:14 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1962: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:38:15 np0005542249 nova_compute[254900]: 2025-12-02 11:38:15.217 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]: {
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:    "0": [
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:        {
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "devices": [
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "/dev/loop3"
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            ],
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "lv_name": "ceph_lv0",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "lv_size": "21470642176",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e72cc75-6117-4faf-a687-17040ed0df80,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "lv_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "name": "ceph_lv0",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "tags": {
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.block_uuid": "J6JKrS-Ay2L-N6PY-JXDk-oC52-VoGt-UEXDeD",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.cluster_name": "ceph",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.crush_device_class": "",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.encrypted": "0",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.osd_fsid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.osd_id": "0",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.type": "block",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.vdo": "0"
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            },
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "type": "block",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "vg_name": "ceph_vg0"
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:        }
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:    ],
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:    "1": [
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:        {
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "devices": [
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "/dev/loop4"
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            ],
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "lv_name": "ceph_lv1",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "lv_size": "21470642176",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=cb22d311-a01e-4327-afb4-565a5b394930,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "lv_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "name": "ceph_lv1",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "tags": {
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.block_uuid": "dUOe4Z-GTbd-qRYF-6FhJ-GQiU-A5BH-LWJriG",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.cluster_name": "ceph",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.crush_device_class": "",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.encrypted": "0",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.osd_fsid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.osd_id": "1",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.type": "block",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.vdo": "0"
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            },
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "type": "block",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "vg_name": "ceph_vg1"
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:        }
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:    ],
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:    "2": [
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:        {
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "devices": [
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "/dev/loop5"
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            ],
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "lv_name": "ceph_lv2",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "lv_size": "21470642176",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95bc4eaa-1a14-59bf-acf2-4b3da055547d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=844c55bd-4f5a-4ef7-af48-77f5584b8079,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "lv_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "name": "ceph_lv2",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "tags": {
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.block_uuid": "H8UriF-dVU7-PJk9-Itvn-tc5u-vJPm-O147ag",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.cephx_lockbox_secret": "",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.cluster_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.cluster_name": "ceph",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.crush_device_class": "",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.encrypted": "0",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.osd_fsid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.osd_id": "2",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.type": "block",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:                "ceph.vdo": "0"
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            },
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "type": "block",
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:            "vg_name": "ceph_vg2"
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:        }
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]:    ]
Dec  2 06:38:15 np0005542249 trusting_taussig[300797]: }
Dec  2 06:38:15 np0005542249 systemd[1]: libpod-1a2d4aa4a75921c35d06928a1eeae170b0db7a15e805c6c37bfa1f0c782d7d6a.scope: Deactivated successfully.
Dec  2 06:38:15 np0005542249 conmon[300797]: conmon 1a2d4aa4a75921c35d06 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1a2d4aa4a75921c35d06928a1eeae170b0db7a15e805c6c37bfa1f0c782d7d6a.scope/container/memory.events
Dec  2 06:38:15 np0005542249 podman[300806]: 2025-12-02 11:38:15.483869312 +0000 UTC m=+0.043742251 container died 1a2d4aa4a75921c35d06928a1eeae170b0db7a15e805c6c37bfa1f0c782d7d6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  2 06:38:15 np0005542249 systemd[1]: var-lib-containers-storage-overlay-03ee9d2c25010535c0b19bdcd796a6a2c2f11257063f58c0b7c625f743d114f0-merged.mount: Deactivated successfully.
Dec  2 06:38:15 np0005542249 podman[300806]: 2025-12-02 11:38:15.543973212 +0000 UTC m=+0.103846061 container remove 1a2d4aa4a75921c35d06928a1eeae170b0db7a15e805c6c37bfa1f0c782d7d6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_taussig, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  2 06:38:15 np0005542249 systemd[1]: libpod-conmon-1a2d4aa4a75921c35d06928a1eeae170b0db7a15e805c6c37bfa1f0c782d7d6a.scope: Deactivated successfully.
Dec  2 06:38:16 np0005542249 podman[300962]: 2025-12-02 11:38:16.423449414 +0000 UTC m=+0.073796342 container create cb488b55b1b61f85d1a62219d00cc34dc0b80dea28af3b9bbdc72d51097e7239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Dec  2 06:38:16 np0005542249 systemd[1]: Started libpod-conmon-cb488b55b1b61f85d1a62219d00cc34dc0b80dea28af3b9bbdc72d51097e7239.scope.
Dec  2 06:38:16 np0005542249 podman[300962]: 2025-12-02 11:38:16.394705318 +0000 UTC m=+0.045052316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:38:16 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:38:16 np0005542249 podman[300962]: 2025-12-02 11:38:16.524439597 +0000 UTC m=+0.174786595 container init cb488b55b1b61f85d1a62219d00cc34dc0b80dea28af3b9bbdc72d51097e7239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_tesla, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:38:16 np0005542249 podman[300962]: 2025-12-02 11:38:16.536094911 +0000 UTC m=+0.186441859 container start cb488b55b1b61f85d1a62219d00cc34dc0b80dea28af3b9bbdc72d51097e7239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  2 06:38:16 np0005542249 podman[300962]: 2025-12-02 11:38:16.540365887 +0000 UTC m=+0.190712835 container attach cb488b55b1b61f85d1a62219d00cc34dc0b80dea28af3b9bbdc72d51097e7239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  2 06:38:16 np0005542249 loving_tesla[300979]: 167 167
Dec  2 06:38:16 np0005542249 systemd[1]: libpod-cb488b55b1b61f85d1a62219d00cc34dc0b80dea28af3b9bbdc72d51097e7239.scope: Deactivated successfully.
Dec  2 06:38:16 np0005542249 podman[300962]: 2025-12-02 11:38:16.545612628 +0000 UTC m=+0.195959576 container died cb488b55b1b61f85d1a62219d00cc34dc0b80dea28af3b9bbdc72d51097e7239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_tesla, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Dec  2 06:38:16 np0005542249 systemd[1]: var-lib-containers-storage-overlay-458312506c08967ae4c7ad3f4c5adb8f877ac8d96ee8b56202c08b65ba794c72-merged.mount: Deactivated successfully.
Dec  2 06:38:16 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1963: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:38:16 np0005542249 podman[300962]: 2025-12-02 11:38:16.606141101 +0000 UTC m=+0.256488039 container remove cb488b55b1b61f85d1a62219d00cc34dc0b80dea28af3b9bbdc72d51097e7239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_tesla, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  2 06:38:16 np0005542249 systemd[1]: libpod-conmon-cb488b55b1b61f85d1a62219d00cc34dc0b80dea28af3b9bbdc72d51097e7239.scope: Deactivated successfully.
Dec  2 06:38:16 np0005542249 podman[301005]: 2025-12-02 11:38:16.850624335 +0000 UTC m=+0.063308879 container create c00ce674ac8e3776d6d6b71f1a07e4148b4ecc2421b1cc3c193f03c4617a4dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_solomon, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:38:16 np0005542249 systemd[1]: Started libpod-conmon-c00ce674ac8e3776d6d6b71f1a07e4148b4ecc2421b1cc3c193f03c4617a4dc2.scope.
Dec  2 06:38:16 np0005542249 podman[301005]: 2025-12-02 11:38:16.828909879 +0000 UTC m=+0.041594463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  2 06:38:16 np0005542249 systemd[1]: Started libcrun container.
Dec  2 06:38:16 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1778cd07b89f3efdacb6bebb5be0e5156ee3680241594b00541e30da33de4c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  2 06:38:16 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1778cd07b89f3efdacb6bebb5be0e5156ee3680241594b00541e30da33de4c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  2 06:38:16 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1778cd07b89f3efdacb6bebb5be0e5156ee3680241594b00541e30da33de4c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  2 06:38:16 np0005542249 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1778cd07b89f3efdacb6bebb5be0e5156ee3680241594b00541e30da33de4c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  2 06:38:16 np0005542249 podman[301005]: 2025-12-02 11:38:16.967783924 +0000 UTC m=+0.180468508 container init c00ce674ac8e3776d6d6b71f1a07e4148b4ecc2421b1cc3c193f03c4617a4dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_solomon, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  2 06:38:16 np0005542249 podman[301005]: 2025-12-02 11:38:16.981962717 +0000 UTC m=+0.194647271 container start c00ce674ac8e3776d6d6b71f1a07e4148b4ecc2421b1cc3c193f03c4617a4dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_solomon, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  2 06:38:16 np0005542249 podman[301005]: 2025-12-02 11:38:16.985904003 +0000 UTC m=+0.198588557 container attach c00ce674ac8e3776d6d6b71f1a07e4148b4ecc2421b1cc3c193f03c4617a4dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_solomon, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  2 06:38:18 np0005542249 quizzical_solomon[301021]: {
Dec  2 06:38:18 np0005542249 quizzical_solomon[301021]:    "7e72cc75-6117-4faf-a687-17040ed0df80": {
Dec  2 06:38:18 np0005542249 quizzical_solomon[301021]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:38:18 np0005542249 quizzical_solomon[301021]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  2 06:38:18 np0005542249 quizzical_solomon[301021]:        "osd_id": 0,
Dec  2 06:38:18 np0005542249 quizzical_solomon[301021]:        "osd_uuid": "7e72cc75-6117-4faf-a687-17040ed0df80",
Dec  2 06:38:18 np0005542249 quizzical_solomon[301021]:        "type": "bluestore"
Dec  2 06:38:18 np0005542249 quizzical_solomon[301021]:    },
Dec  2 06:38:18 np0005542249 quizzical_solomon[301021]:    "844c55bd-4f5a-4ef7-af48-77f5584b8079": {
Dec  2 06:38:18 np0005542249 quizzical_solomon[301021]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:38:18 np0005542249 quizzical_solomon[301021]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  2 06:38:18 np0005542249 quizzical_solomon[301021]:        "osd_id": 2,
Dec  2 06:38:18 np0005542249 quizzical_solomon[301021]:        "osd_uuid": "844c55bd-4f5a-4ef7-af48-77f5584b8079",
Dec  2 06:38:18 np0005542249 quizzical_solomon[301021]:        "type": "bluestore"
Dec  2 06:38:18 np0005542249 quizzical_solomon[301021]:    },
Dec  2 06:38:18 np0005542249 quizzical_solomon[301021]:    "cb22d311-a01e-4327-afb4-565a5b394930": {
Dec  2 06:38:18 np0005542249 quizzical_solomon[301021]:        "ceph_fsid": "95bc4eaa-1a14-59bf-acf2-4b3da055547d",
Dec  2 06:38:18 np0005542249 quizzical_solomon[301021]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  2 06:38:18 np0005542249 quizzical_solomon[301021]:        "osd_id": 1,
Dec  2 06:38:18 np0005542249 quizzical_solomon[301021]:        "osd_uuid": "cb22d311-a01e-4327-afb4-565a5b394930",
Dec  2 06:38:18 np0005542249 quizzical_solomon[301021]:        "type": "bluestore"
Dec  2 06:38:18 np0005542249 quizzical_solomon[301021]:    }
Dec  2 06:38:18 np0005542249 quizzical_solomon[301021]: }
Dec  2 06:38:18 np0005542249 systemd[1]: libpod-c00ce674ac8e3776d6d6b71f1a07e4148b4ecc2421b1cc3c193f03c4617a4dc2.scope: Deactivated successfully.
Dec  2 06:38:18 np0005542249 podman[301005]: 2025-12-02 11:38:18.160765791 +0000 UTC m=+1.373450435 container died c00ce674ac8e3776d6d6b71f1a07e4148b4ecc2421b1cc3c193f03c4617a4dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_solomon, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  2 06:38:18 np0005542249 systemd[1]: libpod-c00ce674ac8e3776d6d6b71f1a07e4148b4ecc2421b1cc3c193f03c4617a4dc2.scope: Consumed 1.188s CPU time.
Dec  2 06:38:18 np0005542249 systemd[1]: var-lib-containers-storage-overlay-a1778cd07b89f3efdacb6bebb5be0e5156ee3680241594b00541e30da33de4c2-merged.mount: Deactivated successfully.
Dec  2 06:38:18 np0005542249 podman[301005]: 2025-12-02 11:38:18.236208736 +0000 UTC m=+1.448893290 container remove c00ce674ac8e3776d6d6b71f1a07e4148b4ecc2421b1cc3c193f03c4617a4dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_solomon, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 06:38:18 np0005542249 systemd[1]: libpod-conmon-c00ce674ac8e3776d6d6b71f1a07e4148b4ecc2421b1cc3c193f03c4617a4dc2.scope: Deactivated successfully.
Dec  2 06:38:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  2 06:38:18 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:38:18 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  2 06:38:18 np0005542249 ceph-mon[75081]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:38:18 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev eb935536-8b3a-4778-ac1e-f39474a0516b does not exist
Dec  2 06:38:18 np0005542249 ceph-mgr[75372]: [progress WARNING root] complete: ev a1e6c555-1763-4ab7-b871-985f5901b54f does not exist
Dec  2 06:38:18 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1964: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:38:19 np0005542249 nova_compute[254900]: 2025-12-02 11:38:19.067 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:38:19 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e508 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:38:19 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:38:19 np0005542249 ceph-mon[75081]: from='mgr.14130 192.168.122.100:0/3685445186' entity='mgr.compute-0.ntxcvs' 
Dec  2 06:38:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:38:19.854 163757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  2 06:38:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:38:19.854 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  2 06:38:19 np0005542249 ovn_metadata_agent[163733]: 2025-12-02 11:38:19.854 163757 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  2 06:38:20 np0005542249 nova_compute[254900]: 2025-12-02 11:38:20.220 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:38:20 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1965: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:38:22 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1966: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:38:22 np0005542249 systemd-logind[787]: New session 52 of user zuul.
Dec  2 06:38:22 np0005542249 systemd[1]: Started Session 52 of User zuul.
Dec  2 06:38:24 np0005542249 nova_compute[254900]: 2025-12-02 11:38:24.070 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:38:24 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e508 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:38:24 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1967: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:38:25 np0005542249 nova_compute[254900]: 2025-12-02 11:38:25.237 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:38:26 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.19173 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 06:38:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:38:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:38:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Optimize plan auto_2025-12-02_11:38:26
Dec  2 06:38:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  2 06:38:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] do_upmap
Dec  2 06:38:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'vms', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'images']
Dec  2 06:38:26 np0005542249 ceph-mgr[75372]: [balancer INFO root] prepared 0/10 changes
Dec  2 06:38:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:38:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:38:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] scanning for idle connections..
Dec  2 06:38:26 np0005542249 ceph-mgr[75372]: [volumes INFO mgr_util] cleaning up connections: []
Dec  2 06:38:26 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1968: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:38:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  2 06:38:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:38:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  2 06:38:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  2 06:38:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:38:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:38:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  2 06:38:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:38:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  2 06:38:26 np0005542249 ceph-mgr[75372]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  2 06:38:26 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.19175 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 06:38:27 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Dec  2 06:38:27 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2065000557' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  2 06:38:28 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1969: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:38:29 np0005542249 nova_compute[254900]: 2025-12-02 11:38:29.075 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:38:29 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e508 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:38:30 np0005542249 podman[301406]: 2025-12-02 11:38:30.055156444 +0000 UTC m=+0.119699448 container health_status 130400eaf961ceaaa203e2cc0a5af0fe03396f0667ec510c1291b1ee03bff193 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  2 06:38:30 np0005542249 nova_compute[254900]: 2025-12-02 11:38:30.238 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:38:30 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1970: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:38:32 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1971: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:38:33 np0005542249 ovs-vsctl[301477]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  2 06:38:34 np0005542249 nova_compute[254900]: 2025-12-02 11:38:34.077 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:38:34 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e508 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:38:34 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1972: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:38:34 np0005542249 virtqemud[254597]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec  2 06:38:35 np0005542249 virtqemud[254597]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec  2 06:38:35 np0005542249 virtqemud[254597]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  2 06:38:35 np0005542249 nova_compute[254900]: 2025-12-02 11:38:35.240 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:38:35 np0005542249 ceph-mds[101614]: mds.cephfs.compute-0.bydekr asok_command: cache status {prefix=cache status} (starting...)
Dec  2 06:38:35 np0005542249 ceph-mds[101614]: mds.cephfs.compute-0.bydekr asok_command: client ls {prefix=client ls} (starting...)
Dec  2 06:38:36 np0005542249 lvm[301840]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  2 06:38:36 np0005542249 lvm[301840]: VG ceph_vg1 finished
Dec  2 06:38:36 np0005542249 lvm[301843]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  2 06:38:36 np0005542249 lvm[301843]: VG ceph_vg2 finished
Dec  2 06:38:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] _maybe_adjust
Dec  2 06:38:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:38:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  2 06:38:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:38:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  2 06:38:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:38:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894458247867422 of space, bias 1.0, pg target 0.8683374743602266 quantized to 32 (current 32)
Dec  2 06:38:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:38:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  2 06:38:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:38:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Dec  2 06:38:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:38:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  2 06:38:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:38:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:38:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:38:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  2 06:38:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:38:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  2 06:38:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:38:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  2 06:38:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  2 06:38:36 np0005542249 ceph-mgr[75372]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  2 06:38:36 np0005542249 lvm[301855]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  2 06:38:36 np0005542249 lvm[301855]: VG ceph_vg0 finished
Dec  2 06:38:36 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.19179 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 06:38:36 np0005542249 ceph-mds[101614]: mds.cephfs.compute-0.bydekr asok_command: damage ls {prefix=damage ls} (starting...)
Dec  2 06:38:36 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1973: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:38:36 np0005542249 ceph-mds[101614]: mds.cephfs.compute-0.bydekr asok_command: dump loads {prefix=dump loads} (starting...)
Dec  2 06:38:36 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.19181 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 06:38:36 np0005542249 ceph-mds[101614]: mds.cephfs.compute-0.bydekr asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec  2 06:38:36 np0005542249 ceph-mds[101614]: mds.cephfs.compute-0.bydekr asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec  2 06:38:37 np0005542249 ceph-mds[101614]: mds.cephfs.compute-0.bydekr asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec  2 06:38:37 np0005542249 ceph-mds[101614]: mds.cephfs.compute-0.bydekr asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec  2 06:38:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Dec  2 06:38:37 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3411786402' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  2 06:38:37 np0005542249 ceph-mds[101614]: mds.cephfs.compute-0.bydekr asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec  2 06:38:37 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.19187 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 06:38:37 np0005542249 ceph-mgr[75372]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  2 06:38:37 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T11:38:37.555+0000 7fc2b9048640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  2 06:38:37 np0005542249 ceph-mds[101614]: mds.cephfs.compute-0.bydekr asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec  2 06:38:37 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  2 06:38:37 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/373212916' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  2 06:38:37 np0005542249 ceph-mds[101614]: mds.cephfs.compute-0.bydekr asok_command: ops {prefix=ops} (starting...)
Dec  2 06:38:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Dec  2 06:38:38 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1513190603' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec  2 06:38:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Dec  2 06:38:38 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1529501815' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec  2 06:38:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec  2 06:38:38 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3358391765' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  2 06:38:38 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1974: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:38:38 np0005542249 ceph-mds[101614]: mds.cephfs.compute-0.bydekr asok_command: session ls {prefix=session ls} (starting...)
Dec  2 06:38:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Dec  2 06:38:38 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2202615071' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec  2 06:38:38 np0005542249 ceph-mds[101614]: mds.cephfs.compute-0.bydekr asok_command: status {prefix=status} (starting...)
Dec  2 06:38:38 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec  2 06:38:38 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3339975890' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  2 06:38:39 np0005542249 nova_compute[254900]: 2025-12-02 11:38:39.080 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  2 06:38:39 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.19201 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 06:38:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e508 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:38:39 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  2 06:38:39 np0005542249 ceph-mon[75081]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 8677 writes, 39K keys, 8677 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 8677 writes, 8677 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1770 writes, 7952 keys, 1770 commit groups, 1.0 writes per commit group, ingest: 10.57 MB, 0.02 MB/s#012Interval WAL: 1770 writes, 1770 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    113.7      0.39              0.18        22    0.018       0      0       0.0       0.0#012  L6      1/0   11.41 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.9    151.6    127.5      1.37              0.67        21    0.065    115K    11K       0.0       0.0#012 Sum      1/0   11.41 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.9    118.0    124.5      1.76              0.85        43    0.041    115K    11K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.9    114.8    117.4      0.49              0.26        10    0.049     35K   2561       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    151.6    127.5      1.37              0.67        21    0.065    115K    11K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    114.4      0.39              0.18        21    0.018       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.0 total, 600.0 interval#012Flush(GB): cumulative 0.043, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.21 GB write, 0.07 MB/s write, 0.20 GB read, 0.07 MB/s read, 1.8 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.09 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560e2b4e71f0#2 capacity: 304.00 MB usage: 24.26 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000255 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1660,23.32 MB,7.67051%) FilterBlock(44,333.61 KB,0.107168%) IndexBlock(44,634.55 KB,0.20384%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  2 06:38:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec  2 06:38:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3006460535' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  2 06:38:39 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.19205 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 06:38:39 np0005542249 podman[302288]: 2025-12-02 11:38:39.714977721 +0000 UTC m=+0.097458550 container health_status 301660b5961629ac564857138dcba46d0947a2a1c7d3debbb9f5976c1df04193 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  2 06:38:39 np0005542249 podman[302290]: 2025-12-02 11:38:39.729096172 +0000 UTC m=+0.117826569 container health_status 5c31229430d6adbfb6e358463fdefc47061e3db1d274ed40ca82e25890f29998 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec  2 06:38:39 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  2 06:38:39 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1930970493' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  2 06:38:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Dec  2 06:38:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2117242158' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  2 06:38:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Dec  2 06:38:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2582870161' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec  2 06:38:40 np0005542249 nova_compute[254900]: 2025-12-02 11:38:40.243 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:38:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Dec  2 06:38:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3941631278' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec  2 06:38:40 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1975: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:38:40 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec  2 06:38:40 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3457131957' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  2 06:38:40 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.19217 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 06:38:40 np0005542249 ceph-mgr[75372]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  2 06:38:40 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T11:38:40.912+0000 7fc2b9048640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  2 06:38:41 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.19219 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 06:38:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Dec  2 06:38:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3978944072' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec  2 06:38:41 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.19223 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 06:38:41 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Dec  2 06:38:41 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/850923591' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec  2 06:38:41 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.19227 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 06:38:42 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.19231 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  2 06:38:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec  2 06:38:42 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1007433185' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  2 06:38:42 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1976: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 90357760 unmapped: 3973120 heap: 94330880 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 90791936 unmapped: 3538944 heap: 94330880 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.967088699s of 10.248430252s, submitted: 93
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 148 ms_handle_reset con 0x55c08548bc00 session 0x55c084fcb4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 90841088 unmapped: 3489792 heap: 94330880 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fb502000/0x0/0x4ffc00000, data 0x163938b/0x171c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 90873856 unmapped: 3457024 heap: 94330880 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 149 ms_handle_reset con 0x55c085549000 session 0x55c0854b4b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138575 data_alloc: 234881024 data_used: 14426112
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 90890240 unmapped: 3440640 heap: 94330880 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 150 ms_handle_reset con 0x55c08774ec00 session 0x55c0873f05a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 90931200 unmapped: 3399680 heap: 94330880 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 151 ms_handle_reset con 0x55c08774f400 session 0x55c087b79c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 151 ms_handle_reset con 0x55c08774f000 session 0x55c0862cef00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 90923008 unmapped: 3407872 heap: 94330880 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb4f7000/0x0/0x4ffc00000, data 0x163e68e/0x1725000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 152 ms_handle_reset con 0x55c08548bc00 session 0x55c085ed4f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 152 ms_handle_reset con 0x55c085549000 session 0x55c0876fe960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 152 ms_handle_reset con 0x55c08774ec00 session 0x55c087252000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 90931200 unmapped: 3399680 heap: 94330880 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 90931200 unmapped: 3399680 heap: 94330880 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb4f6000/0x0/0x4ffc00000, data 0x164024f/0x1727000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb4f6000/0x0/0x4ffc00000, data 0x164024f/0x1727000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147865 data_alloc: 234881024 data_used: 14446592
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 90931200 unmapped: 3399680 heap: 94330880 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb4f6000/0x0/0x4ffc00000, data 0x164024f/0x1727000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb4f6000/0x0/0x4ffc00000, data 0x164024f/0x1727000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 90931200 unmapped: 3399680 heap: 94330880 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96198656 unmapped: 4702208 heap: 100900864 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.940852165s of 10.338023186s, submitted: 137
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96419840 unmapped: 4481024 heap: 100900864 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 95526912 unmapped: 5373952 heap: 100900864 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa97d000/0x0/0x4ffc00000, data 0x21ba24f/0x22a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248011 data_alloc: 234881024 data_used: 14860288
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 95526912 unmapped: 5373952 heap: 100900864 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 95526912 unmapped: 5373952 heap: 100900864 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 95543296 unmapped: 5357568 heap: 100900864 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa979000/0x0/0x4ffc00000, data 0x21bbcb2/0x22a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 95543296 unmapped: 5357568 heap: 100900864 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 95174656 unmapped: 5726208 heap: 100900864 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa977000/0x0/0x4ffc00000, data 0x21becb2/0x22a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246451 data_alloc: 234881024 data_used: 14864384
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 95174656 unmapped: 5726208 heap: 100900864 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 95174656 unmapped: 5726208 heap: 100900864 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 95174656 unmapped: 5726208 heap: 100900864 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c08774f400 session 0x55c0854b45a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 95174656 unmapped: 5726208 heap: 100900864 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa977000/0x0/0x4ffc00000, data 0x21becb2/0x22a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 95174656 unmapped: 5726208 heap: 100900864 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.494656563s of 11.552546501s, submitted: 30
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c08774fc00 session 0x55c085bb12c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c08548bc00 session 0x55c0873bcf00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247804 data_alloc: 234881024 data_used: 15126528
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 95199232 unmapped: 5701632 heap: 100900864 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa977000/0x0/0x4ffc00000, data 0x21becb2/0x22a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 95199232 unmapped: 5701632 heap: 100900864 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 95199232 unmapped: 5701632 heap: 100900864 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa977000/0x0/0x4ffc00000, data 0x21becb2/0x22a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 95199232 unmapped: 5701632 heap: 100900864 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c085549000 session 0x55c085096780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 95215616 unmapped: 5685248 heap: 100900864 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251106 data_alloc: 234881024 data_used: 15126528
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 95215616 unmapped: 5685248 heap: 100900864 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c08774ec00 session 0x55c0871332c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c08774f800 session 0x55c0882592c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 95174656 unmapped: 5726208 heap: 100900864 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c085459000 session 0x55c088258b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c08548b400 session 0x55c0876ff4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa976000/0x0/0x4ffc00000, data 0x21becc2/0x22a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 95191040 unmapped: 5709824 heap: 100900864 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c085549000 session 0x55c085c49a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c08548bc00 session 0x55c0854534a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c08774ec00 session 0x55c0876fe960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c08774f800 session 0x55c087b78d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c08548b400 session 0x55c0873c25a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 6152192 heap: 100900864 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 6152192 heap: 100900864 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144801 data_alloc: 234881024 data_used: 14184448
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 6152192 heap: 100900864 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 6152192 heap: 100900864 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x1595c40/0x167c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.086220741s of 12.422695160s, submitted: 101
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c08548bc00 session 0x55c0876ff860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 6152192 heap: 100900864 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c085549000 session 0x55c087b79860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c08774ec00 session 0x55c087a8d0e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c08774f800 session 0x55c0854b45a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c08548b400 session 0x55c088258b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c08548bc00 session 0x55c085bb12c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96239616 unmapped: 5709824 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96239616 unmapped: 5709824 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1164494 data_alloc: 234881024 data_used: 13922304
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96239616 unmapped: 5709824 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96239616 unmapped: 5709824 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c085549000 session 0x55c0862cc780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c08774ec00 session 0x55c0876ffa40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa253000/0x0/0x4ffc00000, data 0x1744c40/0x182b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96239616 unmapped: 5709824 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c08774f400 session 0x55c0876ffc20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96239616 unmapped: 5709824 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c08548b400 session 0x55c0879d3860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96239616 unmapped: 5709824 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169586 data_alloc: 234881024 data_used: 14540800
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96141312 unmapped: 5808128 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa253000/0x0/0x4ffc00000, data 0x1744c40/0x182b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96370688 unmapped: 5578752 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.964822769s of 10.045549393s, submitted: 24
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c08774ec00 session 0x55c0873f0b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96444416 unmapped: 5505024 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96444416 unmapped: 5505024 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c0861aec00 session 0x55c085f70780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96444416 unmapped: 5505024 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171300 data_alloc: 234881024 data_used: 14708736
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96444416 unmapped: 5505024 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c0856fc800 session 0x55c0873f0b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 ms_handle_reset con 0x55c087b00800 session 0x55c0876ffa40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 154 ms_handle_reset con 0x55c08548b400 session 0x55c0876fef00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fa24f000/0x0/0x4ffc00000, data 0x17467bd/0x182e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96509952 unmapped: 5439488 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 154 ms_handle_reset con 0x55c08548bc00 session 0x55c0879d21e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 154 ms_handle_reset con 0x55c085549000 session 0x55c0873be780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 5513216 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 154 ms_handle_reset con 0x55c0856fc800 session 0x55c0891185a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 155 ms_handle_reset con 0x55c0861aec00 session 0x55c089119c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96501760 unmapped: 5447680 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 155 ms_handle_reset con 0x55c08548b400 session 0x55c089119e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 155 ms_handle_reset con 0x55c08548bc00 session 0x55c0879d2960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96526336 unmapped: 5423104 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159365 data_alloc: 234881024 data_used: 13930496
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96526336 unmapped: 5423104 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 156 ms_handle_reset con 0x55c085549000 session 0x55c0879d3860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fa435000/0x0/0x4ffc00000, data 0x155af6d/0x1646000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 156 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96624640 unmapped: 5324800 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 157 ms_handle_reset con 0x55c0856fc800 session 0x55c0879d32c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96624640 unmapped: 5324800 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 157 ms_handle_reset con 0x55c0861aec00 session 0x55c085ddad20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.555638313s of 10.735566139s, submitted: 56
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 157 ms_handle_reset con 0x55c08548b400 session 0x55c085ddaf00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 5308416 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fa433000/0x0/0x4ffc00000, data 0x155cb06/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 158 handle_osd_map epochs [158,159], i have 158, src has [1,159]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 160 ms_handle_reset con 0x55c08548bc00 session 0x55c085ddb0e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96690176 unmapped: 5259264 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170367 data_alloc: 234881024 data_used: 13934592
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96690176 unmapped: 5259264 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 160 ms_handle_reset con 0x55c085549000 session 0x55c085ddb2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96690176 unmapped: 5259264 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96690176 unmapped: 5259264 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 161 ms_handle_reset con 0x55c0856fc800 session 0x55c085ddb680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 162 ms_handle_reset con 0x55c08774ec00 session 0x55c085ddba40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x1563952/0x1656000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 5251072 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 162 ms_handle_reset con 0x55c08548b400 session 0x55c085ddbc20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 5251072 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 162 ms_handle_reset con 0x55c085549000 session 0x55c088259860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180326 data_alloc: 234881024 data_used: 13955072
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 163 ms_handle_reset con 0x55c08548bc00 session 0x55c085ddbe00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 5251072 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 163 ms_handle_reset con 0x55c0856fc800 session 0x55c0879d21e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 5251072 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 163 ms_handle_reset con 0x55c087b00800 session 0x55c0879d2960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 163 ms_handle_reset con 0x55c0855cbc00 session 0x55c088258f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96731136 unmapped: 5218304 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 163 ms_handle_reset con 0x55c08548b400 session 0x55c0891185a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.343214989s of 10.583092690s, submitted: 78
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 164 ms_handle_reset con 0x55c08548bc00 session 0x55c088259e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96763904 unmapped: 5185536 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0x1567028/0x165a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 164 ms_handle_reset con 0x55c085549000 session 0x55c0879a8f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 164 ms_handle_reset con 0x55c0856fc800 session 0x55c0879a92c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96763904 unmapped: 5185536 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186891 data_alloc: 234881024 data_used: 13971456
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96763904 unmapped: 5185536 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96788480 unmapped: 5160960 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 165 ms_handle_reset con 0x55c08548b400 session 0x55c0879a94a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 165 ms_handle_reset con 0x55c08548bc00 session 0x55c0879a9680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96788480 unmapped: 5160960 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa41d000/0x0/0x4ffc00000, data 0x156a6d8/0x1660000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 165 ms_handle_reset con 0x55c085549000 session 0x55c0879a9860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96788480 unmapped: 5160960 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 165 handle_osd_map epochs [165,166], i have 165, src has [1,166]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96788480 unmapped: 5160960 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa41c000/0x0/0x4ffc00000, data 0x156a6e8/0x1661000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa41c000/0x0/0x4ffc00000, data 0x156a6e8/0x1661000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 166 ms_handle_reset con 0x55c0855cbc00 session 0x55c0879a9a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 166 ms_handle_reset con 0x55c087b00800 session 0x55c0879a9c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192875 data_alloc: 234881024 data_used: 13971456
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96829440 unmapped: 5120000 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96829440 unmapped: 5120000 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa41a000/0x0/0x4ffc00000, data 0x156c157/0x1663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96829440 unmapped: 5120000 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96829440 unmapped: 5120000 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96829440 unmapped: 5120000 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa41a000/0x0/0x4ffc00000, data 0x156c157/0x1663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 166 handle_osd_map epochs [167,167], i have 167, src has [1,167]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.744790077s of 11.818330765s, submitted: 37
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195849 data_alloc: 234881024 data_used: 13971456
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96837632 unmapped: 5111808 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96837632 unmapped: 5111808 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x156dbba/0x1666000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96837632 unmapped: 5111808 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96837632 unmapped: 5111808 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96837632 unmapped: 5111808 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195849 data_alloc: 234881024 data_used: 13971456
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96837632 unmapped: 5111808 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 167 ms_handle_reset con 0x55c08548b400 session 0x55c0879a9e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96845824 unmapped: 5103616 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 167 ms_handle_reset con 0x55c08548bc00 session 0x55c088259860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96845824 unmapped: 5103616 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x156dbca/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96845824 unmapped: 5103616 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 167 ms_handle_reset con 0x55c085549000 session 0x55c088258f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96845824 unmapped: 5103616 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200053 data_alloc: 234881024 data_used: 13971456
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96854016 unmapped: 5095424 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.459310532s of 10.482207298s, submitted: 54
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 167 ms_handle_reset con 0x55c08774e000 session 0x55c085453e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 168 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x156f813/0x166d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96878592 unmapped: 5070848 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 168 ms_handle_reset con 0x55c085459c00 session 0x55c085c485a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 168 ms_handle_reset con 0x55c08548b400 session 0x55c085c49a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 168 handle_osd_map epochs [168,169], i have 168, src has [1,169]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 169 handle_osd_map epochs [169,169], i have 169, src has [1,169]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 169 ms_handle_reset con 0x55c08774fc00 session 0x55c087a8d680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 169 ms_handle_reset con 0x55c08548bc00 session 0x55c087705c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 169 ms_handle_reset con 0x55c085549000 session 0x55c088380000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96935936 unmapped: 5013504 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 169 handle_osd_map epochs [169,170], i have 169, src has [1,170]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 170 heartbeat osd_stat(store_statfs(0x4fa40c000/0x0/0x4ffc00000, data 0x15713ff/0x1671000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 170 ms_handle_reset con 0x55c08774e000 session 0x55c088380780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 170 ms_handle_reset con 0x55c08548b400 session 0x55c08836cb40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 170 ms_handle_reset con 0x55c08548bc00 session 0x55c085c48780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 5251072 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 170 ms_handle_reset con 0x55c085549000 session 0x55c0879d21e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 170 ms_handle_reset con 0x55c08774fc00 session 0x55c087142d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 170 handle_osd_map epochs [171,171], i have 170, src has [1,171]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 171 ms_handle_reset con 0x55c0884ac000 session 0x55c088380000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 171 ms_handle_reset con 0x55c0855cbc00 session 0x55c088259e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96722944 unmapped: 5226496 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 171 ms_handle_reset con 0x55c08548b400 session 0x55c085ddb0e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224822 data_alloc: 234881024 data_used: 13991936
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96747520 unmapped: 5201920 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 171 ms_handle_reset con 0x55c08548bc00 session 0x55c085ddb2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa406000/0x0/0x4ffc00000, data 0x1574b93/0x1678000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96755712 unmapped: 5193728 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96763904 unmapped: 5185536 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 171 ms_handle_reset con 0x55c08774fc00 session 0x55c086ee83c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 171 ms_handle_reset con 0x55c0884ae800 session 0x55c085ddbe00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 171 ms_handle_reset con 0x55c0884ae000 session 0x55c085c485a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 171 ms_handle_reset con 0x55c08548b400 session 0x55c086ee8b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 171 handle_osd_map epochs [171,172], i have 171, src has [1,172]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 172 ms_handle_reset con 0x55c08548bc00 session 0x55c085c49a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 172 ms_handle_reset con 0x55c0855cbc00 session 0x55c0876dc5a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 172 ms_handle_reset con 0x55c085549000 session 0x55c085ddba40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96804864 unmapped: 5144576 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x157672c/0x167b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 172 handle_osd_map epochs [172,173], i have 172, src has [1,173]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96813056 unmapped: 5136384 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 173 ms_handle_reset con 0x55c08548bc00 session 0x55c0876dc780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 173 handle_osd_map epochs [174,174], i have 173, src has [1,174]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 174 ms_handle_reset con 0x55c08548b400 session 0x55c086ee85a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236286 data_alloc: 234881024 data_used: 14012416
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 5079040 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 174 ms_handle_reset con 0x55c085549000 session 0x55c0873bc960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.281872749s of 10.702765465s, submitted: 148
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 5079040 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 174 handle_osd_map epochs [174,175], i have 174, src has [1,175]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 5079040 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 175 ms_handle_reset con 0x55c0855cbc00 session 0x55c087b783c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 175 heartbeat osd_stat(store_statfs(0x4fa3fc000/0x0/0x4ffc00000, data 0x157b879/0x1681000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 5079040 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 175 ms_handle_reset con 0x55c0884ae000 session 0x55c08836d2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 175 heartbeat osd_stat(store_statfs(0x4fa3fc000/0x0/0x4ffc00000, data 0x157b879/0x1681000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 175 ms_handle_reset con 0x55c08548b400 session 0x55c0862301e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 175 ms_handle_reset con 0x55c08548bc00 session 0x55c0876dc960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 175 handle_osd_map epochs [176,176], i have 175, src has [1,176]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 98238464 unmapped: 3710976 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 176 ms_handle_reset con 0x55c085549000 session 0x55c0891192c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 176 ms_handle_reset con 0x55c0855cbc00 session 0x55c088381a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238492 data_alloc: 234881024 data_used: 15077376
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 176 ms_handle_reset con 0x55c0884ae000 session 0x55c088381e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 98254848 unmapped: 3694592 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 176 ms_handle_reset con 0x55c08548b400 session 0x55c088380b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 98263040 unmapped: 3686400 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 98353152 unmapped: 3596288 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 176 ms_handle_reset con 0x55c085549000 session 0x55c0876dcb40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 176 ms_handle_reset con 0x55c0855cbc00 session 0x55c0879a92c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 176 ms_handle_reset con 0x55c0884ae000 session 0x55c0876dcf00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 176 handle_osd_map epochs [177,177], i have 176, src has [1,177]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 3547136 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 177 ms_handle_reset con 0x55c0884aec00 session 0x55c0862cf680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 177 ms_handle_reset con 0x55c08774fc00 session 0x55c087b443c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 177 handle_osd_map epochs [177,178], i have 177, src has [1,178]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 177 handle_osd_map epochs [178,178], i have 178, src has [1,178]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 178 heartbeat osd_stat(store_statfs(0x4f9fe6000/0x0/0x4ffc00000, data 0x157f01b/0x1687000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 178 ms_handle_reset con 0x55c0884ae800 session 0x55c0879a9860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 178 ms_handle_reset con 0x55c08548b400 session 0x55c086231680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 178 ms_handle_reset con 0x55c08548bc00 session 0x55c089119860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 98451456 unmapped: 3497984 heap: 101949440 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 178 ms_handle_reset con 0x55c0884ae000 session 0x55c085ed4f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 178 ms_handle_reset con 0x55c08548b400 session 0x55c085ddb4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305245 data_alloc: 234881024 data_used: 15089664
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99155968 unmapped: 10715136 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99172352 unmapped: 10698752 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 178 ms_handle_reset con 0x55c08774fc00 session 0x55c0852f4d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 178 handle_osd_map epochs [179,179], i have 178, src has [1,179]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.234540939s of 10.339504242s, submitted: 126
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 179 ms_handle_reset con 0x55c08548bc00 session 0x55c0879a8f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 179 ms_handle_reset con 0x55c0884ae800 session 0x55c085096780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99270656 unmapped: 10600448 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99270656 unmapped: 10600448 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 179 handle_osd_map epochs [180,180], i have 179, src has [1,180]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 180 ms_handle_reset con 0x55c08701cc00 session 0x55c0873c4960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 180 heartbeat osd_stat(store_statfs(0x4f9f10000/0x0/0x4ffc00000, data 0x15827c9/0x168c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 180 ms_handle_reset con 0x55c08548b400 session 0x55c0877054a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99295232 unmapped: 10575872 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259615 data_alloc: 234881024 data_used: 15110144
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99295232 unmapped: 10575872 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 180 ms_handle_reset con 0x55c08548bc00 session 0x55c0862cc3c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99319808 unmapped: 10551296 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 180 ms_handle_reset con 0x55c08774fc00 session 0x55c0862cda40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99319808 unmapped: 10551296 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 180 heartbeat osd_stat(store_statfs(0x4f9fde000/0x0/0x4ffc00000, data 0x15843c6/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 180 ms_handle_reset con 0x55c0884ae800 session 0x55c0879d2960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99336192 unmapped: 10534912 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 180 handle_osd_map epochs [181,181], i have 180, src has [1,181]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 181 ms_handle_reset con 0x55c087afe800 session 0x55c0862ce1e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 181 handle_osd_map epochs [181,182], i have 181, src has [1,182]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99336192 unmapped: 10534912 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269864 data_alloc: 234881024 data_used: 15122432
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99336192 unmapped: 10534912 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 182 heartbeat osd_stat(store_statfs(0x4f9fd7000/0x0/0x4ffc00000, data 0x15879b4/0x1695000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 182 ms_handle_reset con 0x55c08548b400 session 0x55c08836c1e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 182 handle_osd_map epochs [182,183], i have 182, src has [1,183]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 183 ms_handle_reset con 0x55c08548bc00 session 0x55c0882581e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99344384 unmapped: 10526720 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.607067108s of 10.937035561s, submitted: 125
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99344384 unmapped: 10526720 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 183 ms_handle_reset con 0x55c08774fc00 session 0x55c08544a1e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 183 ms_handle_reset con 0x55c0884ae800 session 0x55c085097860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 183 heartbeat osd_stat(store_statfs(0x4f9fd7000/0x0/0x4ffc00000, data 0x1589575/0x1697000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99377152 unmapped: 10493952 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99377152 unmapped: 10493952 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269974 data_alloc: 234881024 data_used: 15122432
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 183 ms_handle_reset con 0x55c087b09800 session 0x55c0873bcd20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99377152 unmapped: 10493952 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99377152 unmapped: 10493952 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99368960 unmapped: 10502144 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 183 ms_handle_reset con 0x55c08548b400 session 0x55c0876fed20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 183 heartbeat osd_stat(store_statfs(0x4f9fd6000/0x0/0x4ffc00000, data 0x15895d7/0x1698000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99098624 unmapped: 10772480 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 183 ms_handle_reset con 0x55c08548bc00 session 0x55c0879d23c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 183 handle_osd_map epochs [184,184], i have 183, src has [1,184]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 184 ms_handle_reset con 0x55c08774fc00 session 0x55c0872530e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99131392 unmapped: 10739712 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 184 heartbeat osd_stat(store_statfs(0x4f9fd1000/0x0/0x4ffc00000, data 0x158b1ee/0x169c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280167 data_alloc: 234881024 data_used: 15130624
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99131392 unmapped: 10739712 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 184 handle_osd_map epochs [185,185], i have 184, src has [1,185]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 185 ms_handle_reset con 0x55c0884ae800 session 0x55c0852f5c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99172352 unmapped: 10698752 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 185 ms_handle_reset con 0x55c08701a000 session 0x55c087b790e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 185 handle_osd_map epochs [185,186], i have 185, src has [1,186]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 186 ms_handle_reset con 0x55c08548b400 session 0x55c08506c3c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 186 heartbeat osd_stat(store_statfs(0x4f9fce000/0x0/0x4ffc00000, data 0x158cd6b/0x169f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 186 ms_handle_reset con 0x55c08548bc00 session 0x55c087128b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.791210175s of 10.045490265s, submitted: 85
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99180544 unmapped: 10690560 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 186 heartbeat osd_stat(store_statfs(0x4f9fce000/0x0/0x4ffc00000, data 0x158cd6b/0x169f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 186 handle_osd_map epochs [187,187], i have 186, src has [1,187]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 187 heartbeat osd_stat(store_statfs(0x4f9fcb000/0x0/0x4ffc00000, data 0x158e8da/0x16a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 187 ms_handle_reset con 0x55c08701a000 session 0x55c0873bf2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99188736 unmapped: 10682368 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 187 ms_handle_reset con 0x55c08774fc00 session 0x55c0873f0000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 187 ms_handle_reset con 0x55c0884ae800 session 0x55c0877052c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 187 heartbeat osd_stat(store_statfs(0x4f9fcb000/0x0/0x4ffc00000, data 0x1590449/0x16a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99016704 unmapped: 10854400 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285839 data_alloc: 234881024 data_used: 15065088
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 187 ms_handle_reset con 0x55c08548b400 session 0x55c087253680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99016704 unmapped: 10854400 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 187 heartbeat osd_stat(store_statfs(0x4f9fcb000/0x0/0x4ffc00000, data 0x1590449/0x16a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99016704 unmapped: 10854400 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99016704 unmapped: 10854400 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 187 handle_osd_map epochs [188,188], i have 187, src has [1,188]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 188 ms_handle_reset con 0x55c08548bc00 session 0x55c0872532c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99016704 unmapped: 10854400 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 188 ms_handle_reset con 0x55c08774fc00 session 0x55c085c48780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 188 ms_handle_reset con 0x55c08701a000 session 0x55c0876dd680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 188 heartbeat osd_stat(store_statfs(0x4f9fc5000/0x0/0x4ffc00000, data 0x1592054/0x16a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 188 handle_osd_map epochs [189,189], i have 188, src has [1,189]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99033088 unmapped: 10838016 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 189 ms_handle_reset con 0x55c087b02c00 session 0x55c0876dcd20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 189 ms_handle_reset con 0x55c0871a2400 session 0x55c08836cb40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 189 ms_handle_reset con 0x55c08548b400 session 0x55c087142b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297174 data_alloc: 234881024 data_used: 15081472
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99057664 unmapped: 10813440 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 189 ms_handle_reset con 0x55c08548bc00 session 0x55c087705a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 189 ms_handle_reset con 0x55c08774fc00 session 0x55c08543be00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 99074048 unmapped: 10797056 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 189 handle_osd_map epochs [190,190], i have 189, src has [1,190]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 190 ms_handle_reset con 0x55c0895d0000 session 0x55c085ed5a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 190 ms_handle_reset con 0x55c08701a000 session 0x55c085c481e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 100139008 unmapped: 9732096 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.291968346s of 10.569903374s, submitted: 100
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 190 ms_handle_reset con 0x55c0895d0000 session 0x55c0873be960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 190 ms_handle_reset con 0x55c08548b400 session 0x55c0873bf860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 100163584 unmapped: 9707520 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 190 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x1595616/0x16ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 190 ms_handle_reset con 0x55c08548bc00 session 0x55c086ee81e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 100163584 unmapped: 9707520 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1299941 data_alloc: 234881024 data_used: 15089664
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 190 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x1595616/0x16ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 100163584 unmapped: 9707520 heap: 109871104 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 190 ms_handle_reset con 0x55c0871a2400 session 0x55c0876dd2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 190 ms_handle_reset con 0x55c08774fc00 session 0x55c085f13a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 190 ms_handle_reset con 0x55c08548bc00 session 0x55c087129680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 190 ms_handle_reset con 0x55c08701a000 session 0x55c0871430e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 190 ms_handle_reset con 0x55c0895d0000 session 0x55c0873bd860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 190 ms_handle_reset con 0x55c0895d0400 session 0x55c085ed54a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 190 ms_handle_reset con 0x55c08548b400 session 0x55c0873c23c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 101138432 unmapped: 12410880 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 190 heartbeat osd_stat(store_statfs(0x4f990c000/0x0/0x4ffc00000, data 0x1c4a63f/0x1d62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 101138432 unmapped: 12410880 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 101138432 unmapped: 12410880 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 190 handle_osd_map epochs [190,191], i have 190, src has [1,191]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 101146624 unmapped: 12402688 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363491 data_alloc: 234881024 data_used: 15097856
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 101146624 unmapped: 12402688 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 ms_handle_reset con 0x55c0895d0400 session 0x55c0862cfa40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 heartbeat osd_stat(store_statfs(0x4f9907000/0x0/0x4ffc00000, data 0x1c4c0eb/0x1d66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 101146624 unmapped: 12402688 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 ms_handle_reset con 0x55c08548bc00 session 0x55c08543b2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 101171200 unmapped: 12378112 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 101171200 unmapped: 12378112 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 ms_handle_reset con 0x55c0895d0000 session 0x55c0876d9c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.671701431s of 10.911547661s, submitted: 81
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 ms_handle_reset con 0x55c0895d0800 session 0x55c0876d92c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 102498304 unmapped: 11051008 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 heartbeat osd_stat(store_statfs(0x4f9907000/0x0/0x4ffc00000, data 0x1c4c0db/0x1d65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 ms_handle_reset con 0x55c08548b400 session 0x55c0876d8b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406645 data_alloc: 234881024 data_used: 20131840
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 105496576 unmapped: 8052736 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 105496576 unmapped: 8052736 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 8044544 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 ms_handle_reset con 0x55c08548bc00 session 0x55c0876d85a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 ms_handle_reset con 0x55c0895d0000 session 0x55c088380f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 105521152 unmapped: 8028160 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 heartbeat osd_stat(store_statfs(0x4f9909000/0x0/0x4ffc00000, data 0x1c4c0db/0x1d65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 ms_handle_reset con 0x55c0895d0400 session 0x55c087b44f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 105521152 unmapped: 8028160 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 heartbeat osd_stat(store_statfs(0x4f9909000/0x0/0x4ffc00000, data 0x1c4c0db/0x1d65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 heartbeat osd_stat(store_statfs(0x4f9908000/0x0/0x4ffc00000, data 0x1c4c0eb/0x1d66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407904 data_alloc: 234881024 data_used: 20131840
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 105553920 unmapped: 7995392 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 ms_handle_reset con 0x55c0895d0800 session 0x55c085ed4b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 ms_handle_reset con 0x55c08548b400 session 0x55c0850974a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 105553920 unmapped: 7995392 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 heartbeat osd_stat(store_statfs(0x4f9908000/0x0/0x4ffc00000, data 0x1c4c0db/0x1d65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 105570304 unmapped: 7979008 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 105570304 unmapped: 7979008 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.173526764s of 10.243024826s, submitted: 20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 108183552 unmapped: 5365760 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1454604 data_alloc: 234881024 data_used: 20328448
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 4997120 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 heartbeat osd_stat(store_statfs(0x4f92a7000/0x0/0x4ffc00000, data 0x22ae0db/0x23c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 109010944 unmapped: 4538368 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 ms_handle_reset con 0x55c08548bc00 session 0x55c089119a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 5128192 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 heartbeat osd_stat(store_statfs(0x4f9298000/0x0/0x4ffc00000, data 0x22bc13d/0x23d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 108453888 unmapped: 5095424 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 ms_handle_reset con 0x55c0895d0000 session 0x55c085ddb680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 ms_handle_reset con 0x55c0895d0400 session 0x55c0873bed20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 5054464 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 heartbeat osd_stat(store_statfs(0x4f9298000/0x0/0x4ffc00000, data 0x22bc0db/0x23d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1454656 data_alloc: 234881024 data_used: 20340736
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 5054464 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 ms_handle_reset con 0x55c0895d0c00 session 0x55c0854b5680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 heartbeat osd_stat(store_statfs(0x4f9297000/0x0/0x4ffc00000, data 0x22bc13d/0x23d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 5070848 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 5070848 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 ms_handle_reset con 0x55c08548b400 session 0x55c0854b50e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 108503040 unmapped: 5046272 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 108503040 unmapped: 5046272 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1454528 data_alloc: 234881024 data_used: 20340736
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 108503040 unmapped: 5046272 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.139367104s of 12.447360039s, submitted: 113
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 ms_handle_reset con 0x55c08548bc00 session 0x55c0854b5860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 heartbeat osd_stat(store_statfs(0x4f9299000/0x0/0x4ffc00000, data 0x22bc0db/0x23d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 5038080 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 191 handle_osd_map epochs [191,192], i have 191, src has [1,192]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 192 ms_handle_reset con 0x55c0895d0400 session 0x55c0876dc1e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 108519424 unmapped: 5029888 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9294000/0x0/0x4ffc00000, data 0x22bdcba/0x23d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 192 handle_osd_map epochs [193,193], i have 192, src has [1,193]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 193 ms_handle_reset con 0x55c0895d1000 session 0x55c0862cf4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 193 ms_handle_reset con 0x55c0895d1400 session 0x55c0873c3860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 193 ms_handle_reset con 0x55c0895d0000 session 0x55c085452780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 4988928 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 193 handle_osd_map epochs [193,194], i have 193, src has [1,194]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 194 ms_handle_reset con 0x55c08548b400 session 0x55c085452960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 194 ms_handle_reset con 0x55c08548bc00 session 0x55c0854b8780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 4440064 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1467220 data_alloc: 234881024 data_used: 20348928
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 4440064 heap: 113549312 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 194 ms_handle_reset con 0x55c0895d0400 session 0x55c0854b8b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 194 ms_handle_reset con 0x55c0895d1000 session 0x55c085f710e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 194 ms_handle_reset con 0x55c08548b400 session 0x55c0872521e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 194 heartbeat osd_stat(store_statfs(0x4f8b8e000/0x0/0x4ffc00000, data 0x29c1408/0x2adf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 194 ms_handle_reset con 0x55c08548bc00 session 0x55c0862305a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 194 ms_handle_reset con 0x55c0895d0000 session 0x55c086230d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 109182976 unmapped: 15917056 heap: 125100032 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 194 ms_handle_reset con 0x55c08701a000 session 0x55c08544af00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 194 ms_handle_reset con 0x55c08774fc00 session 0x55c08490d2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 194 ms_handle_reset con 0x55c08774fc00 session 0x55c085ddaf00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 105537536 unmapped: 19562496 heap: 125100032 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 194 ms_handle_reset con 0x55c08548b400 session 0x55c0876dc3c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 194 ms_handle_reset con 0x55c08548bc00 session 0x55c088381e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 17727488 heap: 125100032 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 194 handle_osd_map epochs [194,195], i have 194, src has [1,195]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 17727488 heap: 125100032 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 195 ms_handle_reset con 0x55c08701a000 session 0x55c08634d680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 195 ms_handle_reset con 0x55c0895d0000 session 0x55c087252b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1379198 data_alloc: 234881024 data_used: 15179776
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 107397120 unmapped: 17702912 heap: 125100032 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 195 ms_handle_reset con 0x55c08548b400 session 0x55c087252f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 195 ms_handle_reset con 0x55c08548bc00 session 0x55c0873bf4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 195 heartbeat osd_stat(store_statfs(0x4f98af000/0x0/0x4ffc00000, data 0x1c9de6b/0x1dbd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 195 ms_handle_reset con 0x55c08701a000 session 0x55c0876d9860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 107397120 unmapped: 17702912 heap: 125100032 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.943634033s of 10.143130302s, submitted: 58
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 195 ms_handle_reset con 0x55c08774fc00 session 0x55c0876d8780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 195 handle_osd_map epochs [196,196], i have 195, src has [1,196]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 196 ms_handle_reset con 0x55c0895d1c00 session 0x55c0854b8d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 196 ms_handle_reset con 0x55c0895d1800 session 0x55c087b45860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 17653760 heap: 125100032 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 196 ms_handle_reset con 0x55c08548b400 session 0x55c086ee8d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 13033472 heap: 125100032 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 13033472 heap: 125100032 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 196 handle_osd_map epochs [197,197], i have 196, src has [1,197]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 197 ms_handle_reset con 0x55c08548bc00 session 0x55c08544bc20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 197 ms_handle_reset con 0x55c0895d0000 session 0x55c0854b4780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 197 ms_handle_reset con 0x55c0895d0400 session 0x55c0871423c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1453722 data_alloc: 234881024 data_used: 22536192
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 112123904 unmapped: 12976128 heap: 125100032 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 197 ms_handle_reset con 0x55c08548bc00 session 0x55c08634de00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 197 ms_handle_reset con 0x55c08548b400 session 0x55c086ee94a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 197 ms_handle_reset con 0x55c0895d0000 session 0x55c0873bc3c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 107642880 unmapped: 17457152 heap: 125100032 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 197 heartbeat osd_stat(store_statfs(0x4f9ea6000/0x0/0x4ffc00000, data 0x16a6582/0x17c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 197 ms_handle_reset con 0x55c0895d1800 session 0x55c0854b4d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 197 ms_handle_reset con 0x55c08701a000 session 0x55c0862ce5a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 107642880 unmapped: 17457152 heap: 125100032 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 107642880 unmapped: 17457152 heap: 125100032 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 197 handle_osd_map epochs [197,198], i have 197, src has [1,198]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 107642880 unmapped: 17457152 heap: 125100032 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 198 heartbeat osd_stat(store_statfs(0x4f9ea4000/0x0/0x4ffc00000, data 0x16a7f74/0x17c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353624 data_alloc: 234881024 data_used: 15208448
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 107642880 unmapped: 17457152 heap: 125100032 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 198 ms_handle_reset con 0x55c08548b400 session 0x55c085ed4780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 107651072 unmapped: 17448960 heap: 125100032 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.076057434s of 10.318167686s, submitted: 73
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 198 ms_handle_reset con 0x55c08548bc00 session 0x55c0873f03c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 107175936 unmapped: 17924096 heap: 125100032 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 107175936 unmapped: 17924096 heap: 125100032 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 198 handle_osd_map epochs [199,199], i have 198, src has [1,199]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 199 heartbeat osd_stat(store_statfs(0x4f9a09000/0x0/0x4ffc00000, data 0x1b41fe6/0x1c65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 199 ms_handle_reset con 0x55c0895d0000 session 0x55c0856443c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 199 ms_handle_reset con 0x55c0895d1800 session 0x55c085f71a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 199 ms_handle_reset con 0x55c08774fc00 session 0x55c085645680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 199 ms_handle_reset con 0x55c08548b400 session 0x55c0871430e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 199 ms_handle_reset con 0x55c08774fc00 session 0x55c0850972c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 199 ms_handle_reset con 0x55c08701a000 session 0x55c085453a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 199 ms_handle_reset con 0x55c08548bc00 session 0x55c087142b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 199 ms_handle_reset con 0x55c0895d1400 session 0x55c085452780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 199 ms_handle_reset con 0x55c08548b400 session 0x55c0873c2b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 199 ms_handle_reset con 0x55c08548bc00 session 0x55c0873c3860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 17481728 heap: 125100032 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 199 ms_handle_reset con 0x55c08701a000 session 0x55c0854b5680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 199 ms_handle_reset con 0x55c08774fc00 session 0x55c0854b5860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465936 data_alloc: 234881024 data_used: 15159296
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 199 ms_handle_reset con 0x55c0895d0c00 session 0x55c0873be3c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 199 handle_osd_map epochs [199,200], i have 199, src has [1,200]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 107642880 unmapped: 17457152 heap: 125100032 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 200 ms_handle_reset con 0x55c08548b400 session 0x55c0873be960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 200 ms_handle_reset con 0x55c08701a000 session 0x55c085c48d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 200 heartbeat osd_stat(store_statfs(0x4f9220000/0x0/0x4ffc00000, data 0x2323804/0x244d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 16113664 heap: 125100032 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 200 ms_handle_reset con 0x55c08548bc00 session 0x55c0852f52c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 200 handle_osd_map epochs [201,201], i have 200, src has [1,201]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 201 ms_handle_reset con 0x55c08774fc00 session 0x55c087133c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 201 ms_handle_reset con 0x55c0895d0c00 session 0x55c087132d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 201 ms_handle_reset con 0x55c08548b400 session 0x55c0876fef00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 108937216 unmapped: 16162816 heap: 125100032 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 108937216 unmapped: 16162816 heap: 125100032 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 201 ms_handle_reset con 0x55c08548bc00 session 0x55c0876ffa40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 201 ms_handle_reset con 0x55c0895d1800 session 0x55c0873c3e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 201 ms_handle_reset con 0x55c08774fc00 session 0x55c087b44000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 201 ms_handle_reset con 0x55c0895d0000 session 0x55c0862cc1e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 110125056 unmapped: 19177472 heap: 129302528 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1633522 data_alloc: 234881024 data_used: 15179776
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 110125056 unmapped: 19177472 heap: 129302528 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 202 heartbeat osd_stat(store_statfs(0x4f7e08000/0x0/0x4ffc00000, data 0x373b341/0x3866000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 202 ms_handle_reset con 0x55c088477400 session 0x55c087129680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 202 ms_handle_reset con 0x55c08548b400 session 0x55c085ed4b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 202 ms_handle_reset con 0x55c08701a000 session 0x55c0876feb40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 16695296 heap: 129302528 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.942081451s of 10.458888054s, submitted: 107
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 202 ms_handle_reset con 0x55c088477800 session 0x55c0862314a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 202 ms_handle_reset con 0x55c0895d0000 session 0x55c0873bd4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 16695296 heap: 129302528 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 202 handle_osd_map epochs [203,203], i have 202, src has [1,203]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 203 ms_handle_reset con 0x55c08548bc00 session 0x55c085ed5a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 203 ms_handle_reset con 0x55c08701a000 session 0x55c087132780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 203 ms_handle_reset con 0x55c08548b400 session 0x55c08543a000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 20652032 heap: 129302528 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 203 handle_osd_map epochs [203,204], i have 203, src has [1,204]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 204 ms_handle_reset con 0x55c088477400 session 0x55c085ed4d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 204 ms_handle_reset con 0x55c088477800 session 0x55c0879d25a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 109371392 unmapped: 19931136 heap: 129302528 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 204 ms_handle_reset con 0x55c08548b400 session 0x55c0879d3e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1434682 data_alloc: 234881024 data_used: 15237120
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 109305856 unmapped: 19996672 heap: 129302528 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 204 handle_osd_map epochs [204,205], i have 204, src has [1,205]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 205 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x1b4e1d7/0x1c7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 205 ms_handle_reset con 0x55c08548bc00 session 0x55c0879d3a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 205 ms_handle_reset con 0x55c08701a000 session 0x55c0879d3c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 109314048 unmapped: 19988480 heap: 129302528 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 205 ms_handle_reset con 0x55c08774fc00 session 0x55c087142f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 205 ms_handle_reset con 0x55c088477400 session 0x55c0879d2960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 109338624 unmapped: 19963904 heap: 129302528 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 28262400 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 205 handle_osd_map epochs [206,206], i have 205, src has [1,206]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 206 ms_handle_reset con 0x55c08548bc00 session 0x55c088380d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 206 ms_handle_reset con 0x55c08701a000 session 0x55c08634da40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 110043136 unmapped: 27656192 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 206 ms_handle_reset con 0x55c08774fc00 session 0x55c087253e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 207 heartbeat osd_stat(store_statfs(0x4f71c1000/0x0/0x4ffc00000, data 0x437dcc4/0x44ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 207 ms_handle_reset con 0x55c088477000 session 0x55c085ed54a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1783245 data_alloc: 234881024 data_used: 15257600
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 110149632 unmapped: 27549696 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 208 ms_handle_reset con 0x55c0895d1800 session 0x55c0854534a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 27426816 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.026422501s of 10.013953209s, submitted: 219
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 208 ms_handle_reset con 0x55c0895d1800 session 0x55c0883812c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 110592000 unmapped: 27107328 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 208 ms_handle_reset con 0x55c08548bc00 session 0x55c088259680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 208 ms_handle_reset con 0x55c08701a000 session 0x55c08634c3c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 18628608 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 208 handle_osd_map epochs [209,209], i have 208, src has [1,209]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 209 ms_handle_reset con 0x55c088477000 session 0x55c087132d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 209 ms_handle_reset con 0x55c08774fc00 session 0x55c085096b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 209 ms_handle_reset con 0x55c08774fc00 session 0x55c0873f1860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 18423808 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 209 handle_osd_map epochs [209,210], i have 209, src has [1,210]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 210 ms_handle_reset con 0x55c08701a000 session 0x55c08544a1e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 210 ms_handle_reset con 0x55c08548bc00 session 0x55c0876d8960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 210 ms_handle_reset con 0x55c088477000 session 0x55c08836c3c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 210 ms_handle_reset con 0x55c088476c00 session 0x55c085c48d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 210 heartbeat osd_stat(store_statfs(0x4f167c000/0x0/0x4ffc00000, data 0x9ebaf39/0x9ff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2423964 data_alloc: 234881024 data_used: 15269888
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 26648576 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 210 ms_handle_reset con 0x55c08548bc00 session 0x55c0862cde00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 26910720 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 210 ms_handle_reset con 0x55c08701a000 session 0x55c088380b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 210 handle_osd_map epochs [211,211], i have 210, src has [1,211]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 211 ms_handle_reset con 0x55c088477000 session 0x55c0876d9860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 211 ms_handle_reset con 0x55c08774fc00 session 0x55c0862cf0e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 26828800 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 211 handle_osd_map epochs [212,212], i have 211, src has [1,212]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 212 ms_handle_reset con 0x55c0895d1800 session 0x55c087a8d2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119480320 unmapped: 18219008 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 212 ms_handle_reset con 0x55c08548bc00 session 0x55c0876d8780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 212 handle_osd_map epochs [213,213], i have 212, src has [1,213]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 213 ms_handle_reset con 0x55c08701a000 session 0x55c0876d8000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 213 ms_handle_reset con 0x55c0895d1800 session 0x55c086231680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 26427392 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2758394 data_alloc: 234881024 data_used: 15220736
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 213 heartbeat osd_stat(store_statfs(0x4ede73000/0x0/0x4ffc00000, data 0xd6c1d81/0xd7fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 26427392 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 213 heartbeat osd_stat(store_statfs(0x4ece73000/0x0/0x4ffc00000, data 0xe6c1d81/0xe7fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 111484928 unmapped: 26214400 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 213 handle_osd_map epochs [213,214], i have 213, src has [1,214]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 214 heartbeat osd_stat(store_statfs(0x4ec674000/0x0/0x4ffc00000, data 0xeec1d81/0xeffa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.023032188s of 10.007152557s, submitted: 159
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 25001984 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 121380864 unmapped: 16318464 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 214 handle_osd_map epochs [214,215], i have 214, src has [1,215]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 215 ms_handle_reset con 0x55c08774fc00 session 0x55c0879d34a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 215 ms_handle_reset con 0x55c088477000 session 0x55c087b79680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 215 heartbeat osd_stat(store_statfs(0x4e8e6c000/0x0/0x4ffc00000, data 0x126c5425/0x12800000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 24354816 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3668084 data_alloc: 234881024 data_used: 15233024
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 15761408 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 23994368 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 215 heartbeat osd_stat(store_statfs(0x4e566c000/0x0/0x4ffc00000, data 0x15ec5497/0x16002000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 23863296 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 215 ms_handle_reset con 0x55c088477000 session 0x55c085096960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 215 ms_handle_reset con 0x55c08548bc00 session 0x55c085452b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 215 ms_handle_reset con 0x55c08701a000 session 0x55c088381680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 215 ms_handle_reset con 0x55c08774fc00 session 0x55c087b794a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 215 ms_handle_reset con 0x55c0895d1800 session 0x55c08836c1e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 215 ms_handle_reset con 0x55c0895d1800 session 0x55c0850974a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 215 ms_handle_reset con 0x55c08548bc00 session 0x55c0862cc1e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 215 ms_handle_reset con 0x55c08701a000 session 0x55c089118b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 114073600 unmapped: 23625728 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 215 ms_handle_reset con 0x55c08774fc00 session 0x55c0873be960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 23527424 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 215 handle_osd_map epochs [215,216], i have 215, src has [1,216]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 215 handle_osd_map epochs [216,216], i have 216, src has [1,216]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 216 ms_handle_reset con 0x55c08548b400 session 0x55c0877054a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 216 ms_handle_reset con 0x55c08548b400 session 0x55c0879d32c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 216 heartbeat osd_stat(store_statfs(0x4e27cf000/0x0/0x4ffc00000, data 0x18951519/0x18a8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4007427 data_alloc: 234881024 data_used: 15245312
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 23437312 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 216 ms_handle_reset con 0x55c08548bc00 session 0x55c087705a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 216 ms_handle_reset con 0x55c08701a000 session 0x55c087143c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 216 ms_handle_reset con 0x55c08774fc00 session 0x55c0879d23c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 23339008 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 216 handle_osd_map epochs [217,217], i have 216, src has [1,217]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.603573799s of 10.004977226s, submitted: 157
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 217 ms_handle_reset con 0x55c087afdc00 session 0x55c0879d34a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 217 ms_handle_reset con 0x55c08548b400 session 0x55c0876d8960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 217 ms_handle_reset con 0x55c08548bc00 session 0x55c0876d8000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 217 ms_handle_reset con 0x55c0895d1800 session 0x55c0854b9a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 217 ms_handle_reset con 0x55c08701a000 session 0x55c087132960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17088512 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 217 ms_handle_reset con 0x55c08774fc00 session 0x55c087a8d2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 217 ms_handle_reset con 0x55c08548b400 session 0x55c085645680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 217 ms_handle_reset con 0x55c08548bc00 session 0x55c0879a85a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 217 ms_handle_reset con 0x55c08701a000 session 0x55c085bb12c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 217 ms_handle_reset con 0x55c0895d1800 session 0x55c0873bdc20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 217 ms_handle_reset con 0x55c087b0a000 session 0x55c0854b5860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 217 ms_handle_reset con 0x55c088477000 session 0x55c0879a8960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 217 ms_handle_reset con 0x55c088476800 session 0x55c0876d92c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 24068096 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 217 handle_osd_map epochs [218,218], i have 217, src has [1,218]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 218 ms_handle_reset con 0x55c08548b400 session 0x55c0854b90e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 218 heartbeat osd_stat(store_statfs(0x4f7f29000/0x0/0x4ffc00000, data 0x21f1c70/0x2333000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 25190400 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1596281 data_alloc: 234881024 data_used: 15257600
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 25190400 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 25190400 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 218 handle_osd_map epochs [219,219], i have 218, src has [1,219]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 219 ms_handle_reset con 0x55c08548bc00 session 0x55c086230f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 219 heartbeat osd_stat(store_statfs(0x4f9132000/0x0/0x4ffc00000, data 0x1f67a7d/0x20a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 219 heartbeat osd_stat(store_statfs(0x4f9132000/0x0/0x4ffc00000, data 0x1f67a7d/0x20a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 25182208 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 219 ms_handle_reset con 0x55c08701a000 session 0x55c0873c4780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 25182208 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 25174016 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 220 ms_handle_reset con 0x55c088476800 session 0x55c0876d81e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1636295 data_alloc: 234881024 data_used: 18964480
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 115539968 unmapped: 22159360 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 221 ms_handle_reset con 0x55c088477000 session 0x55c085ddb2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 221 ms_handle_reset con 0x55c0895d1800 session 0x55c0854b5c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 21553152 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.448704720s of 10.025163651s, submitted: 193
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 221 heartbeat osd_stat(store_statfs(0x4f91a8000/0x0/0x4ffc00000, data 0x1f6ce02/0x20b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 21544960 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 222 ms_handle_reset con 0x55c085459c00 session 0x55c08634da40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 222 heartbeat osd_stat(store_statfs(0x4f91a8000/0x0/0x4ffc00000, data 0x1f6cddf/0x20b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 116170752 unmapped: 21528576 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 223 ms_handle_reset con 0x55c087689c00 session 0x55c087133a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 21520384 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1679079 data_alloc: 234881024 data_used: 19013632
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 223 ms_handle_reset con 0x55c085459c00 session 0x55c0871332c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 21520384 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 223 ms_handle_reset con 0x55c088476800 session 0x55c0873bc960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 223 ms_handle_reset con 0x55c088477000 session 0x55c0854b4780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 21454848 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 223 ms_handle_reset con 0x55c0895d1800 session 0x55c087128000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 21454848 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 223 handle_osd_map epochs [223,224], i have 223, src has [1,224]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 21454848 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 224 heartbeat osd_stat(store_statfs(0x4f91a8000/0x0/0x4ffc00000, data 0x1f6ffab/0x20b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,1])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 224 ms_handle_reset con 0x55c087688c00 session 0x55c0871434a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 21454848 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 224 heartbeat osd_stat(store_statfs(0x4f91a2000/0x0/0x4ffc00000, data 0x1f71c44/0x20bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1685629 data_alloc: 234881024 data_used: 19034112
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 21454848 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119947264 unmapped: 17752064 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 224 ms_handle_reset con 0x55c085459c00 session 0x55c087133c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.653691292s of 10.053786278s, submitted: 144
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 18653184 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 224 ms_handle_reset con 0x55c088476800 session 0x55c0871330e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 224 ms_handle_reset con 0x55c088477000 session 0x55c08634da40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 18653184 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 224 ms_handle_reset con 0x55c0895d1800 session 0x55c0854b5e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 224 ms_handle_reset con 0x55c087688000 session 0x55c0854b5c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 18653184 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1749962 data_alloc: 234881024 data_used: 19095552
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 18653184 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 224 heartbeat osd_stat(store_statfs(0x4f8a2e000/0x0/0x4ffc00000, data 0x26dfbd2/0x2827000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 18546688 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 224 ms_handle_reset con 0x55c085459c00 session 0x55c085dda780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 118808576 unmapped: 18890752 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 224 ms_handle_reset con 0x55c088476800 session 0x55c085ddb680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 224 ms_handle_reset con 0x55c088477000 session 0x55c085ddb2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 3095 syncs, 3.47 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5062 writes, 17K keys, 5062 commit groups, 1.0 writes per commit group, ingest: 10.95 MB, 0.02 MB/s#012Interval WAL: 5062 writes, 2214 syncs, 2.29 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 18817024 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 224 handle_osd_map epochs [225,225], i have 224, src has [1,225]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 225 ms_handle_reset con 0x55c0895d1800 session 0x55c0873bd4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 225 ms_handle_reset con 0x55c0855cbc00 session 0x55c0852f4d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 118931456 unmapped: 18767872 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 225 ms_handle_reset con 0x55c0855cbc00 session 0x55c085453680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 225 ms_handle_reset con 0x55c085459c00 session 0x55c0852f5a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 225 heartbeat osd_stat(store_statfs(0x4f8a2f000/0x0/0x4ffc00000, data 0x26e47b1/0x282e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1748406 data_alloc: 234881024 data_used: 19111936
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 18817024 heap: 137699328 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 118898688 unmapped: 23003136 heap: 141901824 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 225 ms_handle_reset con 0x55c088ea5800 session 0x55c08543b4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.647482872s of 10.022741318s, submitted: 63
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 225 ms_handle_reset con 0x55c088ea4c00 session 0x55c0873c4780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 27082752 heap: 146104320 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 225 ms_handle_reset con 0x55c087afcc00 session 0x55c089118d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 27426816 heap: 146104320 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 225 heartbeat osd_stat(store_statfs(0x4f4e2f000/0x0/0x4ffc00000, data 0x62e4813/0x642f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,1,0,1])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 225 ms_handle_reset con 0x55c085459c00 session 0x55c087133a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 225 handle_osd_map epochs [225,226], i have 225, src has [1,226]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 226 ms_handle_reset con 0x55c0855cbc00 session 0x55c0854b4f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 31547392 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 227 ms_handle_reset con 0x55c088ea4c00 session 0x55c0873bed20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 227 ms_handle_reset con 0x55c087afcc00 session 0x55c0854b83c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2949955 data_alloc: 234881024 data_used: 19124224
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 122994688 unmapped: 27312128 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 227 ms_handle_reset con 0x55c08545d000 session 0x55c0856443c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: mgrc ms_handle_reset ms_handle_reset con 0x55c085ce2400
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2127781581
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2127781581,v1:192.168.122.100:6801/2127781581]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: mgrc handle_mgr_configure stats_period=5
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 26959872 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 227 ms_handle_reset con 0x55c087afcc00 session 0x55c085ed4780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 227 handle_osd_map epochs [228,228], i have 227, src has [1,228]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 228 ms_handle_reset con 0x55c085459c00 session 0x55c0876d9680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119300096 unmapped: 31006720 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 228 ms_handle_reset con 0x55c088ea4c00 session 0x55c0854b5680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 228 heartbeat osd_stat(store_statfs(0x4eb98f000/0x0/0x4ffc00000, data 0xf780f6f/0xf8cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 228 handle_osd_map epochs [228,229], i have 228, src has [1,229]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 127795200 unmapped: 22511616 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 229 ms_handle_reset con 0x55c088477000 session 0x55c08543b2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 229 ms_handle_reset con 0x55c088ea5800 session 0x55c087253860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 229 ms_handle_reset con 0x55c085459c00 session 0x55c085453680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 229 ms_handle_reset con 0x55c088476800 session 0x55c0854532c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 229 ms_handle_reset con 0x55c0855cbc00 session 0x55c08836da40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 229 ms_handle_reset con 0x55c087afcc00 session 0x55c085453860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 229 ms_handle_reset con 0x55c088ea4c00 session 0x55c0854b5860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 229 ms_handle_reset con 0x55c088477000 session 0x55c08544bc20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 229 heartbeat osd_stat(store_statfs(0x4e6987000/0x0/0x4ffc00000, data 0x14784af6/0x148d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 29745152 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 229 ms_handle_reset con 0x55c0855cbc00 session 0x55c086231a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 229 ms_handle_reset con 0x55c087afcc00 session 0x55c08506c3c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 229 handle_osd_map epochs [229,230], i have 229, src has [1,230]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1847300 data_alloc: 234881024 data_used: 19140608
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 230 ms_handle_reset con 0x55c085459c00 session 0x55c0854b4780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 31834112 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 230 ms_handle_reset con 0x55c088476800 session 0x55c0854b5c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 118489088 unmapped: 31817728 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 230 ms_handle_reset con 0x55c087afcc00 session 0x55c0852f4d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 230 handle_osd_map epochs [230,231], i have 230, src has [1,231]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 230 handle_osd_map epochs [231,231], i have 231, src has [1,231]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.839793205s of 10.000823975s, submitted: 240
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 231 ms_handle_reset con 0x55c085459c00 session 0x55c087133a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 231 ms_handle_reset con 0x55c088477000 session 0x55c0873bc960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 31973376 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 231 handle_osd_map epochs [232,232], i have 231, src has [1,232]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 232 ms_handle_reset con 0x55c0888d3400 session 0x55c085ddb680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 232 ms_handle_reset con 0x55c0855cbc00 session 0x55c08836c960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 31924224 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 232 handle_osd_map epochs [232,233], i have 232, src has [1,233]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 233 ms_handle_reset con 0x55c085459c00 session 0x55c0876dcf00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 118390784 unmapped: 31916032 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 233 heartbeat osd_stat(store_statfs(0x4f8a18000/0x0/0x4ffc00000, data 0x26f097a/0x2844000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 234 ms_handle_reset con 0x55c088477000 session 0x55c0854b85a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 234 ms_handle_reset con 0x55c087afcc00 session 0x55c08506cb40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 234 ms_handle_reset con 0x55c0888d3400 session 0x55c0876dcb40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1848642 data_alloc: 234881024 data_used: 19161088
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 120963072 unmapped: 29343744 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 120963072 unmapped: 29343744 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 234 ms_handle_reset con 0x55c087b0dc00 session 0x55c0873c2b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 234 ms_handle_reset con 0x55c0895d1800 session 0x55c0854523c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 234 heartbeat osd_stat(store_statfs(0x4f8a10000/0x0/0x4ffc00000, data 0x26f4174/0x284a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 120963072 unmapped: 29343744 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 234 ms_handle_reset con 0x55c08548b400 session 0x55c0876d8b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 234 ms_handle_reset con 0x55c08548bc00 session 0x55c087128b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 234 handle_osd_map epochs [235,235], i have 234, src has [1,235]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 120963072 unmapped: 29343744 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 235 ms_handle_reset con 0x55c085459c00 session 0x55c0862cd680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 235 ms_handle_reset con 0x55c087afcc00 session 0x55c087133860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 235 ms_handle_reset con 0x55c085459c00 session 0x55c0873c43c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 32178176 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 235 ms_handle_reset con 0x55c08548b400 session 0x55c0873be3c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 235 ms_handle_reset con 0x55c08548bc00 session 0x55c0873f14a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1767495 data_alloc: 234881024 data_used: 15396864
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 31924224 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 235 heartbeat osd_stat(store_statfs(0x4f9140000/0x0/0x4ffc00000, data 0x1fc6c3b/0x211e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 31924224 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 31924224 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 235 heartbeat osd_stat(store_statfs(0x4f9140000/0x0/0x4ffc00000, data 0x1fc6c3b/0x211e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 235 handle_osd_map epochs [235,236], i have 235, src has [1,236]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.979068756s of 11.420689583s, submitted: 121
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 118022144 unmapped: 32284672 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 236 handle_osd_map epochs [236,237], i have 236, src has [1,237]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 118022144 unmapped: 32284672 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 237 handle_osd_map epochs [237,238], i have 237, src has [1,238]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1775702 data_alloc: 234881024 data_used: 15343616
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 118046720 unmapped: 32260096 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 239 ms_handle_reset con 0x55c0895d1800 session 0x55c0871330e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 31154176 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 239 ms_handle_reset con 0x55c088477000 session 0x55c0876fef00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 31113216 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 239 ms_handle_reset con 0x55c085459c00 session 0x55c08506cd20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 239 ms_handle_reset con 0x55c08548b400 session 0x55c087133a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 239 ms_handle_reset con 0x55c08548bc00 session 0x55c085645680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 31096832 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 239 ms_handle_reset con 0x55c088477000 session 0x55c085645860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 239 heartbeat osd_stat(store_statfs(0x4f9134000/0x0/0x4ffc00000, data 0x1fcd9cf/0x2128000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 239 ms_handle_reset con 0x55c0895d1800 session 0x55c085ddb680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 239 ms_handle_reset con 0x55c085459c00 session 0x55c08544a780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 239 ms_handle_reset con 0x55c08548b400 session 0x55c08544af00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 239 ms_handle_reset con 0x55c08548bc00 session 0x55c0862cda40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 239 ms_handle_reset con 0x55c088477000 session 0x55c0873c4f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 30941184 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1840533 data_alloc: 234881024 data_used: 15347712
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 30941184 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 30941184 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 30941184 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 30941184 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 239 handle_osd_map epochs [240,240], i have 239, src has [1,240]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.882652283s of 11.191446304s, submitted: 81
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119398400 unmapped: 30908416 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 240 ms_handle_reset con 0x55c0895d1800 session 0x55c0873c3e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 240 heartbeat osd_stat(store_statfs(0x4f8966000/0x0/0x4ffc00000, data 0x279c9df/0x28f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1843987 data_alloc: 234881024 data_used: 15360000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119414784 unmapped: 30892032 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 240 ms_handle_reset con 0x55c08548bc00 session 0x55c0873c2000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28573696 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 240 handle_osd_map epochs [240,241], i have 240, src has [1,241]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 241 ms_handle_reset con 0x55c0888d3400 session 0x55c0871323c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 122339328 unmapped: 27967488 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 242 ms_handle_reset con 0x55c088477000 session 0x55c0854b9a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 122339328 unmapped: 27967488 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 242 heartbeat osd_stat(store_statfs(0x4f895a000/0x0/0x4ffc00000, data 0x27a1b6c/0x2902000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 122339328 unmapped: 27967488 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 242 heartbeat osd_stat(store_statfs(0x4f895a000/0x0/0x4ffc00000, data 0x27a1b6c/0x2902000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 242 handle_osd_map epochs [242,243], i have 242, src has [1,243]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1924042 data_alloc: 234881024 data_used: 19640320
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 243 ms_handle_reset con 0x55c087688800 session 0x55c0879a8780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 243 ms_handle_reset con 0x55c087688400 session 0x55c0876fe780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 122642432 unmapped: 27664384 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 243 ms_handle_reset con 0x55c087688400 session 0x55c0872530e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 243 ms_handle_reset con 0x55c08548bc00 session 0x55c085c49a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 243 ms_handle_reset con 0x55c087688800 session 0x55c085c481e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 243 ms_handle_reset con 0x55c087688000 session 0x55c0862cc5a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 243 ms_handle_reset con 0x55c087b0c800 session 0x55c08836de00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 27631616 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 27631616 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 27631616 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 244 ms_handle_reset con 0x55c08548bc00 session 0x55c085ddbe00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 27631616 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 244 ms_handle_reset con 0x55c087688000 session 0x55c085ddba40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 244 heartbeat osd_stat(store_statfs(0x4f8896000/0x0/0x4ffc00000, data 0x286531a/0x29c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 244 ms_handle_reset con 0x55c087688400 session 0x55c085452f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1925528 data_alloc: 234881024 data_used: 19640320
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 244 heartbeat osd_stat(store_statfs(0x4f8896000/0x0/0x4ffc00000, data 0x286531a/0x29c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 27631616 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.822957993s of 11.179040909s, submitted: 66
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 244 ms_handle_reset con 0x55c087688800 session 0x55c086230d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 27615232 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 244 ms_handle_reset con 0x55c087689800 session 0x55c0871432c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 124346368 unmapped: 25960448 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 244 handle_osd_map epochs [245,245], i have 245, src has [1,245]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 25812992 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 245 ms_handle_reset con 0x55c087689800 session 0x55c0883810e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 245 ms_handle_reset con 0x55c08548bc00 session 0x55c08836cd20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123764736 unmapped: 26542080 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1990401 data_alloc: 234881024 data_used: 20389888
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123764736 unmapped: 26542080 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 245 heartbeat osd_stat(store_statfs(0x4f8205000/0x0/0x4ffc00000, data 0x2ef4dd8/0x3059000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 26468352 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 26468352 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 26468352 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 26468352 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1990561 data_alloc: 234881024 data_used: 20393984
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 26468352 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 245 heartbeat osd_stat(store_statfs(0x4f8205000/0x0/0x4ffc00000, data 0x2ef4dd8/0x3059000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 26451968 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 26451968 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 245 heartbeat osd_stat(store_statfs(0x4f8205000/0x0/0x4ffc00000, data 0x2ef4dd8/0x3059000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 26451968 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 245 heartbeat osd_stat(store_statfs(0x4f8205000/0x0/0x4ffc00000, data 0x2ef4dd8/0x3059000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.884574890s of 13.328641891s, submitted: 91
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 246 ms_handle_reset con 0x55c087688400 session 0x55c085ddb680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 26435584 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2034145 data_alloc: 234881024 data_used: 20422656
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 124452864 unmapped: 25853952 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 246 ms_handle_reset con 0x55c087688800 session 0x55c0862cc5a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 25739264 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 247 handle_osd_map epochs [247,248], i have 247, src has [1,248]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 248 ms_handle_reset con 0x55c087689c00 session 0x55c087253680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 124624896 unmapped: 25681920 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 248 heartbeat osd_stat(store_statfs(0x4f7d0d000/0x0/0x4ffc00000, data 0x33e60b1/0x354f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 248 ms_handle_reset con 0x55c08548bc00 session 0x55c087253860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 124624896 unmapped: 25681920 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 248 handle_osd_map epochs [248,249], i have 248, src has [1,249]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 249 ms_handle_reset con 0x55c087688400 session 0x55c0876ff4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 249 ms_handle_reset con 0x55c087688800 session 0x55c0876ff680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 25632768 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 249 handle_osd_map epochs [249,250], i have 249, src has [1,250]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 250 ms_handle_reset con 0x55c087689800 session 0x55c0876fe780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 250 heartbeat osd_stat(store_statfs(0x4f7d0c000/0x0/0x4ffc00000, data 0x33e7c20/0x3551000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2049107 data_alloc: 234881024 data_used: 20430848
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 25624576 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 250 ms_handle_reset con 0x55c087688c00 session 0x55c0854b8000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 250 ms_handle_reset con 0x55c08548bc00 session 0x55c0854b9a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 25624576 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 25600000 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 25600000 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 250 ms_handle_reset con 0x55c087688000 session 0x55c0854b5680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 250 ms_handle_reset con 0x55c085459c00 session 0x55c0873c25a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 250 ms_handle_reset con 0x55c08548b400 session 0x55c085c48780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.180858612s of 10.405808449s, submitted: 89
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 25600000 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 251 heartbeat osd_stat(store_statfs(0x4f7d06000/0x0/0x4ffc00000, data 0x33eb3d2/0x3557000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 251 ms_handle_reset con 0x55c087688400 session 0x55c085c48960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 251 ms_handle_reset con 0x55c085459c00 session 0x55c0871332c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1887343 data_alloc: 234881024 data_used: 12349440
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 118890496 unmapped: 31416320 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 118890496 unmapped: 31416320 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 118890496 unmapped: 31416320 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 251 heartbeat osd_stat(store_statfs(0x4f8b64000/0x0/0x4ffc00000, data 0x258e424/0x26fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 251 handle_osd_map epochs [251,252], i have 251, src has [1,252]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 118906880 unmapped: 31399936 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 252 ms_handle_reset con 0x55c08548b400 session 0x55c08544a780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 252 ms_handle_reset con 0x55c08548bc00 session 0x55c08544af00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 252 heartbeat osd_stat(store_statfs(0x4f9440000/0x0/0x4ffc00000, data 0x1cb0e77/0x1e1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 118906880 unmapped: 31399936 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1825352 data_alloc: 234881024 data_used: 12337152
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 118923264 unmapped: 31383552 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 253 ms_handle_reset con 0x55c087688000 session 0x55c0871430e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 253 ms_handle_reset con 0x55c085548800 session 0x55c085bb0780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 118931456 unmapped: 31375360 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 253 ms_handle_reset con 0x55c08548b400 session 0x55c085ed43c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 31244288 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 254 ms_handle_reset con 0x55c085548800 session 0x55c08544b2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 254 ms_handle_reset con 0x55c08548bc00 session 0x55c087704f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 254 ms_handle_reset con 0x55c087688000 session 0x55c086230d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 254 ms_handle_reset con 0x55c087688800 session 0x55c0862301e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 254 ms_handle_reset con 0x55c08548b400 session 0x55c086231a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 254 ms_handle_reset con 0x55c08548bc00 session 0x55c085f710e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 31211520 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 254 handle_osd_map epochs [254,255], i have 254, src has [1,255]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 254 handle_osd_map epochs [255,255], i have 255, src has [1,255]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 255 ms_handle_reset con 0x55c085548800 session 0x55c0873be3c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119103488 unmapped: 31203328 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 255 heartbeat osd_stat(store_statfs(0x4f9419000/0x0/0x4ffc00000, data 0x1cd415e/0x1e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 255 heartbeat osd_stat(store_statfs(0x4f9419000/0x0/0x4ffc00000, data 0x1cd415e/0x1e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1836845 data_alloc: 234881024 data_used: 12337152
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119103488 unmapped: 31203328 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 255 ms_handle_reset con 0x55c087688000 session 0x55c087252d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 255 ms_handle_reset con 0x55c087689800 session 0x55c087132d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119103488 unmapped: 31203328 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 255 ms_handle_reset con 0x55c087689800 session 0x55c087133c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.866715431s of 13.228395462s, submitted: 83
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 255 ms_handle_reset con 0x55c08548b400 session 0x55c0879d2960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119447552 unmapped: 30859264 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119455744 unmapped: 30851072 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 heartbeat osd_stat(store_statfs(0x4f93eb000/0x0/0x4ffc00000, data 0x1cffbd1/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119554048 unmapped: 30752768 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1847920 data_alloc: 234881024 data_used: 12394496
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119554048 unmapped: 30752768 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119554048 unmapped: 30752768 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c0895d0800 session 0x55c0876d8f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c0895d1000 session 0x55c0876d9680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c08774f400 session 0x55c0876d85a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c08548b400 session 0x55c0876d8000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c087689800 session 0x55c0876d92c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 heartbeat osd_stat(store_statfs(0x4f93eb000/0x0/0x4ffc00000, data 0x1cffbd1/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 30662656 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 30662656 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 heartbeat osd_stat(store_statfs(0x4f9363000/0x0/0x4ffc00000, data 0x1d88bd1/0x1efb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c087688000 session 0x55c087704f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 30662656 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1856828 data_alloc: 234881024 data_used: 12652544
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c088477000 session 0x55c087253c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c0888d3400 session 0x55c085452960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 30662656 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c08548b400 session 0x55c0876d81e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c087688000 session 0x55c0876d9860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 31096832 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 heartbeat osd_stat(store_statfs(0x4f94bf000/0x0/0x4ffc00000, data 0x181cbae/0x198e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 31096832 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119300096 unmapped: 31006720 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119341056 unmapped: 30965760 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 heartbeat osd_stat(store_statfs(0x4f94bf000/0x0/0x4ffc00000, data 0x181cbae/0x198e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1815160 data_alloc: 234881024 data_used: 12685312
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 119341056 unmapped: 30965760 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.924436569s of 13.314795494s, submitted: 127
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c0895d0800 session 0x55c0854b5680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 122478592 unmapped: 27828224 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 121741312 unmapped: 28565504 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 121741312 unmapped: 28565504 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 heartbeat osd_stat(store_statfs(0x4f8e69000/0x0/0x4ffc00000, data 0x1e73bae/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 121741312 unmapped: 28565504 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 heartbeat osd_stat(store_statfs(0x4f8e69000/0x0/0x4ffc00000, data 0x1e73bae/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1869378 data_alloc: 234881024 data_used: 12533760
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 121741312 unmapped: 28565504 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 121741312 unmapped: 28565504 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 121741312 unmapped: 28565504 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 121741312 unmapped: 28565504 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 heartbeat osd_stat(store_statfs(0x4f8e69000/0x0/0x4ffc00000, data 0x1e73bae/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 heartbeat osd_stat(store_statfs(0x4f8e69000/0x0/0x4ffc00000, data 0x1e73bae/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 28459008 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1923194 data_alloc: 234881024 data_used: 12750848
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123756544 unmapped: 26550272 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.192254066s of 10.496803284s, submitted: 67
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 26411008 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 heartbeat osd_stat(store_statfs(0x4f879e000/0x0/0x4ffc00000, data 0x2536bae/0x26a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 26411008 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.19233 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 26411008 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 26411008 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1933390 data_alloc: 234881024 data_used: 12570624
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 26411008 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c0895d1000 session 0x55c087253680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 26402816 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c08774e000 session 0x55c087253860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c08774e000 session 0x55c0862cc5a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 heartbeat osd_stat(store_statfs(0x4f87a2000/0x0/0x4ffc00000, data 0x2538c72/0x26ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c08548b400 session 0x55c085ddb680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 26460160 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c087688000 session 0x55c08836cd20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 26460160 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 26460160 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c0895d0800 session 0x55c087a8c000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1932815 data_alloc: 234881024 data_used: 12570624
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 26460160 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c0895d1000 session 0x55c087a8d0e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.784958839s of 10.002567291s, submitted: 54
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c08548b400 session 0x55c087a8c780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 heartbeat osd_stat(store_statfs(0x4f87a2000/0x0/0x4ffc00000, data 0x2538c10/0x26ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 heartbeat osd_stat(store_statfs(0x4f87a3000/0x0/0x4ffc00000, data 0x2538c10/0x26ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 26435584 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c087688000 session 0x55c0854b83c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 26435584 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c08774e000 session 0x55c087253a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 26411008 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 26411008 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 heartbeat osd_stat(store_statfs(0x4f87a3000/0x0/0x4ffc00000, data 0x2539bae/0x26ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c08774f000 session 0x55c0852f52c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1930702 data_alloc: 234881024 data_used: 12570624
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 124952576 unmapped: 25354240 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 26402816 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c0895d0800 session 0x55c08634d680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 26402816 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c08548bc00 session 0x55c085ed5c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c085548800 session 0x55c085453a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 26402816 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c08548b400 session 0x55c0854b4b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123158528 unmapped: 27148288 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1870752 data_alloc: 234881024 data_used: 12414976
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123158528 unmapped: 27148288 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 heartbeat osd_stat(store_statfs(0x4f8e83000/0x0/0x4ffc00000, data 0x1e5abde/0x1fcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 heartbeat osd_stat(store_statfs(0x4f8e83000/0x0/0x4ffc00000, data 0x1e5abde/0x1fcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123158528 unmapped: 27148288 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123158528 unmapped: 27148288 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.443594933s of 12.575583458s, submitted: 44
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c087688000 session 0x55c0876dc1e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123150336 unmapped: 27156480 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 ms_handle_reset con 0x55c08774e000 session 0x55c0876dc780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123150336 unmapped: 27156480 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1869420 data_alloc: 234881024 data_used: 12410880
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123150336 unmapped: 27156480 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123174912 unmapped: 27131904 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 257 ms_handle_reset con 0x55c08548bc00 session 0x55c0854b5c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 257 heartbeat osd_stat(store_statfs(0x4f8e84000/0x0/0x4ffc00000, data 0x1e5ab3c/0x1fca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123215872 unmapped: 27090944 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123232256 unmapped: 27074560 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 257 ms_handle_reset con 0x55c087688000 session 0x55c0879d3680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 257 handle_osd_map epochs [257,258], i have 257, src has [1,258]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 257 handle_osd_map epochs [258,258], i have 258, src has [1,258]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 258 ms_handle_reset con 0x55c085548800 session 0x55c0854b4000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 258 heartbeat osd_stat(store_statfs(0x4f8e81000/0x0/0x4ffc00000, data 0x1e5c6b9/0x1fcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 258 heartbeat osd_stat(store_statfs(0x4f8e7d000/0x0/0x4ffc00000, data 0x1e5e236/0x1fd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 258 ms_handle_reset con 0x55c087b11c00 session 0x55c089119a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 27049984 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 259 ms_handle_reset con 0x55c08701c000 session 0x55c0852f5c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1882008 data_alloc: 234881024 data_used: 12427264
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 27000832 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 259 ms_handle_reset con 0x55c085548800 session 0x55c087133860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 260 ms_handle_reset con 0x55c08701c000 session 0x55c085bb1c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 260 ms_handle_reset con 0x55c087b11c00 session 0x55c0873f1680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123215872 unmapped: 27090944 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 261 ms_handle_reset con 0x55c087a14c00 session 0x55c085c48d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 261 ms_handle_reset con 0x55c087688000 session 0x55c0873f10e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 261 ms_handle_reset con 0x55c08548bc00 session 0x55c08543a000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 261 ms_handle_reset con 0x55c085548800 session 0x55c0873c30e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123265024 unmapped: 27041792 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 262 ms_handle_reset con 0x55c08701c000 session 0x55c0852f5a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 262 heartbeat osd_stat(store_statfs(0x4f9e93000/0x0/0x4ffc00000, data 0x1e636d3/0x1fd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 262 heartbeat osd_stat(store_statfs(0x4f9e93000/0x0/0x4ffc00000, data 0x1e636d3/0x1fd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 26943488 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.284187317s of 10.707715988s, submitted: 131
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 262 ms_handle_reset con 0x55c08949cc00 session 0x55c085f134a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 262 ms_handle_reset con 0x55c08949d000 session 0x55c0871430e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 262 ms_handle_reset con 0x55c08548b400 session 0x55c0873f1e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 262 heartbeat osd_stat(store_statfs(0x4f9e8f000/0x0/0x4ffc00000, data 0x1e652a4/0x1fdc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 18505728 heap: 150306816 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 262 ms_handle_reset con 0x55c087689800 session 0x55c0879d2000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 262 ms_handle_reset con 0x55c088477000 session 0x55c0852f4000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2238285 data_alloc: 234881024 data_used: 12439552
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 127737856 unmapped: 47783936 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 128155648 unmapped: 47366144 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 262 ms_handle_reset con 0x55c08949c800 session 0x55c085f13680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 50462720 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 263 ms_handle_reset con 0x55c08949c000 session 0x55c085f12960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 129269760 unmapped: 46252032 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 263 heartbeat osd_stat(store_statfs(0x4f01dd000/0x0/0x4ffc00000, data 0xbb17d07/0xbc90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 129286144 unmapped: 46235648 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 263 heartbeat osd_stat(store_statfs(0x4ee5db000/0x0/0x4ffc00000, data 0xd717d79/0xd892000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [0,0,0,0,0,0,1,2])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528751 data_alloc: 234881024 data_used: 11862016
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 125280256 unmapped: 50241536 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 137977856 unmapped: 37543936 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 142327808 unmapped: 33193984 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 263 ms_handle_reset con 0x55c08949d000 session 0x55c0862301e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 263 ms_handle_reset con 0x55c087688000 session 0x55c0876ff4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 125673472 unmapped: 49848320 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 125681664 unmapped: 49840128 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.521252155s of 10.721201897s, submitted: 114
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 264 heartbeat osd_stat(store_statfs(0x4e55dc000/0x0/0x4ffc00000, data 0x16717d79/0x16892000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [0,0,1])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 264 ms_handle_reset con 0x55c08548bc00 session 0x55c08544a1e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4195422 data_alloc: 234881024 data_used: 11874304
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 48742400 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 264 handle_osd_map epochs [264,265], i have 264, src has [1,265]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 265 ms_handle_reset con 0x55c08548b400 session 0x55c087143a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 265 ms_handle_reset con 0x55c087689800 session 0x55c0876dde00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 126812160 unmapped: 48709632 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 265 ms_handle_reset con 0x55c08548bc00 session 0x55c086ee83c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 265 ms_handle_reset con 0x55c08548b400 session 0x55c0879d2f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 265 ms_handle_reset con 0x55c087689800 session 0x55c0883805a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 48701440 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 48701440 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 265 ms_handle_reset con 0x55c087688000 session 0x55c0876d94a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 265 ms_handle_reset con 0x55c08949c800 session 0x55c0879d2780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 266 ms_handle_reset con 0x55c08548b400 session 0x55c0876d9e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 126844928 unmapped: 48676864 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 266 ms_handle_reset con 0x55c087688000 session 0x55c0873c34a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 266 heartbeat osd_stat(store_statfs(0x4e55d4000/0x0/0x4ffc00000, data 0x1671d00a/0x16899000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 266 handle_osd_map epochs [266,267], i have 266, src has [1,267]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4205772 data_alloc: 234881024 data_used: 11874304
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 267 ms_handle_reset con 0x55c087689800 session 0x55c085c48780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 267 ms_handle_reset con 0x55c08548bc00 session 0x55c0854b8780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 267 ms_handle_reset con 0x55c08949c800 session 0x55c08543b2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 48644096 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 48644096 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 267 ms_handle_reset con 0x55c08548b400 session 0x55c087a8dc20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 268 heartbeat osd_stat(store_statfs(0x4e55ce000/0x0/0x4ffc00000, data 0x1671f29d/0x1689e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 48619520 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 268 ms_handle_reset con 0x55c087689800 session 0x55c0883814a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 268 ms_handle_reset con 0x55c08949cc00 session 0x55c0873be3c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 268 ms_handle_reset con 0x55c08949d000 session 0x55c087a8d4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 268 ms_handle_reset con 0x55c087688000 session 0x55c085453860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 268 ms_handle_reset con 0x55c08548bc00 session 0x55c085c49a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 126926848 unmapped: 48594944 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 270 ms_handle_reset con 0x55c08548b400 session 0x55c087253e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 126992384 unmapped: 48529408 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 270 ms_handle_reset con 0x55c08949d000 session 0x55c0876d9e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 270 ms_handle_reset con 0x55c08949cc00 session 0x55c08506d680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.818608284s of 10.102108955s, submitted: 99
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4416520 data_alloc: 234881024 data_used: 11882496
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 270 handle_osd_map epochs [270,271], i have 270, src has [1,271]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 270 handle_osd_map epochs [271,271], i have 271, src has [1,271]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 126664704 unmapped: 48857088 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 271 ms_handle_reset con 0x55c087689800 session 0x55c0879d25a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 271 handle_osd_map epochs [271,272], i have 271, src has [1,272]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 48619520 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 272 ms_handle_reset con 0x55c08701c000 session 0x55c0871425a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 132243456 unmapped: 43278336 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 272 ms_handle_reset con 0x55c08548b400 session 0x55c0854b8b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 273 heartbeat osd_stat(store_statfs(0x4df5be000/0x0/0x4ffc00000, data 0x1c727dc2/0x1c8b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 136871936 unmapped: 38649856 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 273 ms_handle_reset con 0x55c08949cc00 session 0x55c0872530e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 273 ms_handle_reset con 0x55c08548bc00 session 0x55c0872523c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 137256960 unmapped: 38264832 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5438587 data_alloc: 234881024 data_used: 11911168
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 273 ms_handle_reset con 0x55c08949d000 session 0x55c085f13860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 38100992 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 273 ms_handle_reset con 0x55c08548b400 session 0x55c0879d3a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 273 heartbeat osd_stat(store_statfs(0x4d9dbb000/0x0/0x4ffc00000, data 0x21f29977/0x220b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 129196032 unmapped: 46325760 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 273 ms_handle_reset con 0x55c08701c000 session 0x55c0879d3c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 133791744 unmapped: 41730048 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 274 ms_handle_reset con 0x55c08548bc00 session 0x55c089118960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 274 handle_osd_map epochs [274,275], i have 274, src has [1,275]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 141189120 unmapped: 34332672 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 275 ms_handle_reset con 0x55c087a14c00 session 0x55c087b44d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 275 ms_handle_reset con 0x55c08949cc00 session 0x55c0873bed20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 275 ms_handle_reset con 0x55c088477000 session 0x55c085645680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 275 ms_handle_reset con 0x55c085548800 session 0x55c0879d2780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 275 ms_handle_reset con 0x55c087b11c00 session 0x55c0873c5a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 46841856 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.041857243s of 10.022785187s, submitted: 143
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 275 ms_handle_reset con 0x55c08548b400 session 0x55c087143c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 275 ms_handle_reset con 0x55c08548bc00 session 0x55c086230f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 275 ms_handle_reset con 0x55c087a14c00 session 0x55c085f13680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6283766 data_alloc: 234881024 data_used: 11919360
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 275 handle_osd_map epochs [275,276], i have 275, src has [1,276]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 276 ms_handle_reset con 0x55c08548b400 session 0x55c0883812c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 276 ms_handle_reset con 0x55c08701c000 session 0x55c08544af00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 46784512 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 276 ms_handle_reset con 0x55c087b11c00 session 0x55c08543a5a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 128753664 unmapped: 46768128 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 277 ms_handle_reset con 0x55c085548800 session 0x55c089118780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 277 ms_handle_reset con 0x55c08548b400 session 0x55c087a8c3c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 277 heartbeat osd_stat(store_statfs(0x4d31b3000/0x0/0x4ffc00000, data 0x28b30475/0x28cbb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 277 ms_handle_reset con 0x55c088477000 session 0x55c087142f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 128753664 unmapped: 46768128 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 128761856 unmapped: 46759936 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 278 ms_handle_reset con 0x55c085548800 session 0x55c0876ff0e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 278 ms_handle_reset con 0x55c08701c000 session 0x55c087132960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 128786432 unmapped: 46735360 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6292056 data_alloc: 234881024 data_used: 11935744
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 278 ms_handle_reset con 0x55c087a14c00 session 0x55c087a8d860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 128794624 unmapped: 46727168 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 278 ms_handle_reset con 0x55c08548b400 session 0x55c085c481e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 43245568 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 278 ms_handle_reset con 0x55c085548800 session 0x55c0854b5e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 43245568 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 278 heartbeat osd_stat(store_statfs(0x4d31b0000/0x0/0x4ffc00000, data 0x28b320b4/0x28cbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 278 handle_osd_map epochs [278,279], i have 278, src has [1,279]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 132300800 unmapped: 43220992 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 132300800 unmapped: 43220992 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.898116112s of 10.177804947s, submitted: 96
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 279 ms_handle_reset con 0x55c08701c000 session 0x55c0862303c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 279 ms_handle_reset con 0x55c088477000 session 0x55c0879d2b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 279 heartbeat osd_stat(store_statfs(0x4d31ac000/0x0/0x4ffc00000, data 0x28b33b6b/0x28cc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6299353 data_alloc: 234881024 data_used: 15622144
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 141271040 unmapped: 34250752 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 280 ms_handle_reset con 0x55c087b11c00 session 0x55c0850972c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 280 ms_handle_reset con 0x55c08548b400 session 0x55c087a8c5a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 280 ms_handle_reset con 0x55c085548800 session 0x55c08544a1e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 132587520 unmapped: 42934272 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 42926080 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 281 ms_handle_reset con 0x55c08701c000 session 0x55c0876feb40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 281 handle_osd_map epochs [281,282], i have 281, src has [1,282]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 282 ms_handle_reset con 0x55c087171400 session 0x55c0879d2000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 133103616 unmapped: 42418176 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 282 ms_handle_reset con 0x55c087171800 session 0x55c085f13680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 282 handle_osd_map epochs [282,283], i have 282, src has [1,283]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 282 handle_osd_map epochs [283,283], i have 283, src has [1,283]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 133185536 unmapped: 42336256 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 283 ms_handle_reset con 0x55c088477000 session 0x55c0852f5e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 283 ms_handle_reset con 0x55c087b11c00 session 0x55c08544b2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 283 heartbeat osd_stat(store_statfs(0x4d2261000/0x0/0x4ffc00000, data 0x29a76b75/0x29c0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6436403 data_alloc: 234881024 data_used: 15630336
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 133201920 unmapped: 42319872 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 133201920 unmapped: 42319872 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 133201920 unmapped: 42319872 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 283 heartbeat osd_stat(store_statfs(0x4d2261000/0x0/0x4ffc00000, data 0x29a76b75/0x29c0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 133201920 unmapped: 42319872 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 133201920 unmapped: 42319872 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6436723 data_alloc: 234881024 data_used: 15638528
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 133201920 unmapped: 42319872 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 283 heartbeat osd_stat(store_statfs(0x4d2261000/0x0/0x4ffc00000, data 0x29a76b75/0x29c0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 133201920 unmapped: 42319872 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 133201920 unmapped: 42319872 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 133201920 unmapped: 42319872 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 283 heartbeat osd_stat(store_statfs(0x4d2261000/0x0/0x4ffc00000, data 0x29a76b75/0x29c0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.236049652s of 13.762421608s, submitted: 115
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 heartbeat osd_stat(store_statfs(0x4d225e000/0x0/0x4ffc00000, data 0x29a785d8/0x29c0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 133201920 unmapped: 42319872 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 ms_handle_reset con 0x55c08548b400 session 0x55c087a8c1e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 ms_handle_reset con 0x55c085548800 session 0x55c087704000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 ms_handle_reset con 0x55c08701c000 session 0x55c089119680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 ms_handle_reset con 0x55c08548b400 session 0x55c085dda1e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6442643 data_alloc: 234881024 data_used: 15638528
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 ms_handle_reset con 0x55c085548800 session 0x55c0879d3860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 133586944 unmapped: 41934848 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 ms_handle_reset con 0x55c08701c000 session 0x55c0877054a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 ms_handle_reset con 0x55c087b11c00 session 0x55c085ddb860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 ms_handle_reset con 0x55c088477000 session 0x55c085ddb680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 ms_handle_reset con 0x55c08548b400 session 0x55c0876d9680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 133603328 unmapped: 41918464 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 heartbeat osd_stat(store_statfs(0x4d2183000/0x0/0x4ffc00000, data 0x29b535e8/0x29ceb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 heartbeat osd_stat(store_statfs(0x4d2183000/0x0/0x4ffc00000, data 0x29b535e8/0x29ceb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 133603328 unmapped: 41918464 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 ms_handle_reset con 0x55c085548800 session 0x55c0876d92c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 heartbeat osd_stat(store_statfs(0x4d2183000/0x0/0x4ffc00000, data 0x29b535e8/0x29ceb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 133603328 unmapped: 41918464 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 ms_handle_reset con 0x55c08701c000 session 0x55c0876d81e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 ms_handle_reset con 0x55c087b11c00 session 0x55c08544b860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 heartbeat osd_stat(store_statfs(0x4d2182000/0x0/0x4ffc00000, data 0x29b5364a/0x29cec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 ms_handle_reset con 0x55c087171400 session 0x55c087133a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 ms_handle_reset con 0x55c08548b400 session 0x55c087133c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 133619712 unmapped: 41902080 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6460223 data_alloc: 234881024 data_used: 15642624
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 133627904 unmapped: 41893888 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 133627904 unmapped: 41893888 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 133881856 unmapped: 41639936 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 ms_handle_reset con 0x55c087171400 session 0x55c086230f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 ms_handle_reset con 0x55c087b11c00 session 0x55c0873bed20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 ms_handle_reset con 0x55c087171c00 session 0x55c087b44d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 ms_handle_reset con 0x55c087170c00 session 0x55c089118960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 ms_handle_reset con 0x55c08548b400 session 0x55c0879d3a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 134045696 unmapped: 41476096 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 heartbeat osd_stat(store_statfs(0x4d1fef000/0x0/0x4ffc00000, data 0x29ce36df/0x29e7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 134045696 unmapped: 41476096 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6482518 data_alloc: 234881024 data_used: 16457728
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 heartbeat osd_stat(store_statfs(0x4d1fef000/0x0/0x4ffc00000, data 0x29ce36df/0x29e7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 134045696 unmapped: 41476096 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 134045696 unmapped: 41476096 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 134045696 unmapped: 41476096 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.292927742s of 14.520388603s, submitted: 59
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 ms_handle_reset con 0x55c087171400 session 0x55c085f13860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 134062080 unmapped: 41459712 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 ms_handle_reset con 0x55c087170800 session 0x55c085ed4f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 ms_handle_reset con 0x55c087170400 session 0x55c0873f0b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 134144000 unmapped: 41377792 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6480595 data_alloc: 234881024 data_used: 16457728
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 134144000 unmapped: 41377792 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 ms_handle_reset con 0x55c0884ae800 session 0x55c087133c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 heartbeat osd_stat(store_statfs(0x4d1fef000/0x0/0x4ffc00000, data 0x29ce36df/0x29e7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 134144000 unmapped: 41377792 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 heartbeat osd_stat(store_statfs(0x4d1fef000/0x0/0x4ffc00000, data 0x29ce36df/0x29e7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 134152192 unmapped: 41369600 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 heartbeat osd_stat(store_statfs(0x4d1fef000/0x0/0x4ffc00000, data 0x29ce36df/0x29e7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 134217728 unmapped: 41304064 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 ms_handle_reset con 0x55c08548b400 session 0x55c087133a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 ms_handle_reset con 0x55c087170400 session 0x55c089119680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 137191424 unmapped: 38330368 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6671091 data_alloc: 234881024 data_used: 18653184
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 137461760 unmapped: 38060032 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 ms_handle_reset con 0x55c0884ae000 session 0x55c087252f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 137715712 unmapped: 37806080 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 139632640 unmapped: 35889152 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 285 ms_handle_reset con 0x55c0884aec00 session 0x55c0876d8d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.584631920s of 10.112170219s, submitted: 122
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 286 ms_handle_reset con 0x55c087789000 session 0x55c0879d3860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 286 ms_handle_reset con 0x55c087789000 session 0x55c0850961e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 286 ms_handle_reset con 0x55c087171400 session 0x55c0872523c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 37765120 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 286 heartbeat osd_stat(store_statfs(0x4d0ba1000/0x0/0x4ffc00000, data 0x2b5b7320/0x2b2cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 286 ms_handle_reset con 0x55c08548b400 session 0x55c085ddbe00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 37756928 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6711639 data_alloc: 234881024 data_used: 18743296
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 37732352 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 286 ms_handle_reset con 0x55c0884ae000 session 0x55c08490d2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 286 ms_handle_reset con 0x55c087170400 session 0x55c0871421e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 138100736 unmapped: 37421056 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 286 ms_handle_reset con 0x55c08548b400 session 0x55c0876ff4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 286 handle_osd_map epochs [286,287], i have 286, src has [1,287]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 287 ms_handle_reset con 0x55c087171400 session 0x55c0876d9680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 287 ms_handle_reset con 0x55c087789000 session 0x55c085c49860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 287 ms_handle_reset con 0x55c0884ae000 session 0x55c0876dcf00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 138117120 unmapped: 37404672 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 287 ms_handle_reset con 0x55c0884aec00 session 0x55c08543b4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 140369920 unmapped: 35151872 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 287 heartbeat osd_stat(store_statfs(0x4cf09d000/0x0/0x4ffc00000, data 0x2bf120c1/0x2bc2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 34562048 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6808240 data_alloc: 234881024 data_used: 19484672
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 34562048 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 34562048 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 34553856 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 34545664 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 287 heartbeat osd_stat(store_statfs(0x4cf087000/0x0/0x4ffc00000, data 0x2bf200c1/0x2bc39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 34545664 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6808240 data_alloc: 234881024 data_used: 19484672
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 34545664 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 34545664 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 34545664 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 34545664 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.437864304s of 15.845450401s, submitted: 99
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 34537472 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 287 heartbeat osd_stat(store_statfs(0x4cf087000/0x0/0x4ffc00000, data 0x2bf200c1/0x2bc39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6808692 data_alloc: 234881024 data_used: 19492864
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 34537472 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 140861440 unmapped: 34660352 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 140861440 unmapped: 34660352 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 287 ms_handle_reset con 0x55c085548800 session 0x55c0876fed20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 287 ms_handle_reset con 0x55c087170800 session 0x55c085f13680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 287 ms_handle_reset con 0x55c08701c000 session 0x55c087705e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 287 ms_handle_reset con 0x55c08548b400 session 0x55c0873c3860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 287 ms_handle_reset con 0x55c087171400 session 0x55c085c48d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 287 ms_handle_reset con 0x55c087171400 session 0x55c085ddaf00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 140034048 unmapped: 35487744 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 287 heartbeat osd_stat(store_statfs(0x4cee93000/0x0/0x4ffc00000, data 0x2c1212c1/0x2be3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 288 ms_handle_reset con 0x55c08548b400 session 0x55c0852f4780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 140058624 unmapped: 35463168 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ceb5b000/0x0/0x4ffc00000, data 0x2c047e1e/0x2bd61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6815105 data_alloc: 234881024 data_used: 19353600
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 139501568 unmapped: 36020224 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ceb5b000/0x0/0x4ffc00000, data 0x2c047e1e/0x2bd61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 288 ms_handle_reset con 0x55c087789000 session 0x55c0876d8960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 288 handle_osd_map epochs [288,289], i have 288, src has [1,289]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 289 heartbeat osd_stat(store_statfs(0x4ceb5b000/0x0/0x4ffc00000, data 0x2c047e1e/0x2bd61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [1])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 143106048 unmapped: 32415744 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 289 ms_handle_reset con 0x55c08545c000 session 0x55c0876d92c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 289 ms_handle_reset con 0x55c08774f000 session 0x55c0876d94a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 290 ms_handle_reset con 0x55c085458800 session 0x55c0873bcf00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 290 ms_handle_reset con 0x55c087788400 session 0x55c08544bc20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 290 ms_handle_reset con 0x55c087171400 session 0x55c0862303c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 290 ms_handle_reset con 0x55c0884ae000 session 0x55c0883814a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 290 ms_handle_reset con 0x55c08548b400 session 0x55c087142f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 290 ms_handle_reset con 0x55c08545c000 session 0x55c087133860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 142573568 unmapped: 32948224 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 290 ms_handle_reset con 0x55c085458800 session 0x55c0873c4000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 290 heartbeat osd_stat(store_statfs(0x4cd34a000/0x0/0x4ffc00000, data 0x2df3cd0d/0x2d573000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 290 ms_handle_reset con 0x55c087171400 session 0x55c089118f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 155893760 unmapped: 19628032 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.488698959s of 10.397934914s, submitted: 267
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 292 ms_handle_reset con 0x55c087788400 session 0x55c0876d9a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 292 ms_handle_reset con 0x55c0884ae000 session 0x55c0877054a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 292 ms_handle_reset con 0x55c085458800 session 0x55c0854b4d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 156041216 unmapped: 19480576 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 292 ms_handle_reset con 0x55c08545c000 session 0x55c087252000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7120832 data_alloc: 234881024 data_used: 27119616
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 292 ms_handle_reset con 0x55c087171400 session 0x55c0854b50e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 156090368 unmapped: 19431424 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 292 ms_handle_reset con 0x55c087788400 session 0x55c085452f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 19243008 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 292 heartbeat osd_stat(store_statfs(0x4cd334000/0x0/0x4ffc00000, data 0x2df4dfdc/0x2d586000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 292 ms_handle_reset con 0x55c0856fc800 session 0x55c0873c4960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 292 ms_handle_reset con 0x55c08774e800 session 0x55c0876d8960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 150175744 unmapped: 25346048 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 293 handle_osd_map epochs [293,293], i have 293, src has [1,293]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 293 ms_handle_reset con 0x55c087789000 session 0x55c085ddb0e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 150306816 unmapped: 25214976 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 150306816 unmapped: 25214976 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 293 ms_handle_reset con 0x55c085458800 session 0x55c0876ff860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6918924 data_alloc: 234881024 data_used: 27127808
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 150306816 unmapped: 25214976 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 293 heartbeat osd_stat(store_statfs(0x4ce0e4000/0x0/0x4ffc00000, data 0x2c4cfb25/0x2c1ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 25534464 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 293 ms_handle_reset con 0x55c0856fc800 session 0x55c0873c3860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 293 ms_handle_reset con 0x55c087171400 session 0x55c085f13680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 293 ms_handle_reset con 0x55c085458800 session 0x55c0876fed20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 294 ms_handle_reset con 0x55c0856fc800 session 0x55c0876dcf00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 294 ms_handle_reset con 0x55c08545c000 session 0x55c0876d8000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 294 ms_handle_reset con 0x55c087789000 session 0x55c0876d9680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 294 ms_handle_reset con 0x55c08774e800 session 0x55c08836d2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 294 ms_handle_reset con 0x55c08774e800 session 0x55c0873f1c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 294 ms_handle_reset con 0x55c085458800 session 0x55c0873f0f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 294 ms_handle_reset con 0x55c08545c000 session 0x55c0883812c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 152469504 unmapped: 23052288 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 295 ms_handle_reset con 0x55c0856fc800 session 0x55c085f132c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 152952832 unmapped: 22568960 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ce1d1000/0x0/0x4ffc00000, data 0x2cb1d213/0x2c6e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.555541992s of 10.217858315s, submitted: 215
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153190400 unmapped: 22331392 heap: 175521792 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7242812 data_alloc: 251658240 data_used: 28270592
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194551808 unmapped: 10354688 heap: 204906496 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 152682496 unmapped: 56426496 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 152961024 unmapped: 56147968 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 295 ms_handle_reset con 0x55c0888d3000 session 0x55c0856443c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 161898496 unmapped: 47210496 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 296 heartbeat osd_stat(store_statfs(0x4c4dd2000/0x0/0x4ffc00000, data 0x35f20c99/0x35aeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 158138368 unmapped: 50970624 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 296 ms_handle_reset con 0x55c085548800 session 0x55c08543a780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 296 ms_handle_reset con 0x55c08701c000 session 0x55c087132b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8457540 data_alloc: 251658240 data_used: 30093312
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 157663232 unmapped: 51445760 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 162914304 unmapped: 46194688 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 296 ms_handle_reset con 0x55c0856fc800 session 0x55c085c49860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 158867456 unmapped: 50241536 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 296 heartbeat osd_stat(store_statfs(0x4bfdd3000/0x0/0x4ffc00000, data 0x3af20c76/0x3aaea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 163299328 unmapped: 45809664 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 3.508838654s of 10.044199944s, submitted: 118
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 159408128 unmapped: 49700864 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 9201544 data_alloc: 251658240 data_used: 33968128
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 159703040 unmapped: 49405952 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 45023232 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 164241408 unmapped: 44867584 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 296 ms_handle_reset con 0x55c087788400 session 0x55c08490d2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 296 ms_handle_reset con 0x55c087789000 session 0x55c0862cd0e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 296 ms_handle_reset con 0x55c087789000 session 0x55c087143c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 160161792 unmapped: 48947200 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 296 heartbeat osd_stat(store_statfs(0x4b79d4000/0x0/0x4ffc00000, data 0x43320c14/0x42ee9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 297 ms_handle_reset con 0x55c0871dac00 session 0x55c0873f0f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 297 ms_handle_reset con 0x55c0888d2c00 session 0x55c08634da40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 297 ms_handle_reset con 0x55c08774e800 session 0x55c085096b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 297 ms_handle_reset con 0x55c085548800 session 0x55c088381e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 50216960 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 297 ms_handle_reset con 0x55c0856fc800 session 0x55c085ddb0e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 297 ms_handle_reset con 0x55c085548800 session 0x55c0891181e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 297 ms_handle_reset con 0x55c0871dac00 session 0x55c085f13860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6928801 data_alloc: 251658240 data_used: 31862784
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 49872896 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 297 ms_handle_reset con 0x55c08774e800 session 0x55c0876ff680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 298 ms_handle_reset con 0x55c087789000 session 0x55c0852f5e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 298 heartbeat osd_stat(store_statfs(0x4cfa18000/0x0/0x4ffc00000, data 0x2b183282/0x2aea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 49872896 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 298 ms_handle_reset con 0x55c085548800 session 0x55c085dda780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 162373632 unmapped: 46735360 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 298 heartbeat osd_stat(store_statfs(0x4cf915000/0x0/0x4ffc00000, data 0x2b285e0d/0x2afa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 298 ms_handle_reset con 0x55c0856fc800 session 0x55c08543a960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 162390016 unmapped: 46718976 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.574116230s of 10.074519157s, submitted: 256
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 162365440 unmapped: 46743552 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6931485 data_alloc: 251658240 data_used: 35778560
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 300 ms_handle_reset con 0x55c0871dac00 session 0x55c0873c5860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 162373632 unmapped: 46735360 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 300 ms_handle_reset con 0x55c08774e800 session 0x55c08506c3c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 162373632 unmapped: 46735360 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 300 heartbeat osd_stat(store_statfs(0x4d03d1000/0x0/0x4ffc00000, data 0x2a3414e1/0x2a4ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 302 ms_handle_reset con 0x55c0888d2c00 session 0x55c0879d3a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 162398208 unmapped: 46710784 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 302 ms_handle_reset con 0x55c0888d2c00 session 0x55c085c485a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 161095680 unmapped: 48013312 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 161013760 unmapped: 48095232 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4873850 data_alloc: 251658240 data_used: 35082240
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 303 ms_handle_reset con 0x55c085548800 session 0x55c0876fe960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 161021952 unmapped: 48087040 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 303 ms_handle_reset con 0x55c0856fc800 session 0x55c08506cd20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 160882688 unmapped: 48226304 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 303 heartbeat osd_stat(store_statfs(0x4f77c8000/0x0/0x4ffc00000, data 0x2f4692e/0x30f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 160882688 unmapped: 48226304 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 160882688 unmapped: 48226304 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 160882688 unmapped: 48226304 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2615495 data_alloc: 251658240 data_used: 35094528
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 160882688 unmapped: 48226304 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 160882688 unmapped: 48226304 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 304 heartbeat osd_stat(store_statfs(0x4f77c5000/0x0/0x4ffc00000, data 0x2f483ed/0x30f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.278168678s of 13.166848183s, submitted: 285
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 160923648 unmapped: 48185344 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 306 ms_handle_reset con 0x55c0871dac00 session 0x55c085bb12c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 161972224 unmapped: 47136768 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 161972224 unmapped: 47136768 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 306 ms_handle_reset con 0x55c08774e800 session 0x55c08544ad20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2627614 data_alloc: 251658240 data_used: 35110912
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 161529856 unmapped: 47579136 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 306 heartbeat osd_stat(store_statfs(0x4f77b8000/0x0/0x4ffc00000, data 0x2f50a53/0x3105000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 306 ms_handle_reset con 0x55c085548800 session 0x55c0854b4d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 306 heartbeat osd_stat(store_statfs(0x4f77b8000/0x0/0x4ffc00000, data 0x2f50a53/0x3105000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 306 ms_handle_reset con 0x55c0871dac00 session 0x55c0879d2960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 306 ms_handle_reset con 0x55c0856fc800 session 0x55c0876d9a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 306 ms_handle_reset con 0x55c0888d2c00 session 0x55c089118f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 161579008 unmapped: 47529984 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 306 handle_osd_map epochs [306,307], i have 306, src has [1,307]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 307 handle_osd_map epochs [307,307], i have 307, src has [1,307]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 307 ms_handle_reset con 0x55c08774e800 session 0x55c0876feb40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 161595392 unmapped: 47513600 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 307 ms_handle_reset con 0x55c085548800 session 0x55c088381c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 307 ms_handle_reset con 0x55c0856fc800 session 0x55c087a8cd20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 307 heartbeat osd_stat(store_statfs(0x4f77b4000/0x0/0x4ffc00000, data 0x2f52632/0x3109000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 161669120 unmapped: 47439872 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 307 ms_handle_reset con 0x55c0871dac00 session 0x55c0854534a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 161669120 unmapped: 47439872 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 307 handle_osd_map epochs [307,308], i have 307, src has [1,308]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 308 ms_handle_reset con 0x55c0888d2c00 session 0x55c0862303c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 308 ms_handle_reset con 0x55c08701c000 session 0x55c0891183c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2634976 data_alloc: 251658240 data_used: 35131392
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 47464448 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 308 ms_handle_reset con 0x55c0856fc800 session 0x55c0876d94a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 308 ms_handle_reset con 0x55c085548800 session 0x55c0873bd2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 47464448 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 47464448 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.628931046s of 10.665970802s, submitted: 99
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 47448064 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 309 ms_handle_reset con 0x55c0888d2c00 session 0x55c0876d92c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 309 heartbeat osd_stat(store_statfs(0x4f77b3000/0x0/0x4ffc00000, data 0x2f54371/0x310b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 163110912 unmapped: 45998080 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 309 ms_handle_reset con 0x55c0871dac00 session 0x55c085c48780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 309 ms_handle_reset con 0x55c087788400 session 0x55c0873c54a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2650330 data_alloc: 251658240 data_used: 37253120
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 309 ms_handle_reset con 0x55c085548800 session 0x55c0873c2960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 309 ms_handle_reset con 0x55c0856fc800 session 0x55c085dda960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 163192832 unmapped: 45916160 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 309 ms_handle_reset con 0x55c085458800 session 0x55c0879d32c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 309 ms_handle_reset con 0x55c08545c000 session 0x55c08543a5a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 163192832 unmapped: 45916160 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 309 ms_handle_reset con 0x55c0871dac00 session 0x55c087704b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 309 ms_handle_reset con 0x55c0871dac00 session 0x55c0871434a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 309 ms_handle_reset con 0x55c085548800 session 0x55c0873bc780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 310 ms_handle_reset con 0x55c0856fc800 session 0x55c0873bfe00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 310 ms_handle_reset con 0x55c08545c000 session 0x55c085f70d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 310 handle_osd_map epochs [310,311], i have 310, src has [1,311]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 311 ms_handle_reset con 0x55c0888d2c00 session 0x55c0854b4d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 311 ms_handle_reset con 0x55c085458800 session 0x55c085f703c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 163233792 unmapped: 45875200 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 311 heartbeat osd_stat(store_statfs(0x4f77f3000/0x0/0x4ffc00000, data 0x2e537cd/0x3012000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 311 ms_handle_reset con 0x55c08545c000 session 0x55c0876dc1e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 311 ms_handle_reset con 0x55c085548800 session 0x55c0854b41e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 163233792 unmapped: 45875200 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 311 ms_handle_reset con 0x55c0856fc800 session 0x55c0854b5680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 311 ms_handle_reset con 0x55c0871dac00 session 0x55c0854b4b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 311 ms_handle_reset con 0x55c0871dac00 session 0x55c0854532c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 311 ms_handle_reset con 0x55c085458800 session 0x55c085452b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 311 ms_handle_reset con 0x55c085548800 session 0x55c087253860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 311 ms_handle_reset con 0x55c0856fc800 session 0x55c085ed4b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 311 ms_handle_reset con 0x55c087287400 session 0x55c087253a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 163266560 unmapped: 45842432 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 311 heartbeat osd_stat(store_statfs(0x4f78ab000/0x0/0x4ffc00000, data 0x2e537dd/0x3013000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 312 ms_handle_reset con 0x55c087b0ec00 session 0x55c085ed5c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2654957 data_alloc: 251658240 data_used: 37158912
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 312 ms_handle_reset con 0x55c085458800 session 0x55c0854b8780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 312 ms_handle_reset con 0x55c085548800 session 0x55c087a8de00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 312 ms_handle_reset con 0x55c08545c000 session 0x55c085452960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 312 ms_handle_reset con 0x55c0856fc800 session 0x55c085dda000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 163323904 unmapped: 45785088 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 312 ms_handle_reset con 0x55c0856fc800 session 0x55c0854b45a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 163323904 unmapped: 45785088 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 312 handle_osd_map epochs [313,313], i have 313, src has [1,313]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 313 ms_handle_reset con 0x55c08545c000 session 0x55c087a8da40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 313 ms_handle_reset con 0x55c085458800 session 0x55c0852f4000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 313 ms_handle_reset con 0x55c087b0ec00 session 0x55c085f70d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 159588352 unmapped: 49520640 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 313 ms_handle_reset con 0x55c0871dac00 session 0x55c0873bc780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 313 ms_handle_reset con 0x55c085458800 session 0x55c0876d94a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.313092232s of 10.013728142s, submitted: 219
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 314 ms_handle_reset con 0x55c08545c000 session 0x55c0873c2960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 314 ms_handle_reset con 0x55c085548800 session 0x55c087a8cf00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 159612928 unmapped: 49496064 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 314 ms_handle_reset con 0x55c0856fc800 session 0x55c08543be00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 314 ms_handle_reset con 0x55c087b0ec00 session 0x55c0854523c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 314 ms_handle_reset con 0x55c087b0ec00 session 0x55c0879d30e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 49922048 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 314 heartbeat osd_stat(store_statfs(0x4f840c000/0x0/0x4ffc00000, data 0x22f1cc2/0x24b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 314 ms_handle_reset con 0x55c087171c00 session 0x55c0871425a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 314 ms_handle_reset con 0x55c087b11c00 session 0x55c0862cc780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2515570 data_alloc: 234881024 data_used: 27119616
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 314 ms_handle_reset con 0x55c085458800 session 0x55c0873c3860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 314 ms_handle_reset con 0x55c085548800 session 0x55c086230f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 152379392 unmapped: 56729600 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 314 ms_handle_reset con 0x55c08545c000 session 0x55c087a8c780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 314 ms_handle_reset con 0x55c085548800 session 0x55c0873f05a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 314 heartbeat osd_stat(store_statfs(0x4f897c000/0x0/0x4ffc00000, data 0x1971c60/0x1b31000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 152395776 unmapped: 56713216 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 315 ms_handle_reset con 0x55c085458800 session 0x55c0873f12c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 152412160 unmapped: 56696832 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 316 ms_handle_reset con 0x55c087b0ec00 session 0x55c0852f4780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 317 ms_handle_reset con 0x55c087171c00 session 0x55c0879d2960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151388160 unmapped: 57720832 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 317 ms_handle_reset con 0x55c0856fc800 session 0x55c085bb12c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 317 ms_handle_reset con 0x55c085458800 session 0x55c0879d3c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 317 ms_handle_reset con 0x55c087b11c00 session 0x55c0876d9860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151339008 unmapped: 57769984 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2394060 data_alloc: 234881024 data_used: 15069184
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 318 ms_handle_reset con 0x55c08545c000 session 0x55c087a8c960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151314432 unmapped: 57794560 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 318 heartbeat osd_stat(store_statfs(0x4f8b75000/0x0/0x4ffc00000, data 0x1777832/0x1938000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [0,1])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 318 ms_handle_reset con 0x55c087b0ec00 session 0x55c08836d860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 318 ms_handle_reset con 0x55c085548800 session 0x55c0876fe1e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151322624 unmapped: 57786368 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 318 ms_handle_reset con 0x55c085458800 session 0x55c087a8d2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 319 ms_handle_reset con 0x55c08545c000 session 0x55c087a8d4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151322624 unmapped: 57786368 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.865044594s of 10.577739716s, submitted: 206
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 319 ms_handle_reset con 0x55c0856fc800 session 0x55c087a8d0e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151339008 unmapped: 57769984 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 319 ms_handle_reset con 0x55c087b11c00 session 0x55c089118960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 152387584 unmapped: 56721408 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 319 ms_handle_reset con 0x55c085458800 session 0x55c0862303c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 319 heartbeat osd_stat(store_statfs(0x4f8b72000/0x0/0x4ffc00000, data 0x177936b/0x193c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 319 ms_handle_reset con 0x55c08545c000 session 0x55c085c48d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2404247 data_alloc: 234881024 data_used: 15081472
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 319 ms_handle_reset con 0x55c0856fc800 session 0x55c085c49a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 319 ms_handle_reset con 0x55c087b11c00 session 0x55c0879d3a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 152387584 unmapped: 56721408 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 319 handle_osd_map epochs [319,320], i have 319, src has [1,320]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 152395776 unmapped: 56713216 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 320 ms_handle_reset con 0x55c087287400 session 0x55c0871421e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 320 handle_osd_map epochs [320,321], i have 320, src has [1,321]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 321 ms_handle_reset con 0x55c085548800 session 0x55c085c48780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151519232 unmapped: 57589760 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 321 ms_handle_reset con 0x55c087287400 session 0x55c0871323c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151519232 unmapped: 57589760 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 321 ms_handle_reset con 0x55c08545c000 session 0x55c087133c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 321 ms_handle_reset con 0x55c087b11c00 session 0x55c0873c5860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 321 handle_osd_map epochs [322,322], i have 321, src has [1,322]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151527424 unmapped: 57581568 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 322 ms_handle_reset con 0x55c0856fc800 session 0x55c0873bc3c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 322 ms_handle_reset con 0x55c085458800 session 0x55c0862cc5a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 322 ms_handle_reset con 0x55c08545c000 session 0x55c08544ba40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2415530 data_alloc: 234881024 data_used: 15097856
Dec  2 06:38:42 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4097978202' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 322 ms_handle_reset con 0x55c085548800 session 0x55c0873be960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151126016 unmapped: 57982976 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 322 heartbeat osd_stat(store_statfs(0x4f8b69000/0x0/0x4ffc00000, data 0x177e4f8/0x1944000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 322 ms_handle_reset con 0x55c087287400 session 0x55c088381860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 322 ms_handle_reset con 0x55c087b11c00 session 0x55c0871290e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 322 ms_handle_reset con 0x55c085458800 session 0x55c0876d81e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151183360 unmapped: 57925632 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 322 ms_handle_reset con 0x55c08545c000 session 0x55c088380000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151191552 unmapped: 57917440 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 323 ms_handle_reset con 0x55c085548800 session 0x55c0873c5a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151199744 unmapped: 57909248 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151199744 unmapped: 57909248 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 323 heartbeat osd_stat(store_statfs(0x4f8b68000/0x0/0x4ffc00000, data 0x177ff05/0x1945000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2414669 data_alloc: 234881024 data_used: 15101952
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151199744 unmapped: 57909248 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.834465981s of 12.171443939s, submitted: 169
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 323 heartbeat osd_stat(store_statfs(0x4f8b68000/0x0/0x4ffc00000, data 0x177ff05/0x1945000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 323 handle_osd_map epochs [324,324], i have 324, src has [1,324]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151207936 unmapped: 57901056 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 324 ms_handle_reset con 0x55c087287400 session 0x55c0852f54a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 324 heartbeat osd_stat(store_statfs(0x4f8b65000/0x0/0x4ffc00000, data 0x1781a9e/0x1948000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151216128 unmapped: 57892864 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 325 ms_handle_reset con 0x55c087b0fc00 session 0x55c0852f5c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151216128 unmapped: 57892864 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 325 heartbeat osd_stat(store_statfs(0x4f8b60000/0x0/0x4ffc00000, data 0x1783511/0x194c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 325 ms_handle_reset con 0x55c08545c000 session 0x55c087a8d4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151224320 unmapped: 57884672 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2427207 data_alloc: 234881024 data_used: 15114240
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 325 heartbeat osd_stat(store_statfs(0x4f8b60000/0x0/0x4ffc00000, data 0x1783921/0x194e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 325 handle_osd_map epochs [325,326], i have 325, src has [1,326]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 326 ms_handle_reset con 0x55c085458800 session 0x55c085dda960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 326 ms_handle_reset con 0x55c085548800 session 0x55c0876fe1e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151232512 unmapped: 57876480 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151232512 unmapped: 57876480 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151232512 unmapped: 57876480 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 326 ms_handle_reset con 0x55c087287400 session 0x55c085ddb680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 326 ms_handle_reset con 0x55c087b0fc00 session 0x55c085ddb4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 326 ms_handle_reset con 0x55c085458800 session 0x55c0876d92c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151232512 unmapped: 57876480 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 326 handle_osd_map epochs [326,327], i have 326, src has [1,327]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 327 ms_handle_reset con 0x55c08545c000 session 0x55c0876d9680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151257088 unmapped: 57851904 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 327 ms_handle_reset con 0x55c085548800 session 0x55c0876d8960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 327 ms_handle_reset con 0x55c087287400 session 0x55c0876d9e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 327 ms_handle_reset con 0x55c087689c00 session 0x55c0876d8d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2435756 data_alloc: 234881024 data_used: 15126528
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 327 heartbeat osd_stat(store_statfs(0x4f8b58000/0x0/0x4ffc00000, data 0x178723f/0x1955000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151265280 unmapped: 57843712 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 327 ms_handle_reset con 0x55c08545c000 session 0x55c0879d2960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 327 ms_handle_reset con 0x55c085548800 session 0x55c0852f4780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151273472 unmapped: 57835520 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 327 heartbeat osd_stat(store_statfs(0x4f8b58000/0x0/0x4ffc00000, data 0x178723f/0x1955000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.724626541s of 11.135920525s, submitted: 39
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 328 ms_handle_reset con 0x55c085458800 session 0x55c0876d83c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 328 handle_osd_map epochs [328,329], i have 328, src has [1,329]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 329 ms_handle_reset con 0x55c087170800 session 0x55c08543a960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151592960 unmapped: 57516032 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 329 ms_handle_reset con 0x55c087171000 session 0x55c0876ff680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 330 ms_handle_reset con 0x55c08a1a2000 session 0x55c0873f12c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151625728 unmapped: 57483264 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 330 ms_handle_reset con 0x55c087287400 session 0x55c0852f4780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 330 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x17cc07a/0x199e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151625728 unmapped: 57483264 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2453443 data_alloc: 234881024 data_used: 15122432
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 330 ms_handle_reset con 0x55c085458800 session 0x55c0876d8d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 330 ms_handle_reset con 0x55c08545c000 session 0x55c0876d9680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151633920 unmapped: 57475072 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 330 ms_handle_reset con 0x55c085548800 session 0x55c085dda960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151633920 unmapped: 57475072 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 330 handle_osd_map epochs [331,331], i have 330, src has [1,331]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 331 ms_handle_reset con 0x55c085458800 session 0x55c0852f54a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151650304 unmapped: 57458688 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 331 ms_handle_reset con 0x55c08545c000 session 0x55c08544ba40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 331 ms_handle_reset con 0x55c087287400 session 0x55c0873c5860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 331 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x17cc07a/0x199e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151658496 unmapped: 57450496 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151666688 unmapped: 57442304 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 331 ms_handle_reset con 0x55c08a1a2000 session 0x55c085c48780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2457024 data_alloc: 234881024 data_used: 15069184
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 331 ms_handle_reset con 0x55c087170800 session 0x55c085ddbe00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151650304 unmapped: 57458688 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 331 handle_osd_map epochs [332,332], i have 331, src has [1,332]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 332 ms_handle_reset con 0x55c087170000 session 0x55c0854b45a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151666688 unmapped: 57442304 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.795328140s of 10.495843887s, submitted: 153
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 332 ms_handle_reset con 0x55c08545c000 session 0x55c0876dd2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 332 handle_osd_map epochs [333,333], i have 332, src has [1,333]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151683072 unmapped: 57425920 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 333 ms_handle_reset con 0x55c087170800 session 0x55c0876dcd20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 333 handle_osd_map epochs [334,334], i have 333, src has [1,334]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 334 ms_handle_reset con 0x55c087287400 session 0x55c085c49860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151715840 unmapped: 57393152 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 334 ms_handle_reset con 0x55c08a1a2000 session 0x55c0854532c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 334 ms_handle_reset con 0x55c085458800 session 0x55c0862cc3c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 334 ms_handle_reset con 0x55c08a1a2000 session 0x55c0873bde00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 334 heartbeat osd_stat(store_statfs(0x4f8b01000/0x0/0x4ffc00000, data 0x17d13bb/0x19ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151715840 unmapped: 57393152 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 334 ms_handle_reset con 0x55c087170800 session 0x55c0879a8d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 334 ms_handle_reset con 0x55c087170000 session 0x55c085453680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 334 ms_handle_reset con 0x55c08545c000 session 0x55c0873bfe00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 334 ms_handle_reset con 0x55c085458800 session 0x55c08836de00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2480161 data_alloc: 234881024 data_used: 15106048
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151740416 unmapped: 57368576 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 334 ms_handle_reset con 0x55c087287400 session 0x55c087142b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 151740416 unmapped: 57368576 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 334 handle_osd_map epochs [335,335], i have 334, src has [1,335]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 335 ms_handle_reset con 0x55c08a3dbc00 session 0x55c086ee83c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 335 ms_handle_reset con 0x55c08a3dac00 session 0x55c088380780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 152788992 unmapped: 56320000 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 335 handle_osd_map epochs [335,336], i have 335, src has [1,336]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 336 ms_handle_reset con 0x55c08a3da800 session 0x55c086ee81e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 336 ms_handle_reset con 0x55c08a3da800 session 0x55c086ee90e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 336 ms_handle_reset con 0x55c08a1a2000 session 0x55c0879a9e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 336 heartbeat osd_stat(store_statfs(0x4f8af5000/0x0/0x4ffc00000, data 0x17d6ba9/0x19b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 336 ms_handle_reset con 0x55c08a3da000 session 0x55c0891181e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 152788992 unmapped: 56320000 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 336 ms_handle_reset con 0x55c087287400 session 0x55c085ddab40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 152797184 unmapped: 56311808 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 336 ms_handle_reset con 0x55c08a3dac00 session 0x55c0873c4000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 336 ms_handle_reset con 0x55c087170000 session 0x55c089118780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 336 ms_handle_reset con 0x55c087170800 session 0x55c085ed4d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2491977 data_alloc: 234881024 data_used: 15376384
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 336 ms_handle_reset con 0x55c08a3dbc00 session 0x55c086ee85a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 337 ms_handle_reset con 0x55c085458800 session 0x55c0879a81e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 337 ms_handle_reset con 0x55c08a1a2000 session 0x55c0879a9c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 152821760 unmapped: 56287232 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 337 ms_handle_reset con 0x55c087287400 session 0x55c087143c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 337 ms_handle_reset con 0x55c08a1a2000 session 0x55c0876dd680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 337 handle_osd_map epochs [338,338], i have 337, src has [1,338]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 338 ms_handle_reset con 0x55c085458800 session 0x55c0876d9860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 152846336 unmapped: 56262656 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.111531258s of 10.290333748s, submitted: 106
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 338 ms_handle_reset con 0x55c08a3dbc00 session 0x55c087b45680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 152862720 unmapped: 56246272 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 338 ms_handle_reset con 0x55c087170800 session 0x55c0879a9a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 338 ms_handle_reset con 0x55c085458800 session 0x55c0873bde00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 338 ms_handle_reset con 0x55c087287400 session 0x55c087b45a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 338 handle_osd_map epochs [338,339], i have 338, src has [1,339]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 152895488 unmapped: 56213504 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 339 ms_handle_reset con 0x55c08a3dbc00 session 0x55c087b44000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 339 heartbeat osd_stat(store_statfs(0x4f8b2d000/0x0/0x4ffc00000, data 0x179beea/0x1980000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 152903680 unmapped: 56205312 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 339 handle_osd_map epochs [340,340], i have 339, src has [1,340]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 340 ms_handle_reset con 0x55c08a3da000 session 0x55c085dda1e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 340 ms_handle_reset con 0x55c08a1a2000 session 0x55c0876dd2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 340 ms_handle_reset con 0x55c08a3da800 session 0x55c0862ce5a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2504774 data_alloc: 234881024 data_used: 15138816
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 340 ms_handle_reset con 0x55c087170000 session 0x55c088380b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 152936448 unmapped: 56172544 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 152936448 unmapped: 56172544 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 340 heartbeat osd_stat(store_statfs(0x4f8b28000/0x0/0x4ffc00000, data 0x179e15a/0x1984000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 340 ms_handle_reset con 0x55c087287400 session 0x55c0850972c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 340 ms_handle_reset con 0x55c08a3dbc00 session 0x55c08544b2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 340 handle_osd_map epochs [341,341], i have 340, src has [1,341]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 341 ms_handle_reset con 0x55c08a3da000 session 0x55c0862cf860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 341 handle_osd_map epochs [341,342], i have 341, src has [1,342]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 342 ms_handle_reset con 0x55c087b03800 session 0x55c0879a8b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153804800 unmapped: 55304192 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 342 ms_handle_reset con 0x55c085458800 session 0x55c0883803c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 342 ms_handle_reset con 0x55c087789c00 session 0x55c0854b5860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 342 ms_handle_reset con 0x55c08701e400 session 0x55c085bb0b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153804800 unmapped: 55304192 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 342 ms_handle_reset con 0x55c0888d3000 session 0x55c0876ff680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 342 ms_handle_reset con 0x55c085458800 session 0x55c0852f4d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153804800 unmapped: 55304192 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2515968 data_alloc: 234881024 data_used: 15867904
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153477120 unmapped: 55631872 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 342 ms_handle_reset con 0x55c08701e400 session 0x55c087b45e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 342 ms_handle_reset con 0x55c0888d3000 session 0x55c0862cf4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153501696 unmapped: 55607296 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 342 ms_handle_reset con 0x55c087789c00 session 0x55c087b44960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.726232529s of 10.054684639s, submitted: 149
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 342 handle_osd_map epochs [342,343], i have 342, src has [1,343]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 342 handle_osd_map epochs [343,343], i have 343, src has [1,343]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 342 handle_osd_map epochs [343,344], i have 343, src has [1,344]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 343 heartbeat osd_stat(store_statfs(0x4f8b27000/0x0/0x4ffc00000, data 0x17a14e7/0x1987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153657344 unmapped: 55451648 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 344 ms_handle_reset con 0x55c08558fc00 session 0x55c0873c30e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 344 ms_handle_reset con 0x55c085458800 session 0x55c085ed43c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 344 heartbeat osd_stat(store_statfs(0x4f8582000/0x0/0x4ffc00000, data 0x1d41b8b/0x1f2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153673728 unmapped: 55435264 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 345 ms_handle_reset con 0x55c08701e400 session 0x55c0862cfc20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 345 ms_handle_reset con 0x55c087b03800 session 0x55c0879a8960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 345 handle_osd_map epochs [345,346], i have 345, src has [1,346]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153681920 unmapped: 55427072 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 346 ms_handle_reset con 0x55c087789c00 session 0x55c085ddb0e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 346 ms_handle_reset con 0x55c0888d3000 session 0x55c0876d92c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 346 ms_handle_reset con 0x55c087688400 session 0x55c085452f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2573149 data_alloc: 234881024 data_used: 15876096
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153690112 unmapped: 55418880 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 346 ms_handle_reset con 0x55c0888d3000 session 0x55c0873bcf00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 346 heartbeat osd_stat(store_statfs(0x4f857f000/0x0/0x4ffc00000, data 0x1d45277/0x1f2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 346 handle_osd_map epochs [346,347], i have 346, src has [1,347]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153690112 unmapped: 55418880 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 347 ms_handle_reset con 0x55c085458800 session 0x55c0879d32c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153690112 unmapped: 55418880 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 347 ms_handle_reset con 0x55c087789c00 session 0x55c085c49860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153690112 unmapped: 55418880 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 347 handle_osd_map epochs [348,348], i have 347, src has [1,348]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 348 ms_handle_reset con 0x55c087b03800 session 0x55c088380780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153690112 unmapped: 55418880 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 348 handle_osd_map epochs [349,349], i have 348, src has [1,349]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 349 ms_handle_reset con 0x55c087ebe400 session 0x55c0876d9c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 349 ms_handle_reset con 0x55c087b03800 session 0x55c0876dd860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 349 ms_handle_reset con 0x55c08701e400 session 0x55c0879d30e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2589514 data_alloc: 234881024 data_used: 15900672
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153706496 unmapped: 55402496 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 349 ms_handle_reset con 0x55c085458800 session 0x55c085644d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 349 heartbeat osd_stat(store_statfs(0x4f8572000/0x0/0x4ffc00000, data 0x1d4a60c/0x1f3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153706496 unmapped: 55402496 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 349 ms_handle_reset con 0x55c087688400 session 0x55c0873f1a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 350 ms_handle_reset con 0x55c087688400 session 0x55c0879a85a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153714688 unmapped: 55394304 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 350 handle_osd_map epochs [351,351], i have 350, src has [1,351]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.811887741s of 10.432035446s, submitted: 128
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 351 ms_handle_reset con 0x55c085458800 session 0x55c087252f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153722880 unmapped: 55386112 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153722880 unmapped: 55386112 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 351 heartbeat osd_stat(store_statfs(0x4f856d000/0x0/0x4ffc00000, data 0x1d4dc7c/0x1f3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 351 ms_handle_reset con 0x55c087ebe400 session 0x55c08544b860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 351 ms_handle_reset con 0x55c087b03800 session 0x55c085dda000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2596332 data_alloc: 234881024 data_used: 15904768
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 351 ms_handle_reset con 0x55c087789c00 session 0x55c08506c5a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153731072 unmapped: 55377920 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 351 handle_osd_map epochs [352,352], i have 351, src has [1,352]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 352 ms_handle_reset con 0x55c0888d3000 session 0x55c085096960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 352 ms_handle_reset con 0x55c085458800 session 0x55c087b44b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 352 ms_handle_reset con 0x55c087688400 session 0x55c08544a780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 352 ms_handle_reset con 0x55c087b03800 session 0x55c085ddbe00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 352 ms_handle_reset con 0x55c08701e400 session 0x55c0854b4780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 154050560 unmapped: 55058432 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 352 heartbeat osd_stat(store_statfs(0x4f8546000/0x0/0x4ffc00000, data 0x1d73848/0x1f67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 352 ms_handle_reset con 0x55c0888d3000 session 0x55c085dda1e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 352 handle_osd_map epochs [352,353], i have 352, src has [1,353]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 353 ms_handle_reset con 0x55c087b03800 session 0x55c088381860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 154271744 unmapped: 54837248 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 353 ms_handle_reset con 0x55c087ebe400 session 0x55c0876fe3c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 353 handle_osd_map epochs [353,354], i have 353, src has [1,354]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 52830208 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 354 ms_handle_reset con 0x55c085458800 session 0x55c0876dd2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 354 ms_handle_reset con 0x55c087688400 session 0x55c085ddb4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 354 ms_handle_reset con 0x55c08949c000 session 0x55c087a8cb40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 354 ms_handle_reset con 0x55c085458800 session 0x55c085f710e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 354 ms_handle_reset con 0x55c087688400 session 0x55c087b45a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 156368896 unmapped: 52740096 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2644368 data_alloc: 234881024 data_used: 20672512
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 156368896 unmapped: 52740096 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 355 ms_handle_reset con 0x55c0888d3000 session 0x55c087b45680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 355 ms_handle_reset con 0x55c087b03800 session 0x55c0879d25a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 355 handle_osd_map epochs [356,356], i have 355, src has [1,356]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 356 ms_handle_reset con 0x55c08af21800 session 0x55c08836d2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 156401664 unmapped: 52707328 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 356 ms_handle_reset con 0x55c087ebe400 session 0x55c0879d21e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 356 ms_handle_reset con 0x55c085458800 session 0x55c087b45680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 356 heartbeat osd_stat(store_statfs(0x4f855d000/0x0/0x4ffc00000, data 0x1d569b1/0x1f4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 356 handle_osd_map epochs [356,357], i have 356, src has [1,357]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 156467200 unmapped: 52641792 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 357 ms_handle_reset con 0x55c087688400 session 0x55c0876dc1e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 357 ms_handle_reset con 0x55c087b03800 session 0x55c0862cfc20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.102361679s of 10.365777016s, submitted: 207
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 357 ms_handle_reset con 0x55c087688400 session 0x55c0891194a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 156467200 unmapped: 52641792 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 358 ms_handle_reset con 0x55c087b03800 session 0x55c0873bf860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 358 handle_osd_map epochs [359,359], i have 358, src has [1,359]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 156491776 unmapped: 52617216 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 359 ms_handle_reset con 0x55c085458800 session 0x55c0852f4780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 359 ms_handle_reset con 0x55c08af21800 session 0x55c085bb10e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 359 ms_handle_reset con 0x55c08949c000 session 0x55c0873c30e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 359 ms_handle_reset con 0x55c087eb8000 session 0x55c0879a9e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2580311 data_alloc: 234881024 data_used: 15937536
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 359 handle_osd_map epochs [359,360], i have 359, src has [1,360]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 360 ms_handle_reset con 0x55c0888d3000 session 0x55c085f710e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 155181056 unmapped: 53927936 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 360 ms_handle_reset con 0x55c085458800 session 0x55c0871323c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 360 ms_handle_reset con 0x55c087ebe400 session 0x55c0862303c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 360 heartbeat osd_stat(store_statfs(0x4f8af2000/0x0/0x4ffc00000, data 0x17c09b3/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 155181056 unmapped: 53927936 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 360 handle_osd_map epochs [360,361], i have 360, src has [1,361]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 361 heartbeat osd_stat(store_statfs(0x4f8af2000/0x0/0x4ffc00000, data 0x17c09b3/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 155181056 unmapped: 53927936 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 361 ms_handle_reset con 0x55c087688400 session 0x55c0883810e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 361 ms_handle_reset con 0x55c085458800 session 0x55c086ee94a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 361 ms_handle_reset con 0x55c087eb8000 session 0x55c08544a960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 361 ms_handle_reset con 0x55c087ebe400 session 0x55c087142b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153100288 unmapped: 56008704 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 361 ms_handle_reset con 0x55c0888d3000 session 0x55c0873bfa40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 361 handle_osd_map epochs [362,362], i have 361, src has [1,362]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153124864 unmapped: 55984128 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 362 ms_handle_reset con 0x55c08af21800 session 0x55c08836c780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 362 ms_handle_reset con 0x55c085458800 session 0x55c088380d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 362 ms_handle_reset con 0x55c087eb8000 session 0x55c0854b4b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2590299 data_alloc: 234881024 data_used: 15958016
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 363 ms_handle_reset con 0x55c087b03800 session 0x55c085ddad20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153124864 unmapped: 55984128 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153124864 unmapped: 55984128 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 363 ms_handle_reset con 0x55c087ec0800 session 0x55c087a8cb40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 363 handle_osd_map epochs [363,364], i have 363, src has [1,364]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 364 ms_handle_reset con 0x55c087b0b800 session 0x55c0854b9a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153067520 unmapped: 56041472 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 364 ms_handle_reset con 0x55c087ebe400 session 0x55c0873bc960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 364 heartbeat osd_stat(store_statfs(0x4f8ae4000/0x0/0x4ffc00000, data 0x17c7770/0x19c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [0,0,0,3,2])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 364 ms_handle_reset con 0x55c085458800 session 0x55c0883805a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 364 ms_handle_reset con 0x55c089fac800 session 0x55c0876dd2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 364 handle_osd_map epochs [365,365], i have 364, src has [1,365]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.770161629s of 10.409769058s, submitted: 158
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 365 ms_handle_reset con 0x55c087b03800 session 0x55c085ed43c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 55574528 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 365 ms_handle_reset con 0x55c0888d3000 session 0x55c08543b2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 365 heartbeat osd_stat(store_statfs(0x4f7d31000/0x0/0x4ffc00000, data 0x2578325/0x277a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 55574528 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2708984 data_alloc: 234881024 data_used: 15970304
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 365 ms_handle_reset con 0x55c085458800 session 0x55c0879a9860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 365 handle_osd_map epochs [366,366], i have 365, src has [1,366]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153575424 unmapped: 55533568 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 366 ms_handle_reset con 0x55c0888d3000 session 0x55c085ddb680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 366 heartbeat osd_stat(store_statfs(0x4f7d34000/0x0/0x4ffc00000, data 0x2578325/0x277a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153583616 unmapped: 55525376 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 366 handle_osd_map epochs [366,367], i have 366, src has [1,367]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 367 ms_handle_reset con 0x55c087b03800 session 0x55c085ddb680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153608192 unmapped: 55500800 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 367 handle_osd_map epochs [367,368], i have 367, src has [1,368]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 368 handle_osd_map epochs [368,368], i have 368, src has [1,368]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 368 ms_handle_reset con 0x55c089fac800 session 0x55c08544a960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 368 ms_handle_reset con 0x55c087ebe400 session 0x55c0879a8f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 368 ms_handle_reset con 0x55c085458800 session 0x55c085f71a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153632768 unmapped: 55476224 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 368 ms_handle_reset con 0x55c087eb8000 session 0x55c0873c30e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153632768 unmapped: 55476224 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 368 ms_handle_reset con 0x55c087b03800 session 0x55c0879d25a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718245 data_alloc: 234881024 data_used: 15986688
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153632768 unmapped: 55476224 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 368 heartbeat osd_stat(store_statfs(0x4f7d2b000/0x0/0x4ffc00000, data 0x257dd21/0x2782000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 153632768 unmapped: 55476224 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 368 handle_osd_map epochs [368,369], i have 368, src has [1,369]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 369 handle_osd_map epochs [369,370], i have 369, src has [1,370]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 154697728 unmapped: 54411264 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 154697728 unmapped: 54411264 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.519774437s of 10.709050179s, submitted: 98
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 370 heartbeat osd_stat(store_statfs(0x4f6b84000/0x0/0x4ffc00000, data 0x2581405/0x2788000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 155746304 unmapped: 53362688 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 370 ms_handle_reset con 0x55c0888d3000 session 0x55c0876feb40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2726517 data_alloc: 234881024 data_used: 15986688
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 155746304 unmapped: 53362688 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 370 ms_handle_reset con 0x55c089fac800 session 0x55c089118780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 370 ms_handle_reset con 0x55c087b03800 session 0x55c0854525a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 155746304 unmapped: 53362688 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 370 handle_osd_map epochs [370,371], i have 370, src has [1,371]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 155746304 unmapped: 53362688 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 372 ms_handle_reset con 0x55c0888d3000 session 0x55c0876fe3c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 372 heartbeat osd_stat(store_statfs(0x4f6b80000/0x0/0x4ffc00000, data 0x2583010/0x278d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 158597120 unmapped: 50511872 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 372 handle_osd_map epochs [372,373], i have 372, src has [1,373]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 373 ms_handle_reset con 0x55c087ec0800 session 0x55c0879a8b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 373 ms_handle_reset con 0x55c087b04c00 session 0x55c086ee90e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 373 ms_handle_reset con 0x55c085458800 session 0x55c0862ce1e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 164995072 unmapped: 44113920 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2841975 data_alloc: 251658240 data_used: 30261248
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 44081152 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 373 handle_osd_map epochs [373,374], i have 373, src has [1,374]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 374 ms_handle_reset con 0x55c085458800 session 0x55c0876dde00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 374 ms_handle_reset con 0x55c087b03800 session 0x55c086ee9e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 44081152 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 374 ms_handle_reset con 0x55c087b04c00 session 0x55c0873c4960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 374 heartbeat osd_stat(store_statfs(0x4f6b78000/0x0/0x4ffc00000, data 0x258868b/0x2795000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 165076992 unmapped: 44032000 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 165076992 unmapped: 44032000 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 375 ms_handle_reset con 0x55c0888d3000 session 0x55c0876ffc20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 375 ms_handle_reset con 0x55c087ec0800 session 0x55c0879d2b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 375 ms_handle_reset con 0x55c085458800 session 0x55c0883803c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 165076992 unmapped: 44032000 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 375 handle_osd_map epochs [376,376], i have 375, src has [1,376]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.180048943s of 10.711064339s, submitted: 71
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 376 ms_handle_reset con 0x55c087b03800 session 0x55c0854b45a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 376 ms_handle_reset con 0x55c087b04c00 session 0x55c085644d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 376 ms_handle_reset con 0x55c087688000 session 0x55c0876dd680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2848155 data_alloc: 251658240 data_used: 30257152
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 376 heartbeat osd_stat(store_statfs(0x4f6764000/0x0/0x4ffc00000, data 0x258a10a/0x2798000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 165109760 unmapped: 43999232 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 376 ms_handle_reset con 0x55c0888d3000 session 0x55c088380780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 376 ms_handle_reset con 0x55c085458800 session 0x55c085bb0b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 165117952 unmapped: 43991040 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 165117952 unmapped: 43991040 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 376 handle_osd_map epochs [378,378], i have 376, src has [1,378]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 376 handle_osd_map epochs [377,378], i have 376, src has [1,378]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 378 ms_handle_reset con 0x55c087688000 session 0x55c0879d3680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 165158912 unmapped: 43950080 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 378 handle_osd_map epochs [379,379], i have 378, src has [1,379]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 379 heartbeat osd_stat(store_statfs(0x4f675c000/0x0/0x4ffc00000, data 0x258f303/0x27a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 379 ms_handle_reset con 0x55c087b04c00 session 0x55c087142f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 379 ms_handle_reset con 0x55c087b03800 session 0x55c085ed5680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 379 ms_handle_reset con 0x55c087afdc00 session 0x55c085dda000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 379 ms_handle_reset con 0x55c087ebfc00 session 0x55c08506cf00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 175718400 unmapped: 33390592 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2936583 data_alloc: 251658240 data_used: 31870976
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 379 heartbeat osd_stat(store_statfs(0x4f5e66000/0x0/0x4ffc00000, data 0x2e7e9fc/0x3091000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 175849472 unmapped: 33259520 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 177741824 unmapped: 31367168 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 379 handle_osd_map epochs [379,380], i have 379, src has [1,380]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 379 handle_osd_map epochs [380,380], i have 380, src has [1,380]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 380 ms_handle_reset con 0x55c085458800 session 0x55c08506c5a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 380 ms_handle_reset con 0x55c087688000 session 0x55c0883805a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 176193536 unmapped: 32915456 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 380 handle_osd_map epochs [381,381], i have 380, src has [1,381]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 381 ms_handle_reset con 0x55c087b03800 session 0x55c085ddb2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 381 ms_handle_reset con 0x55c087afdc00 session 0x55c085ed43c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 176218112 unmapped: 32890880 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f5e28000/0x0/0x4ffc00000, data 0x2ebfc9f/0x30d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 381 ms_handle_reset con 0x55c087688000 session 0x55c0879a8960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 176218112 unmapped: 32890880 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.829242706s of 10.097941399s, submitted: 241
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 381 handle_osd_map epochs [381,382], i have 381, src has [1,382]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2948973 data_alloc: 251658240 data_used: 32063488
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 382 ms_handle_reset con 0x55c087b03800 session 0x55c087252f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 176226304 unmapped: 32882688 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 382 handle_osd_map epochs [382,383], i have 382, src has [1,383]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 383 ms_handle_reset con 0x55c085458800 session 0x55c0879d2000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 383 ms_handle_reset con 0x55c087ebfc00 session 0x55c085452960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 177274880 unmapped: 31834112 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 177274880 unmapped: 31834112 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 383 heartbeat osd_stat(store_statfs(0x4f5e21000/0x0/0x4ffc00000, data 0x2ec6457/0x30db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 177274880 unmapped: 31834112 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 177274880 unmapped: 31834112 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2952435 data_alloc: 251658240 data_used: 32059392
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 383 heartbeat osd_stat(store_statfs(0x4f5e21000/0x0/0x4ffc00000, data 0x2ec6457/0x30db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 177274880 unmapped: 31834112 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 177274880 unmapped: 31834112 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 384 ms_handle_reset con 0x55c087b04c00 session 0x55c0879a94a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 384 handle_osd_map epochs [384,385], i have 384, src has [1,385]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 177258496 unmapped: 31850496 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f5e1f000/0x0/0x4ffc00000, data 0x2ec7f4a/0x30de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 177258496 unmapped: 31850496 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 31842304 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 385 ms_handle_reset con 0x55c087688000 session 0x55c08543a5a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.803730011s of 10.196865082s, submitted: 92
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2968114 data_alloc: 251658240 data_used: 32079872
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 386 ms_handle_reset con 0x55c087b03800 session 0x55c087a8da40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 177274880 unmapped: 31834112 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 386 handle_osd_map epochs [387,387], i have 387, src has [1,387]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 387 ms_handle_reset con 0x55c087b04c00 session 0x55c085ddab40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 387 ms_handle_reset con 0x55c085458800 session 0x55c087a8d4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 177274880 unmapped: 31834112 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 387 heartbeat osd_stat(store_statfs(0x4f5e11000/0x0/0x4ffc00000, data 0x2ecd203/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 177307648 unmapped: 31801344 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 388 ms_handle_reset con 0x55c087ebfc00 session 0x55c0873c3e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 388 heartbeat osd_stat(store_statfs(0x4f5e11000/0x0/0x4ffc00000, data 0x2eced62/0x30ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 177340416 unmapped: 31768576 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 389 ms_handle_reset con 0x55c085458800 session 0x55c0852f4d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 389 ms_handle_reset con 0x55c087688000 session 0x55c0854b9680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 177389568 unmapped: 31719424 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2974449 data_alloc: 251658240 data_used: 32079872
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f5e0d000/0x0/0x4ffc00000, data 0x2ed093f/0x30ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 177422336 unmapped: 31686656 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 177422336 unmapped: 31686656 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f5e0f000/0x0/0x4ffc00000, data 0x2ed08dd/0x30ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 177422336 unmapped: 31686656 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 177422336 unmapped: 31686656 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 389 handle_osd_map epochs [389,390], i have 389, src has [1,390]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 177446912 unmapped: 31662080 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2978887 data_alloc: 251658240 data_used: 32088064
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 390 ms_handle_reset con 0x55c087b03800 session 0x55c0876dc3c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 177446912 unmapped: 31662080 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 177446912 unmapped: 31662080 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.615262985s of 11.844739914s, submitted: 67
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 177455104 unmapped: 31653888 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 391 heartbeat osd_stat(store_statfs(0x4f5e08000/0x0/0x4ffc00000, data 0x2ed40a9/0x30f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178503680 unmapped: 30605312 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178503680 unmapped: 30605312 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2985331 data_alloc: 251658240 data_used: 32088064
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178503680 unmapped: 30605312 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f5e05000/0x0/0x4ffc00000, data 0x2ed5b44/0x30f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178536448 unmapped: 30572544 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178552832 unmapped: 30556160 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178552832 unmapped: 30556160 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178552832 unmapped: 30556160 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f5e02000/0x0/0x4ffc00000, data 0x3410b44/0x30fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3029859 data_alloc: 251658240 data_used: 32092160
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 ms_handle_reset con 0x55c089faf400 session 0x55c0876fed20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 ms_handle_reset con 0x55c087b04c00 session 0x55c08836c3c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f5e02000/0x0/0x4ffc00000, data 0x3410b44/0x30fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178552832 unmapped: 30556160 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178585600 unmapped: 30523392 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178585600 unmapped: 30523392 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178585600 unmapped: 30523392 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178585600 unmapped: 30523392 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.707919121s of 12.835602760s, submitted: 30
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 ms_handle_reset con 0x55c085458800 session 0x55c0876ff860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3033315 data_alloc: 251658240 data_used: 33144832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 ms_handle_reset con 0x55c087688000 session 0x55c086ee8960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f5e02000/0x0/0x4ffc00000, data 0x3410b44/0x30fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 179200000 unmapped: 29908992 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 ms_handle_reset con 0x55c087b03800 session 0x55c0873bde00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 ms_handle_reset con 0x55c089faf400 session 0x55c087a8c5a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 179200000 unmapped: 29908992 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 ms_handle_reset con 0x55c087ec0400 session 0x55c0883814a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 ms_handle_reset con 0x55c085458800 session 0x55c0879d2960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178315264 unmapped: 30793728 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178315264 unmapped: 30793728 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f5dd8000/0x0/0x4ffc00000, data 0x343ab44/0x3126000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178380800 unmapped: 30728192 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3038109 data_alloc: 251658240 data_used: 33292288
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178438144 unmapped: 30670848 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178438144 unmapped: 30670848 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178454528 unmapped: 30654464 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f5dbe000/0x0/0x4ffc00000, data 0x3481b44/0x3140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178487296 unmapped: 30621696 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178487296 unmapped: 30621696 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3056889 data_alloc: 251658240 data_used: 33292288
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178487296 unmapped: 30621696 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f5dbe000/0x0/0x4ffc00000, data 0x3481b44/0x3140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178520064 unmapped: 30588928 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f5dbe000/0x0/0x4ffc00000, data 0x3481b44/0x3140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178552832 unmapped: 30556160 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178552832 unmapped: 30556160 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f5dbe000/0x0/0x4ffc00000, data 0x3481b44/0x3140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f5dbe000/0x0/0x4ffc00000, data 0x3481b44/0x3140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178552832 unmapped: 30556160 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3056889 data_alloc: 251658240 data_used: 33292288
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.743170738s of 15.787551880s, submitted: 21
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 181821440 unmapped: 27287552 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 181977088 unmapped: 27131904 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182108160 unmapped: 27000832 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182108160 unmapped: 27000832 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f5ab3000/0x0/0x4ffc00000, data 0x378cb44/0x344b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182108160 unmapped: 27000832 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3106836 data_alloc: 251658240 data_used: 36405248
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182108160 unmapped: 27000832 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182108160 unmapped: 27000832 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 ms_handle_reset con 0x55c089faf400 session 0x55c085ed5680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 30146560 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 30146560 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f30ba000/0x0/0x4ffc00000, data 0x6185b44/0x5e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 30146560 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3393548 data_alloc: 251658240 data_used: 36405248
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 30146560 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 30146560 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 30146560 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 30146560 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f30ba000/0x0/0x4ffc00000, data 0x6185b44/0x5e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 30146560 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3393548 data_alloc: 251658240 data_used: 36405248
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 30146560 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 30146560 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 30146560 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 30146560 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.688219070s of 19.226505280s, submitted: 14
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 ms_handle_reset con 0x55c0871db000 session 0x55c0876dd680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 179134464 unmapped: 29974528 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f30ba000/0x0/0x4ffc00000, data 0x6185b44/0x5e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3398293 data_alloc: 251658240 data_used: 36405248
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 179134464 unmapped: 29974528 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f3095000/0x0/0x4ffc00000, data 0x61a9b67/0x5e69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 179167232 unmapped: 29941760 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 179167232 unmapped: 29941760 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 180158464 unmapped: 28950528 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 184696832 unmapped: 24412160 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f3095000/0x0/0x4ffc00000, data 0x61a9b67/0x5e69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3444373 data_alloc: 251658240 data_used: 42852352
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 ms_handle_reset con 0x55c087688000 session 0x55c0852f54a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 ms_handle_reset con 0x55c087b03800 session 0x55c0879a9c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 184729600 unmapped: 24379392 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 ms_handle_reset con 0x55c085458800 session 0x55c0873bfa40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 184729600 unmapped: 24379392 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 ms_handle_reset con 0x55c0871db000 session 0x55c087705860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 ms_handle_reset con 0x55c087688000 session 0x55c087a8c780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 ms_handle_reset con 0x55c087b03800 session 0x55c085097860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 ms_handle_reset con 0x55c089faf400 session 0x55c0876fe000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 ms_handle_reset con 0x55c085458800 session 0x55c0852f4780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 185319424 unmapped: 23789568 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 185319424 unmapped: 23789568 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 ms_handle_reset con 0x55c0871db000 session 0x55c0873be960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 ms_handle_reset con 0x55c087688000 session 0x55c088381a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f2cf7000/0x0/0x4ffc00000, data 0x6547b67/0x6207000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 185319424 unmapped: 23789568 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3472818 data_alloc: 251658240 data_used: 42721280
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 185319424 unmapped: 23789568 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 ms_handle_reset con 0x55c087b03800 session 0x55c085645860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.297840118s of 11.516427040s, submitted: 47
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 185319424 unmapped: 23789568 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 393 ms_handle_reset con 0x55c08548b400 session 0x55c0873bf4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 393 heartbeat osd_stat(store_statfs(0x4f2cf7000/0x0/0x4ffc00000, data 0x60136d6/0x6206000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 393 ms_handle_reset con 0x55c085458800 session 0x55c088380f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 185360384 unmapped: 23748608 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 393 heartbeat osd_stat(store_statfs(0x4f2cf7000/0x0/0x4ffc00000, data 0x60136d6/0x6206000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 393 ms_handle_reset con 0x55c087eb8000 session 0x55c08836cf00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 393 ms_handle_reset con 0x55c087eba000 session 0x55c0873f1a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 393 ms_handle_reset con 0x55c0871db000 session 0x55c084fcb4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 393 ms_handle_reset con 0x55c087688000 session 0x55c085f13680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 185393152 unmapped: 23715840 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 393 ms_handle_reset con 0x55c085458800 session 0x55c0876ffe00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 185417728 unmapped: 23691264 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3452873 data_alloc: 251658240 data_used: 42725376
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 185417728 unmapped: 23691264 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 393 ms_handle_reset con 0x55c087eba000 session 0x55c0862ce1e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 186793984 unmapped: 22315008 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 393 heartbeat osd_stat(store_statfs(0x4f4240000/0x0/0x4ffc00000, data 0x48e9708/0x4b0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 187867136 unmapped: 21241856 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 187940864 unmapped: 21168128 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 187891712 unmapped: 21217280 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 393 handle_osd_map epochs [393,394], i have 393, src has [1,394]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 394 heartbeat osd_stat(store_statfs(0x4f3ff5000/0x0/0x4ffc00000, data 0x4b34708/0x4d59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3279725 data_alloc: 251658240 data_used: 32411648
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 394 ms_handle_reset con 0x55c087b03800 session 0x55c0873c3e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 184098816 unmapped: 25010176 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 184098816 unmapped: 25010176 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.971442223s of 11.520406723s, submitted: 164
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 184098816 unmapped: 25010176 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 184098816 unmapped: 25010176 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 184098816 unmapped: 25010176 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 395 ms_handle_reset con 0x55c0895d0400 session 0x55c0854b45a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 395 ms_handle_reset con 0x55c087eb9400 session 0x55c0862cc3c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3281931 data_alloc: 251658240 data_used: 32428032
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 395 ms_handle_reset con 0x55c085458800 session 0x55c089118780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 184131584 unmapped: 24977408 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 395 heartbeat osd_stat(store_statfs(0x4f3858000/0x0/0x4ffc00000, data 0x547ed74/0x56a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 395 ms_handle_reset con 0x55c087b03800 session 0x55c0891181e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 184139776 unmapped: 24969216 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 395 ms_handle_reset con 0x55c087eba000 session 0x55c08506d680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 186376192 unmapped: 22732800 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 396 heartbeat osd_stat(store_statfs(0x4f305f000/0x0/0x4ffc00000, data 0x5c687b4/0x5e90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 396 ms_handle_reset con 0x55c0884ae800 session 0x55c0873bc960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188874752 unmapped: 20234240 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188350464 unmapped: 20758528 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3350319 data_alloc: 251658240 data_used: 33435648
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 396 heartbeat osd_stat(store_statfs(0x4f304a000/0x0/0x4ffc00000, data 0x5c837c4/0x5eac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188350464 unmapped: 20758528 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 396 ms_handle_reset con 0x55c087eb8400 session 0x55c0876ffc20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 396 ms_handle_reset con 0x55c087eb8400 session 0x55c087705e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 396 ms_handle_reset con 0x55c085458800 session 0x55c086231a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 396 ms_handle_reset con 0x55c087eba000 session 0x55c0879d2780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188358656 unmapped: 20750336 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 396 heartbeat osd_stat(store_statfs(0x4f2b26000/0x0/0x4ffc00000, data 0x61a6826/0x63d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188358656 unmapped: 20750336 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188358656 unmapped: 20750336 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.295144081s of 11.999600410s, submitted: 165
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188383232 unmapped: 20725760 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3397376 data_alloc: 251658240 data_used: 33435648
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188383232 unmapped: 20725760 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188383232 unmapped: 20725760 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 396 heartbeat osd_stat(store_statfs(0x4f2b26000/0x0/0x4ffc00000, data 0x61a6826/0x63d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188383232 unmapped: 20725760 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188383232 unmapped: 20725760 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188383232 unmapped: 20725760 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3397376 data_alloc: 251658240 data_used: 33435648
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188383232 unmapped: 20725760 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188383232 unmapped: 20725760 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 396 heartbeat osd_stat(store_statfs(0x4f2b26000/0x0/0x4ffc00000, data 0x61a6826/0x63d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188383232 unmapped: 20725760 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 396 heartbeat osd_stat(store_statfs(0x4f2b26000/0x0/0x4ffc00000, data 0x61a6826/0x63d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188383232 unmapped: 20725760 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 396 heartbeat osd_stat(store_statfs(0x4f2b26000/0x0/0x4ffc00000, data 0x61a6826/0x63d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188383232 unmapped: 20725760 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3397376 data_alloc: 251658240 data_used: 33435648
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188383232 unmapped: 20725760 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188383232 unmapped: 20725760 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 396 ms_handle_reset con 0x55c0884ae800 session 0x55c0873f1680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188383232 unmapped: 20725760 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 396 ms_handle_reset con 0x55c089fb3800 session 0x55c08544b2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188383232 unmapped: 20725760 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 396 heartbeat osd_stat(store_statfs(0x4f2b26000/0x0/0x4ffc00000, data 0x61a6826/0x63d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188407808 unmapped: 20701184 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 396 ms_handle_reset con 0x55c085458800 session 0x55c087252d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3405056 data_alloc: 251658240 data_used: 34521088
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 396 heartbeat osd_stat(store_statfs(0x4f2b26000/0x0/0x4ffc00000, data 0x61a6826/0x63d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 191594496 unmapped: 17514496 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 191594496 unmapped: 17514496 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 396 ms_handle_reset con 0x55c087eb8400 session 0x55c087704f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.771028519s of 17.778024673s, submitted: 1
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 396 ms_handle_reset con 0x55c087eba000 session 0x55c085644d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 396 heartbeat osd_stat(store_statfs(0x4f2b2d000/0x0/0x4ffc00000, data 0x61a6849/0x63d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 191619072 unmapped: 17489920 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 191619072 unmapped: 17489920 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 396 heartbeat osd_stat(store_statfs(0x4f2b2d000/0x0/0x4ffc00000, data 0x61a6849/0x63d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 191619072 unmapped: 17489920 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3429317 data_alloc: 251658240 data_used: 38715392
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 396 heartbeat osd_stat(store_statfs(0x4f2b2d000/0x0/0x4ffc00000, data 0x61a6849/0x63d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 191619072 unmapped: 17489920 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 191627264 unmapped: 17481728 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 191627264 unmapped: 17481728 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 191627264 unmapped: 17481728 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 191627264 unmapped: 17481728 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3440539 data_alloc: 251658240 data_used: 38825984
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 191627264 unmapped: 17481728 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 397 heartbeat osd_stat(store_statfs(0x4f2b16000/0x0/0x4ffc00000, data 0x61ba428/0x63e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 397 heartbeat osd_stat(store_statfs(0x4f2b16000/0x0/0x4ffc00000, data 0x61ba428/0x63e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,1])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 191725568 unmapped: 17383424 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193740800 unmapped: 15368192 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.583783150s of 10.902227402s, submitted: 78
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 398 heartbeat osd_stat(store_statfs(0x4f2481000/0x0/0x4ffc00000, data 0x684dfa5/0x6a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193863680 unmapped: 15245312 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 398 handle_osd_map epochs [398,399], i have 398, src has [1,399]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193871872 unmapped: 15237120 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3520799 data_alloc: 251658240 data_used: 38985728
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193871872 unmapped: 15237120 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 ms_handle_reset con 0x55c089fb1800 session 0x55c0873c54a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193871872 unmapped: 15237120 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 ms_handle_reset con 0x55c087eb8c00 session 0x55c085dda1e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193871872 unmapped: 15237120 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 heartbeat osd_stat(store_statfs(0x4f2466000/0x0/0x4ffc00000, data 0x686cb22/0x6a8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193208320 unmapped: 15900672 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193355776 unmapped: 15753216 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3530095 data_alloc: 251658240 data_used: 39923712
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193355776 unmapped: 15753216 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193355776 unmapped: 15753216 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 ms_handle_reset con 0x55c089fafc00 session 0x55c085f710e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 heartbeat osd_stat(store_statfs(0x4f2472000/0x0/0x4ffc00000, data 0x686cb22/0x6a8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 ms_handle_reset con 0x55c087789000 session 0x55c0873c23c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 ms_handle_reset con 0x55c087eb8400 session 0x55c087b45a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193355776 unmapped: 15753216 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193388544 unmapped: 15720448 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 ms_handle_reset con 0x55c085458800 session 0x55c0852f54a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193388544 unmapped: 15720448 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3527323 data_alloc: 251658240 data_used: 39968768
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.126725197s of 12.249805450s, submitted: 47
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 ms_handle_reset con 0x55c087eba000 session 0x55c0879a8f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193388544 unmapped: 15720448 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193388544 unmapped: 15720448 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 heartbeat osd_stat(store_statfs(0x4f246f000/0x0/0x4ffc00000, data 0x686db84/0x6a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193388544 unmapped: 15720448 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193388544 unmapped: 15720448 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 heartbeat osd_stat(store_statfs(0x4f246f000/0x0/0x4ffc00000, data 0x686db84/0x6a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193388544 unmapped: 15720448 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3531672 data_alloc: 251658240 data_used: 40001536
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193626112 unmapped: 15482880 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 ms_handle_reset con 0x55c085458800 session 0x55c086ee90e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193626112 unmapped: 15482880 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193626112 unmapped: 15482880 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 ms_handle_reset con 0x55c087789000 session 0x55c085f123c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 ms_handle_reset con 0x55c087eb8400 session 0x55c0879d2d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 heartbeat osd_stat(store_statfs(0x4f2464000/0x0/0x4ffc00000, data 0x6879b84/0x6a9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 ms_handle_reset con 0x55c089fafc00 session 0x55c085dda960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193658880 unmapped: 15450112 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 ms_handle_reset con 0x55c089fb1800 session 0x55c087143c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 ms_handle_reset con 0x55c085458800 session 0x55c087705a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193658880 unmapped: 15450112 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3535346 data_alloc: 251658240 data_used: 40001536
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193658880 unmapped: 15450112 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 heartbeat osd_stat(store_statfs(0x4f2462000/0x0/0x4ffc00000, data 0x6879ba4/0x6a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193658880 unmapped: 15450112 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193708032 unmapped: 15400960 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193708032 unmapped: 15400960 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 ms_handle_reset con 0x55c087789000 session 0x55c0883805a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 heartbeat osd_stat(store_statfs(0x4f2462000/0x0/0x4ffc00000, data 0x6879ba4/0x6a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193708032 unmapped: 15400960 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3536626 data_alloc: 251658240 data_used: 40091648
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 ms_handle_reset con 0x55c087b04000 session 0x55c08506cd20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.142742157s of 15.204904556s, submitted: 17
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 ms_handle_reset con 0x55c087b0d800 session 0x55c087132780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 21K writes, 85K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 21K writes, 7462 syncs, 2.83 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 43K keys, 10K commit groups, 1.0 writes per commit group, ingest: 29.18 MB, 0.05 MB/s#012Interval WAL: 10K writes, 4367 syncs, 2.37 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 heartbeat osd_stat(store_statfs(0x4f2463000/0x0/0x4ffc00000, data 0x6879b94/0x6a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193708032 unmapped: 15400960 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 heartbeat osd_stat(store_statfs(0x4f2463000/0x0/0x4ffc00000, data 0x6879b94/0x6a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193798144 unmapped: 15310848 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 ms_handle_reset con 0x55c0884ae800 session 0x55c08543a5a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 ms_handle_reset con 0x55c089fad400 session 0x55c0879a9c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 ms_handle_reset con 0x55c085458800 session 0x55c0873f10e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 ms_handle_reset con 0x55c087789000 session 0x55c087a8da40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193798144 unmapped: 15310848 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 ms_handle_reset con 0x55c087b04000 session 0x55c085452960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193798144 unmapped: 15310848 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 heartbeat osd_stat(store_statfs(0x4f246f000/0x0/0x4ffc00000, data 0x686db0f/0x6a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 ms_handle_reset con 0x55c087b0d800 session 0x55c08544b680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193798144 unmapped: 15310848 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3536196 data_alloc: 251658240 data_used: 40747008
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193798144 unmapped: 15310848 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193814528 unmapped: 15294464 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 400 ms_handle_reset con 0x55c087b0d800 session 0x55c0873c3860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 400 heartbeat osd_stat(store_statfs(0x4f246d000/0x0/0x4ffc00000, data 0x686f6e0/0x6a90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 193814528 unmapped: 15294464 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 401 ms_handle_reset con 0x55c085458800 session 0x55c08490d2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 401 ms_handle_reset con 0x55c087789000 session 0x55c085452780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 401 heartbeat osd_stat(store_statfs(0x4f50f1000/0x0/0x4ffc00000, data 0x32d42b1/0x34fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 187916288 unmapped: 21192704 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 402 ms_handle_reset con 0x55c087b04000 session 0x55c0872525a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 187940864 unmapped: 21168128 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3106004 data_alloc: 251658240 data_used: 31076352
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 402 ms_handle_reset con 0x55c089fad400 session 0x55c0876dc000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 187940864 unmapped: 21168128 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.406373024s of 10.912814140s, submitted: 80
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 402 ms_handle_reset con 0x55c0871db000 session 0x55c085ed5a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 402 ms_handle_reset con 0x55c087eb8000 session 0x55c085ed54a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f5a1a000/0x0/0x4ffc00000, data 0x32b1e20/0x34e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 187965440 unmapped: 21143552 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 402 ms_handle_reset con 0x55c089fad400 session 0x55c085f71a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182665216 unmapped: 26443776 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182665216 unmapped: 26443776 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182050816 unmapped: 27058176 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 402 ms_handle_reset con 0x55c085458800 session 0x55c0891190e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 402 ms_handle_reset con 0x55c087789000 session 0x55c08836c000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2938158 data_alloc: 234881024 data_used: 23085056
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182116352 unmapped: 26992640 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f690a000/0x0/0x4ffc00000, data 0x23c3dee/0x25f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182116352 unmapped: 26992640 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182116352 unmapped: 26992640 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 181985280 unmapped: 27123712 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f6906000/0x0/0x4ffc00000, data 0x23c5851/0x25f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182001664 unmapped: 27107328 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2941024 data_alloc: 234881024 data_used: 23093248
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182001664 unmapped: 27107328 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182001664 unmapped: 27107328 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.795311928s of 10.955400467s, submitted: 63
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182272000 unmapped: 26836992 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 403 ms_handle_reset con 0x55c087789000 session 0x55c085f132c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182272000 unmapped: 26836992 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182272000 unmapped: 26836992 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2942926 data_alloc: 234881024 data_used: 23093248
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f68c6000/0x0/0x4ffc00000, data 0x2406851/0x2638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182272000 unmapped: 26836992 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182272000 unmapped: 26836992 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182280192 unmapped: 26828800 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182353920 unmapped: 26755072 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182353920 unmapped: 26755072 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943490 data_alloc: 234881024 data_used: 23089152
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182353920 unmapped: 26755072 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f68be000/0x0/0x4ffc00000, data 0x240b851/0x263d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182353920 unmapped: 26755072 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.464027405s of 10.490324974s, submitted: 5
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 ms_handle_reset con 0x55c085458800 session 0x55c085ddab40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182378496 unmapped: 26730496 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182378496 unmapped: 26730496 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 ms_handle_reset con 0x55c087eb8000 session 0x55c087a8d0e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 ms_handle_reset con 0x55c0871db000 session 0x55c087b45680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182378496 unmapped: 26730496 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2950404 data_alloc: 234881024 data_used: 23449600
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f68bc000/0x0/0x4ffc00000, data 0x240d3de/0x2641000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182378496 unmapped: 26730496 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 ms_handle_reset con 0x55c089fad400 session 0x55c0876dd860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 186621952 unmapped: 22487040 heap: 209108992 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 ms_handle_reset con 0x55c089fad400 session 0x55c085ddaf00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182435840 unmapped: 30875648 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182435840 unmapped: 30875648 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182435840 unmapped: 30875648 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f58f9000/0x0/0x4ffc00000, data 0x34833de/0x3605000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3067626 data_alloc: 234881024 data_used: 22925312
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182435840 unmapped: 30875648 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182435840 unmapped: 30875648 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f58f9000/0x0/0x4ffc00000, data 0x34833de/0x3605000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182435840 unmapped: 30875648 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182435840 unmapped: 30875648 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182435840 unmapped: 30875648 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3067626 data_alloc: 234881024 data_used: 22925312
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 ms_handle_reset con 0x55c085458800 session 0x55c0873c30e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182435840 unmapped: 30875648 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182452224 unmapped: 30859264 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 ms_handle_reset con 0x55c087eb8000 session 0x55c087705860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f58f9000/0x0/0x4ffc00000, data 0x34833de/0x3605000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182468608 unmapped: 30842880 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 ms_handle_reset con 0x55c087b04000 session 0x55c08836d860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182468608 unmapped: 30842880 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f58f9000/0x0/0x4ffc00000, data 0x34833de/0x3605000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 ms_handle_reset con 0x55c087b0d800 session 0x55c0871281e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.345941544s of 16.533685684s, submitted: 11
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 ms_handle_reset con 0x55c087b0d800 session 0x55c0854b85a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182771712 unmapped: 30539776 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3072235 data_alloc: 234881024 data_used: 22941696
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182796288 unmapped: 30515200 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182796288 unmapped: 30515200 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182796288 unmapped: 30515200 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182804480 unmapped: 30507008 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f58d5000/0x0/0x4ffc00000, data 0x34a73de/0x3629000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182804480 unmapped: 30507008 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3080875 data_alloc: 234881024 data_used: 24055808
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182804480 unmapped: 30507008 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182804480 unmapped: 30507008 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182804480 unmapped: 30507008 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 186490880 unmapped: 26820608 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f55c0000/0x0/0x4ffc00000, data 0x37bc3de/0x393e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 186490880 unmapped: 26820608 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3117973 data_alloc: 234881024 data_used: 24834048
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 186490880 unmapped: 26820608 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.837846756s of 12.906675339s, submitted: 9
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 183296000 unmapped: 30015488 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 183296000 unmapped: 30015488 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f55bf000/0x0/0x4ffc00000, data 0x382b3de/0x393f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 187473920 unmapped: 25837568 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 190832640 unmapped: 22478848 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3143535 data_alloc: 234881024 data_used: 25034752
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 190939136 unmapped: 22372352 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f4412000/0x0/0x4ffc00000, data 0x47483de/0x485c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 190971904 unmapped: 22339584 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 190971904 unmapped: 22339584 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 190971904 unmapped: 22339584 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 190971904 unmapped: 22339584 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247975 data_alloc: 234881024 data_used: 25030656
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 190971904 unmapped: 22339584 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f4412000/0x0/0x4ffc00000, data 0x47483de/0x485c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 190971904 unmapped: 22339584 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 190971904 unmapped: 22339584 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.298577309s of 11.627500534s, submitted: 68
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188448768 unmapped: 24862720 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 ms_handle_reset con 0x55c085458800 session 0x55c08634d680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 ms_handle_reset con 0x55c087b04000 session 0x55c0873bc780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 ms_handle_reset con 0x55c087eb8000 session 0x55c0852f5e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 24911872 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3226619 data_alloc: 234881024 data_used: 25100288
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 24911872 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f46c5000/0x0/0x4ffc00000, data 0x47253de/0x4839000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 24911872 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f46c5000/0x0/0x4ffc00000, data 0x47253de/0x4839000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 24911872 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f46c5000/0x0/0x4ffc00000, data 0x47253de/0x4839000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 24911872 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 24911872 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f46c5000/0x0/0x4ffc00000, data 0x47253de/0x4839000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3226619 data_alloc: 234881024 data_used: 25100288
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 24911872 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 24911872 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f46c5000/0x0/0x4ffc00000, data 0x47253de/0x4839000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 24911872 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 24911872 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.447495461s of 11.523307800s, submitted: 23
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188407808 unmapped: 24903680 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3226619 data_alloc: 234881024 data_used: 25100288
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188424192 unmapped: 24887296 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f46c5000/0x0/0x4ffc00000, data 0x47253de/0x4839000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188456960 unmapped: 24854528 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 ms_handle_reset con 0x55c0871db000 session 0x55c0876d85a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 ms_handle_reset con 0x55c087789000 session 0x55c085ddaf00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 ms_handle_reset con 0x55c085458800 session 0x55c0873bc3c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188456960 unmapped: 24854528 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188456960 unmapped: 24854528 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188456960 unmapped: 24854528 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3227899 data_alloc: 234881024 data_used: 25227264
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 ms_handle_reset con 0x55c087b04000 session 0x55c087a8cd20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188456960 unmapped: 24854528 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 ms_handle_reset con 0x55c087b0d800 session 0x55c085c485a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 ms_handle_reset con 0x55c087eb8000 session 0x55c085c48960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 ms_handle_reset con 0x55c087eb8000 session 0x55c085096b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188456960 unmapped: 24854528 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 405 ms_handle_reset con 0x55c085458800 session 0x55c0850972c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 405 heartbeat osd_stat(store_statfs(0x4f4aac000/0x0/0x4ffc00000, data 0x42cef9f/0x4451000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188456960 unmapped: 24854528 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 405 ms_handle_reset con 0x55c087eb8400 session 0x55c085bb0780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 405 ms_handle_reset con 0x55c089fafc00 session 0x55c0876fe5a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188465152 unmapped: 24846336 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 405 ms_handle_reset con 0x55c087b04000 session 0x55c0873bfa40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 405 heartbeat osd_stat(store_statfs(0x4f4ab0000/0x0/0x4ffc00000, data 0x4218f9f/0x444d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188465152 unmapped: 24846336 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3189488 data_alloc: 234881024 data_used: 25100288
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188465152 unmapped: 24846336 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 405 heartbeat osd_stat(store_statfs(0x4f4ab1000/0x0/0x4ffc00000, data 0x4218f8f/0x444c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188465152 unmapped: 24846336 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 405 heartbeat osd_stat(store_statfs(0x4f4ab1000/0x0/0x4ffc00000, data 0x4218f8f/0x444c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 405 heartbeat osd_stat(store_statfs(0x4f4ab1000/0x0/0x4ffc00000, data 0x4218f8f/0x444c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 188465152 unmapped: 24846336 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 405 ms_handle_reset con 0x55c087eb8000 session 0x55c087b445a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.049304008s of 13.405330658s, submitted: 110
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 405 ms_handle_reset con 0x55c087eb8400 session 0x55c085dda000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 405 heartbeat osd_stat(store_statfs(0x4f4ab1000/0x0/0x4ffc00000, data 0x4218f8f/0x444c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182730752 unmapped: 30580736 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182730752 unmapped: 30580736 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037344 data_alloc: 234881024 data_used: 17805312
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182730752 unmapped: 30580736 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182738944 unmapped: 30572544 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182738944 unmapped: 30572544 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 406 heartbeat osd_stat(store_statfs(0x4f5671000/0x0/0x4ffc00000, data 0x3659980/0x388c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182738944 unmapped: 30572544 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182738944 unmapped: 30572544 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3041342 data_alloc: 234881024 data_used: 17813504
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182738944 unmapped: 30572544 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182738944 unmapped: 30572544 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 406 heartbeat osd_stat(store_statfs(0x4f5671000/0x0/0x4ffc00000, data 0x3659980/0x388c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182738944 unmapped: 30572544 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182738944 unmapped: 30572544 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 406 heartbeat osd_stat(store_statfs(0x4f5671000/0x0/0x4ffc00000, data 0x3659980/0x388c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182738944 unmapped: 30572544 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3050302 data_alloc: 234881024 data_used: 18223104
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182738944 unmapped: 30572544 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182738944 unmapped: 30572544 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182738944 unmapped: 30572544 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182738944 unmapped: 30572544 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.843030930s of 15.963332176s, submitted: 30
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 406 ms_handle_reset con 0x55c089fafc00 session 0x55c08634c3c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182738944 unmapped: 30572544 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051776 data_alloc: 234881024 data_used: 18223104
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 406 heartbeat osd_stat(store_statfs(0x4f5671000/0x0/0x4ffc00000, data 0x3659980/0x388c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182738944 unmapped: 30572544 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 406 ms_handle_reset con 0x55c089fad400 session 0x55c085452960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182738944 unmapped: 30572544 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 407 ms_handle_reset con 0x55c087ebb000 session 0x55c0879d2960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182755328 unmapped: 30556160 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 407 ms_handle_reset con 0x55c087b0a000 session 0x55c08490d2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 408 ms_handle_reset con 0x55c087b0d800 session 0x55c0883810e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182763520 unmapped: 30547968 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 409 heartbeat osd_stat(store_statfs(0x4f5669000/0x0/0x4ffc00000, data 0x365d08a/0x3893000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 409 ms_handle_reset con 0x55c087eb8000 session 0x55c085ddb680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182763520 unmapped: 30547968 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062017 data_alloc: 234881024 data_used: 18231296
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182763520 unmapped: 30547968 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182763520 unmapped: 30547968 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 409 ms_handle_reset con 0x55c087eb8400 session 0x55c087b45860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182763520 unmapped: 30547968 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 410 ms_handle_reset con 0x55c089fad400 session 0x55c0873f1680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182771712 unmapped: 30539776 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.898279190s of 10.019945145s, submitted: 29
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 411 ms_handle_reset con 0x55c089fafc00 session 0x55c0872525a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182771712 unmapped: 30539776 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3072410 data_alloc: 234881024 data_used: 18235392
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 411 heartbeat osd_stat(store_statfs(0x4f565f000/0x0/0x4ffc00000, data 0x3662441/0x389d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182771712 unmapped: 30539776 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 412 ms_handle_reset con 0x55c087b0a000 session 0x55c0876dc1e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182779904 unmapped: 30531584 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182779904 unmapped: 30531584 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182788096 unmapped: 30523392 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 413 ms_handle_reset con 0x55c087789000 session 0x55c0852f4780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 413 ms_handle_reset con 0x55c085458800 session 0x55c0873be3c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 413 heartbeat osd_stat(store_statfs(0x4f565c000/0x0/0x4ffc00000, data 0x36659dd/0x38a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 413 ms_handle_reset con 0x55c087b0d800 session 0x55c0871332c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182845440 unmapped: 30466048 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3086468 data_alloc: 234881024 data_used: 19910656
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 414 ms_handle_reset con 0x55c085458800 session 0x55c0873c3860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 414 ms_handle_reset con 0x55c087789000 session 0x55c08544a780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182861824 unmapped: 30449664 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 414 ms_handle_reset con 0x55c087b0a000 session 0x55c0862cd0e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182861824 unmapped: 30449664 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 415 ms_handle_reset con 0x55c089fafc00 session 0x55c0852f4000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f5658000/0x0/0x4ffc00000, data 0x3667576/0x38a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182861824 unmapped: 30449664 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 416 ms_handle_reset con 0x55c087eb8400 session 0x55c0862cde00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182870016 unmapped: 30441472 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 416 ms_handle_reset con 0x55c087eb8000 session 0x55c0876dd860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.060652733s of 10.184609413s, submitted: 74
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 417 ms_handle_reset con 0x55c085458800 session 0x55c087143c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182026240 unmapped: 31285248 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2867148 data_alloc: 234881024 data_used: 16289792
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 417 ms_handle_reset con 0x55c087789000 session 0x55c08543a000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182026240 unmapped: 31285248 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 418 ms_handle_reset con 0x55c089fafc00 session 0x55c08836c5a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 419 ms_handle_reset con 0x55c087b0a000 session 0x55c08544b680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 419 ms_handle_reset con 0x55c085458800 session 0x55c0891190e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182034432 unmapped: 31277056 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182034432 unmapped: 31277056 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 419 heartbeat osd_stat(store_statfs(0x4f7491000/0x0/0x4ffc00000, data 0x1826481/0x1a6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 419 ms_handle_reset con 0x55c087789000 session 0x55c0876d9e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182034432 unmapped: 31277056 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 420 ms_handle_reset con 0x55c087eb8000 session 0x55c0854532c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182042624 unmapped: 31268864 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2879910 data_alloc: 234881024 data_used: 16289792
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 420 ms_handle_reset con 0x55c089fafc00 session 0x55c088380780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 421 ms_handle_reset con 0x55c087b05800 session 0x55c0873bfa40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 181207040 unmapped: 32104448 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 421 heartbeat osd_stat(store_statfs(0x4f748c000/0x0/0x4ffc00000, data 0x182a06d/0x1a71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 181207040 unmapped: 32104448 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 422 ms_handle_reset con 0x55c085458800 session 0x55c0876fe5a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 422 ms_handle_reset con 0x55c087789000 session 0x55c0850972c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 181207040 unmapped: 32104448 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 181207040 unmapped: 32104448 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.914997101s of 10.013813019s, submitted: 37
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 423 ms_handle_reset con 0x55c087eb8000 session 0x55c085096b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 423 ms_handle_reset con 0x55c089fafc00 session 0x55c085c485a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 181207040 unmapped: 32104448 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2886082 data_alloc: 234881024 data_used: 16293888
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 424 ms_handle_reset con 0x55c08949c800 session 0x55c087a8cd20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 181207040 unmapped: 32104448 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 425 ms_handle_reset con 0x55c085458800 session 0x55c0852f5e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 425 heartbeat osd_stat(store_statfs(0x4f7074000/0x0/0x4ffc00000, data 0x182e9fc/0x1a78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 181141504 unmapped: 32169984 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 181141504 unmapped: 32169984 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 181141504 unmapped: 32169984 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 181141504 unmapped: 32169984 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2892350 data_alloc: 234881024 data_used: 16302080
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 425 handle_osd_map epochs [427,428], i have 425, src has [1,428]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 425 handle_osd_map epochs [426,428], i have 425, src has [1,428]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 181248000 unmapped: 32063488 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 428 handle_osd_map epochs [429,430], i have 428, src has [1,430]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 430 heartbeat osd_stat(store_statfs(0x4f7069000/0x0/0x4ffc00000, data 0x1835994/0x1a84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182321152 unmapped: 30990336 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 430 ms_handle_reset con 0x55c087789000 session 0x55c0854b85a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182321152 unmapped: 30990336 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182321152 unmapped: 30990336 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182321152 unmapped: 30990336 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2909476 data_alloc: 234881024 data_used: 16302080
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.984180450s of 11.566115379s, submitted: 84
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182345728 unmapped: 30965760 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 431 heartbeat osd_stat(store_statfs(0x4f7060000/0x0/0x4ffc00000, data 0x183ac09/0x1a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 432 ms_handle_reset con 0x55c087eb8000 session 0x55c0871281e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 432 heartbeat osd_stat(store_statfs(0x4f7060000/0x0/0x4ffc00000, data 0x183ac09/0x1a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 432 ms_handle_reset con 0x55c089fafc00 session 0x55c0871290e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182345728 unmapped: 30965760 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 432 heartbeat osd_stat(store_statfs(0x4f705c000/0x0/0x4ffc00000, data 0x183c85a/0x1a90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 432 heartbeat osd_stat(store_statfs(0x4f705c000/0x0/0x4ffc00000, data 0x183c85a/0x1a90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182345728 unmapped: 30965760 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182345728 unmapped: 30965760 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182345728 unmapped: 30965760 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2913248 data_alloc: 234881024 data_used: 16302080
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182345728 unmapped: 30965760 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 432 handle_osd_map epochs [432,433], i have 432, src has [1,433]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182362112 unmapped: 30949376 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 433 ms_handle_reset con 0x55c087282000 session 0x55c0854b4b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 433 heartbeat osd_stat(store_statfs(0x4f7059000/0x0/0x4ffc00000, data 0x183e35b/0x1a94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182362112 unmapped: 30949376 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182362112 unmapped: 30949376 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 433 heartbeat osd_stat(store_statfs(0x4f7059000/0x0/0x4ffc00000, data 0x183e35b/0x1a94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 433 handle_osd_map epochs [433,434], i have 433, src has [1,434]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 434 ms_handle_reset con 0x55c085458800 session 0x55c0879a92c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182362112 unmapped: 30949376 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2920305 data_alloc: 234881024 data_used: 16306176
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.950528145s of 10.117654800s, submitted: 75
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 434 ms_handle_reset con 0x55c087282000 session 0x55c085f70780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182378496 unmapped: 30932992 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 434 ms_handle_reset con 0x55c087789000 session 0x55c087132b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 434 ms_handle_reset con 0x55c087eb8000 session 0x55c085dda1e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 434 ms_handle_reset con 0x55c089fafc00 session 0x55c0873f1c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182386688 unmapped: 30924800 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 435 ms_handle_reset con 0x55c085458800 session 0x55c0879d2f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 435 ms_handle_reset con 0x55c087282000 session 0x55c085645860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182394880 unmapped: 30916608 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 435 ms_handle_reset con 0x55c087789000 session 0x55c087132b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 435 ms_handle_reset con 0x55c087eb8000 session 0x55c0854b4b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 435 heartbeat osd_stat(store_statfs(0x4f7051000/0x0/0x4ffc00000, data 0x18419c9/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182403072 unmapped: 30908416 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 435 heartbeat osd_stat(store_statfs(0x4f7051000/0x0/0x4ffc00000, data 0x18419c9/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182411264 unmapped: 30900224 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 435 ms_handle_reset con 0x55c087b06c00 session 0x55c0873bfa40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2930760 data_alloc: 234881024 data_used: 16310272
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 435 ms_handle_reset con 0x55c085458800 session 0x55c0876d9e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182427648 unmapped: 30883840 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 435 ms_handle_reset con 0x55c087282000 session 0x55c08836c5a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182444032 unmapped: 30867456 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182444032 unmapped: 30867456 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 435 heartbeat osd_stat(store_statfs(0x4f7054000/0x0/0x4ffc00000, data 0x1841957/0x1a9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 435 ms_handle_reset con 0x55c087789000 session 0x55c0873c3860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182444032 unmapped: 30867456 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 436 ms_handle_reset con 0x55c087eb8000 session 0x55c0854532c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182435840 unmapped: 30875648 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2933565 data_alloc: 234881024 data_used: 16338944
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 436 ms_handle_reset con 0x55c087a14c00 session 0x55c0891190e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182435840 unmapped: 30875648 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f8091000/0x0/0x4ffc00000, data 0x18434c6/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182435840 unmapped: 30875648 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f8091000/0x0/0x4ffc00000, data 0x18434c6/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.405010223s of 11.743053436s, submitted: 96
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f8051000/0x0/0x4ffc00000, data 0x1883529/0x1add000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182714368 unmapped: 30597120 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 436 ms_handle_reset con 0x55c085458800 session 0x55c0873bf4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182722560 unmapped: 30588928 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182722560 unmapped: 30588928 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2938625 data_alloc: 234881024 data_used: 16334848
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f8051000/0x0/0x4ffc00000, data 0x1883529/0x1add000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 436 ms_handle_reset con 0x55c087282000 session 0x55c0879d2d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182722560 unmapped: 30588928 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 437 ms_handle_reset con 0x55c087789000 session 0x55c087705a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182747136 unmapped: 30564352 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 437 handle_osd_map epochs [438,439], i have 437, src has [1,439]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182796288 unmapped: 30515200 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 439 ms_handle_reset con 0x55c087eb8000 session 0x55c0862cf4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182796288 unmapped: 30515200 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 439 ms_handle_reset con 0x55c087afec00 session 0x55c085ed5680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182804480 unmapped: 30507008 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956090 data_alloc: 234881024 data_used: 16351232
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 439 heartbeat osd_stat(store_statfs(0x4f8044000/0x0/0x4ffc00000, data 0x1888777/0x1ae9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182804480 unmapped: 30507008 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 439 handle_osd_map epochs [439,440], i have 439, src has [1,440]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 440 ms_handle_reset con 0x55c085458800 session 0x55c085ed43c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182820864 unmapped: 30490624 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 440 ms_handle_reset con 0x55c087282000 session 0x55c0876ff0e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182829056 unmapped: 30482432 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.248563766s of 11.493333817s, submitted: 71
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 440 ms_handle_reset con 0x55c087789000 session 0x55c086231a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182837248 unmapped: 30474240 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 441 ms_handle_reset con 0x55c087eb8000 session 0x55c0879d23c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182837248 unmapped: 30474240 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2963082 data_alloc: 234881024 data_used: 16375808
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 441 ms_handle_reset con 0x55c08af20400 session 0x55c0891183c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 441 ms_handle_reset con 0x55c085458800 session 0x55c088381860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182878208 unmapped: 30433280 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 441 heartbeat osd_stat(store_statfs(0x4f803e000/0x0/0x4ffc00000, data 0x188bf05/0x1aef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 441 ms_handle_reset con 0x55c087282000 session 0x55c0876dd0e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 441 heartbeat osd_stat(store_statfs(0x4f8040000/0x0/0x4ffc00000, data 0x188be53/0x1aed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182878208 unmapped: 30433280 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 442 ms_handle_reset con 0x55c087789000 session 0x55c085bb0780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 442 ms_handle_reset con 0x55c087eb8000 session 0x55c0871332c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 442 ms_handle_reset con 0x55c0861af800 session 0x55c0854b8780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 442 ms_handle_reset con 0x55c085458800 session 0x55c0876ff2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 442 ms_handle_reset con 0x55c087282000 session 0x55c085453680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 442 ms_handle_reset con 0x55c087789000 session 0x55c0854b45a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 442 ms_handle_reset con 0x55c087eb8000 session 0x55c087a8cf00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 442 ms_handle_reset con 0x55c08a3db000 session 0x55c08506d680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 442 ms_handle_reset con 0x55c085458800 session 0x55c0862ccd20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182042624 unmapped: 31268864 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182042624 unmapped: 31268864 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182042624 unmapped: 31268864 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2983266 data_alloc: 234881024 data_used: 16371712
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182042624 unmapped: 31268864 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182042624 unmapped: 31268864 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 442 heartbeat osd_stat(store_statfs(0x4f7f1b000/0x0/0x4ffc00000, data 0x19aea86/0x1c12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 442 handle_osd_map epochs [443,443], i have 442, src has [1,443]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 443 ms_handle_reset con 0x55c087282000 session 0x55c085bb0b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f7f18000/0x0/0x4ffc00000, data 0x19b0505/0x1c15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 443 ms_handle_reset con 0x55c087789000 session 0x55c08836d2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182001664 unmapped: 31309824 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f7f18000/0x0/0x4ffc00000, data 0x19b0505/0x1c15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 443 ms_handle_reset con 0x55c087eb8000 session 0x55c087128960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 443 ms_handle_reset con 0x55c087b04400 session 0x55c0876dcb40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 443 ms_handle_reset con 0x55c085458800 session 0x55c0873f0f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182001664 unmapped: 31309824 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.797324181s of 10.212483406s, submitted: 110
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182001664 unmapped: 31309824 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993960 data_alloc: 234881024 data_used: 17031168
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f7f16000/0x0/0x4ffc00000, data 0x19b0577/0x1c17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 443 ms_handle_reset con 0x55c087eb8000 session 0x55c0879d25a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182001664 unmapped: 31309824 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 443 handle_osd_map epochs [443,444], i have 443, src has [1,444]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 183050240 unmapped: 30261248 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 444 ms_handle_reset con 0x55c087284400 session 0x55c0873bde00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 444 handle_osd_map epochs [445,445], i have 444, src has [1,445]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 445 ms_handle_reset con 0x55c086f82400 session 0x55c0876feb40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 445 ms_handle_reset con 0x55c086fb8000 session 0x55c089118780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 445 ms_handle_reset con 0x55c08a5ae000 session 0x55c0876fe000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 183074816 unmapped: 30236672 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 183074816 unmapped: 30236672 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 445 heartbeat osd_stat(store_statfs(0x4f7f0d000/0x0/0x4ffc00000, data 0x19b41ff/0x1c1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 445 heartbeat osd_stat(store_statfs(0x4f7f0d000/0x0/0x4ffc00000, data 0x19b41ff/0x1c1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 183074816 unmapped: 30236672 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3005072 data_alloc: 234881024 data_used: 17465344
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 445 handle_osd_map epochs [446,446], i have 445, src has [1,446]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 183074816 unmapped: 30236672 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 446 ms_handle_reset con 0x55c085458800 session 0x55c085dda960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 446 ms_handle_reset con 0x55c086f82400 session 0x55c086ee9a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 446 handle_osd_map epochs [446,447], i have 446, src has [1,447]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 183074816 unmapped: 30236672 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 448 ms_handle_reset con 0x55c087284400 session 0x55c087b45e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 183083008 unmapped: 30228480 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 448 ms_handle_reset con 0x55c087eb8000 session 0x55c0873bd2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 183083008 unmapped: 30228480 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 183083008 unmapped: 30228480 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013972 data_alloc: 234881024 data_used: 17481728
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f7f07000/0x0/0x4ffc00000, data 0x19b8ea0/0x1c25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.678957939s of 11.929112434s, submitted: 77
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 183091200 unmapped: 30220288 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f7f07000/0x0/0x4ffc00000, data 0x19b8ea0/0x1c25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 185171968 unmapped: 28139520 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182771712 unmapped: 30539776 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182771712 unmapped: 30539776 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182771712 unmapped: 30539776 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092754 data_alloc: 234881024 data_used: 17817600
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 448 ms_handle_reset con 0x55c085458800 session 0x55c086ee8f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 448 handle_osd_map epochs [449,449], i have 448, src has [1,449]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 449 ms_handle_reset con 0x55c086f82400 session 0x55c0876dc780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182804480 unmapped: 30507008 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 449 handle_osd_map epochs [450,450], i have 449, src has [1,450]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 450 ms_handle_reset con 0x55c087284400 session 0x55c0872523c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 450 heartbeat osd_stat(store_statfs(0x4f7614000/0x0/0x4ffc00000, data 0x22aba55/0x2519000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182804480 unmapped: 30507008 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 450 ms_handle_reset con 0x55c08a5ae000 session 0x55c089119e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182804480 unmapped: 30507008 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 450 heartbeat osd_stat(store_statfs(0x4f7610000/0x0/0x4ffc00000, data 0x22ad642/0x251c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182804480 unmapped: 30507008 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 450 ms_handle_reset con 0x55c085546400 session 0x55c087704d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 450 handle_osd_map epochs [451,451], i have 450, src has [1,451]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 451 ms_handle_reset con 0x55c086f82400 session 0x55c0873c5860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182829056 unmapped: 30482432 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3106634 data_alloc: 234881024 data_used: 17842176
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 451 heartbeat osd_stat(store_statfs(0x4f760a000/0x0/0x4ffc00000, data 0x22b1231/0x2523000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 451 handle_osd_map epochs [451,452], i have 451, src has [1,452]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 452 ms_handle_reset con 0x55c085546400 session 0x55c088380d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 452 ms_handle_reset con 0x55c085458800 session 0x55c08836c960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182829056 unmapped: 30482432 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.235145569s of 10.586992264s, submitted: 117
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 452 handle_osd_map epochs [453,453], i have 452, src has [1,453]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 453 ms_handle_reset con 0x55c087284400 session 0x55c0862cef00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182837248 unmapped: 30474240 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 453 ms_handle_reset con 0x55c08a5ae000 session 0x55c0862314a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182845440 unmapped: 30466048 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182845440 unmapped: 30466048 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 453 ms_handle_reset con 0x55c085546400 session 0x55c088380f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182845440 unmapped: 30466048 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3115488 data_alloc: 234881024 data_used: 18112512
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 453 heartbeat osd_stat(store_statfs(0x4f7605000/0x0/0x4ffc00000, data 0x22b491d/0x2528000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182845440 unmapped: 30466048 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 453 ms_handle_reset con 0x55c087284400 session 0x55c0871332c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 453 handle_osd_map epochs [454,454], i have 453, src has [1,454]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 454 ms_handle_reset con 0x55c085458800 session 0x55c087a8c1e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 454 ms_handle_reset con 0x55c087eb9000 session 0x55c0873bf860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182845440 unmapped: 30466048 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 454 handle_osd_map epochs [454,455], i have 454, src has [1,455]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 455 ms_handle_reset con 0x55c0895d0800 session 0x55c085bb0780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182845440 unmapped: 30466048 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 455 ms_handle_reset con 0x55c0895d0800 session 0x55c0873c4960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 455 ms_handle_reset con 0x55c087282000 session 0x55c0873bd4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 455 ms_handle_reset con 0x55c087789000 session 0x55c08544b2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 455 ms_handle_reset con 0x55c086f82400 session 0x55c088380000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 455 heartbeat osd_stat(store_statfs(0x4f75fa000/0x0/0x4ffc00000, data 0x22b9617/0x2533000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182484992 unmapped: 30826496 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 455 heartbeat osd_stat(store_statfs(0x4f75fa000/0x0/0x4ffc00000, data 0x22b9617/0x2533000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 455 ms_handle_reset con 0x55c085458800 session 0x55c08836da40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 455 heartbeat osd_stat(store_statfs(0x4f8010000/0x0/0x4ffc00000, data 0x18a45a5/0x1b1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 455 handle_osd_map epochs [455,456], i have 455, src has [1,456]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 456 ms_handle_reset con 0x55c085546400 session 0x55c088381860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182484992 unmapped: 30826496 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3036714 data_alloc: 234881024 data_used: 16723968
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 456 ms_handle_reset con 0x55c086f82400 session 0x55c08836c960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182484992 unmapped: 30826496 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 456 handle_osd_map epochs [457,457], i have 456, src has [1,457]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.194192886s of 10.175275803s, submitted: 120
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182484992 unmapped: 30826496 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 457 handle_osd_map epochs [457,458], i have 457, src has [1,458]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182493184 unmapped: 30818304 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 458 ms_handle_reset con 0x55c087282000 session 0x55c087704d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 458 ms_handle_reset con 0x55c087789000 session 0x55c0873bd2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 458 heartbeat osd_stat(store_statfs(0x4f800e000/0x0/0x4ffc00000, data 0x18a7b31/0x1b1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 458 ms_handle_reset con 0x55c0895d0800 session 0x55c087b45e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182501376 unmapped: 30810112 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 458 ms_handle_reset con 0x55c085546400 session 0x55c085dda960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 458 handle_osd_map epochs [458,459], i have 458, src has [1,459]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 459 ms_handle_reset con 0x55c086f82400 session 0x55c0879d23c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182509568 unmapped: 30801920 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3040417 data_alloc: 234881024 data_used: 16732160
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 459 ms_handle_reset con 0x55c087282000 session 0x55c0876feb40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 459 ms_handle_reset con 0x55c087789000 session 0x55c0873f0f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182534144 unmapped: 30777344 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 459 handle_osd_map epochs [460,460], i have 459, src has [1,460]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182542336 unmapped: 30769152 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 460 ms_handle_reset con 0x55c087284400 session 0x55c085ed4f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182542336 unmapped: 30769152 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 460 handle_osd_map epochs [460,461], i have 460, src has [1,461]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182542336 unmapped: 30769152 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 461 heartbeat osd_stat(store_statfs(0x4f8007000/0x0/0x4ffc00000, data 0x18ac9a1/0x1b26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182542336 unmapped: 30769152 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3048433 data_alloc: 234881024 data_used: 16740352
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182542336 unmapped: 30769152 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 461 handle_osd_map epochs [461,462], i have 461, src has [1,462]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f8003000/0x0/0x4ffc00000, data 0x18ae59e/0x1b29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182542336 unmapped: 30769152 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182542336 unmapped: 30769152 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182542336 unmapped: 30769152 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.331980705s of 13.127640724s, submitted: 114
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 462 ms_handle_reset con 0x55c085546400 session 0x55c0876d8960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182616064 unmapped: 30695424 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 462 ms_handle_reset con 0x55c086f82400 session 0x55c087705860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3047455 data_alloc: 234881024 data_used: 16678912
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182616064 unmapped: 30695424 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f8002000/0x0/0x4ffc00000, data 0x18b0071/0x1b2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 462 handle_osd_map epochs [463,463], i have 462, src has [1,463]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 463 ms_handle_reset con 0x55c087282000 session 0x55c0873f1860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182616064 unmapped: 30695424 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182624256 unmapped: 30687232 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 463 heartbeat osd_stat(store_statfs(0x4f7ffd000/0x0/0x4ffc00000, data 0x18b1b1c/0x1b30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 463 heartbeat osd_stat(store_statfs(0x4f7ffd000/0x0/0x4ffc00000, data 0x18b1b1c/0x1b30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 463 handle_osd_map epochs [464,464], i have 463, src has [1,464]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 463 handle_osd_map epochs [464,464], i have 464, src has [1,464]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 464 ms_handle_reset con 0x55c087789000 session 0x55c087a8c5a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182624256 unmapped: 30687232 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 464 ms_handle_reset con 0x55c087eb9000 session 0x55c085f703c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 464 ms_handle_reset con 0x55c085546400 session 0x55c0879a8000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182624256 unmapped: 30687232 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3056415 data_alloc: 234881024 data_used: 16691200
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 464 handle_osd_map epochs [465,465], i have 464, src has [1,465]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 465 ms_handle_reset con 0x55c086f82400 session 0x55c0871423c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 465 ms_handle_reset con 0x55c087282000 session 0x55c0873bcf00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182624256 unmapped: 30687232 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182624256 unmapped: 30687232 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 465 ms_handle_reset con 0x55c087789000 session 0x55c08836d2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182632448 unmapped: 30679040 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182632448 unmapped: 30679040 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 465 handle_osd_map epochs [465,466], i have 465, src has [1,466]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 466 ms_handle_reset con 0x55c087eb9000 session 0x55c0876fe000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f7ff8000/0x0/0x4ffc00000, data 0x18b52bc/0x1b36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182632448 unmapped: 30679040 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3065400 data_alloc: 234881024 data_used: 16699392
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 466 ms_handle_reset con 0x55c085546400 session 0x55c0872523c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.049348831s of 11.164390564s, submitted: 46
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182640640 unmapped: 30670848 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f7ff3000/0x0/0x4ffc00000, data 0x18b6e55/0x1b39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 466 handle_osd_map epochs [466,467], i have 466, src has [1,467]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 467 ms_handle_reset con 0x55c086f82400 session 0x55c08544b2c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182648832 unmapped: 30662656 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 467 heartbeat osd_stat(store_statfs(0x4f7fef000/0x0/0x4ffc00000, data 0x18b8a36/0x1b3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 467 ms_handle_reset con 0x55c087282000 session 0x55c08543b4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 467 heartbeat osd_stat(store_statfs(0x4f7fef000/0x0/0x4ffc00000, data 0x18b8a36/0x1b3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 182648832 unmapped: 30662656 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 467 ms_handle_reset con 0x55c087789000 session 0x55c0879a8780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 183697408 unmapped: 29614080 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 183697408 unmapped: 29614080 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3068514 data_alloc: 234881024 data_used: 16699392
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 467 handle_osd_map epochs [468,468], i have 467, src has [1,468]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 468 ms_handle_reset con 0x55c08774f800 session 0x55c0879d2000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 184754176 unmapped: 28557312 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 468 handle_osd_map epochs [469,469], i have 468, src has [1,469]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 469 ms_handle_reset con 0x55c085546400 session 0x55c085bb10e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 184770560 unmapped: 28540928 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 469 handle_osd_map epochs [469,470], i have 469, src has [1,470]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 470 ms_handle_reset con 0x55c086f82400 session 0x55c08836cb40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 184778752 unmapped: 28532736 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 470 heartbeat osd_stat(store_statfs(0x4f7fe5000/0x0/0x4ffc00000, data 0x18bdbed/0x1b45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 470 ms_handle_reset con 0x55c087282000 session 0x55c0852f4000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 470 ms_handle_reset con 0x55c08774f800 session 0x55c087a8de00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 184795136 unmapped: 28516352 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 184795136 unmapped: 28516352 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3076045 data_alloc: 234881024 data_used: 16707584
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 470 ms_handle_reset con 0x55c087789000 session 0x55c0876ff4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 470 ms_handle_reset con 0x55c085546400 session 0x55c0871332c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 184795136 unmapped: 28516352 heap: 213311488 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.952826500s of 10.348698616s, submitted: 81
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 470 handle_osd_map epochs [470,471], i have 470, src has [1,471]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 471 ms_handle_reset con 0x55c086f82400 session 0x55c085bb0780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 184901632 unmapped: 45211648 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 471 ms_handle_reset con 0x55c087282000 session 0x55c087a8c780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 471 heartbeat osd_stat(store_statfs(0x4f444c000/0x0/0x4ffc00000, data 0x42b8720/0x4542000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 471 handle_osd_map epochs [471,472], i have 471, src has [1,472]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 472 ms_handle_reset con 0x55c08774f800 session 0x55c0873bf4a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 184909824 unmapped: 45203456 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 472 ms_handle_reset con 0x55c08a3da800 session 0x55c0852f54a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 184909824 unmapped: 45203456 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 472 handle_osd_map epochs [473,473], i have 472, src has [1,473]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 473 ms_handle_reset con 0x55c085546400 session 0x55c086ee9c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 184926208 unmapped: 45187072 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3384633 data_alloc: 234881024 data_used: 16728064
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 473 ms_handle_reset con 0x55c086f82400 session 0x55c0876d9860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 184926208 unmapped: 45187072 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f4447000/0x0/0x4ffc00000, data 0x42bbe38/0x4547000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 184926208 unmapped: 45187072 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 473 handle_osd_map epochs [474,474], i have 473, src has [1,474]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 184926208 unmapped: 45187072 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 474 heartbeat osd_stat(store_statfs(0x4f4443000/0x0/0x4ffc00000, data 0x42bd8b7/0x454a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 474 handle_osd_map epochs [474,475], i have 474, src has [1,475]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 184926208 unmapped: 45187072 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 184926208 unmapped: 45187072 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3391741 data_alloc: 234881024 data_used: 16736256
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 475 ms_handle_reset con 0x55c087282000 session 0x55c085ed4780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 475 heartbeat osd_stat(store_statfs(0x4f443f000/0x0/0x4ffc00000, data 0x42bf450/0x454d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 475 handle_osd_map epochs [475,476], i have 475, src has [1,476]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 184934400 unmapped: 45178880 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 476 handle_osd_map epochs [476,477], i have 476, src has [1,477]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.920105934s of 10.561671257s, submitted: 104
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 185032704 unmapped: 45080576 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 185032704 unmapped: 45080576 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 185032704 unmapped: 45080576 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 477 ms_handle_reset con 0x55c08774f800 session 0x55c0871332c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 185032704 unmapped: 45080576 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397337 data_alloc: 234881024 data_used: 16744448
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 477 handle_osd_map epochs [478,478], i have 477, src has [1,478]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 478 ms_handle_reset con 0x55c08a5af800 session 0x55c0873c3860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 185032704 unmapped: 45080576 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 478 heartbeat osd_stat(store_statfs(0x4f4436000/0x0/0x4ffc00000, data 0x42c46ab/0x4557000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 478 handle_osd_map epochs [478,479], i have 478, src has [1,479]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 185040896 unmapped: 45072384 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 479 ms_handle_reset con 0x55c085546400 session 0x55c08836c5a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 479 ms_handle_reset con 0x55c086f82400 session 0x55c0873bfa40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 185040896 unmapped: 45072384 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 479 handle_osd_map epochs [480,480], i have 479, src has [1,480]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 480 ms_handle_reset con 0x55c08774f800 session 0x55c086ee8960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 185442304 unmapped: 44670976 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 186712064 unmapped: 43401216 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3459699 data_alloc: 234881024 data_used: 20619264
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 480 ms_handle_reset con 0x55c0884afc00 session 0x55c0876d85a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 186712064 unmapped: 43401216 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 480 heartbeat osd_stat(store_statfs(0x4f4431000/0x0/0x4ffc00000, data 0x42c7c8d/0x455d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 480 handle_osd_map epochs [481,481], i have 480, src has [1,481]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.966030121s of 10.056623459s, submitted: 51
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 481 ms_handle_reset con 0x55c087eb7c00 session 0x55c085f70780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 186736640 unmapped: 43376640 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 186736640 unmapped: 43376640 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 481 ms_handle_reset con 0x55c085546400 session 0x55c0879d3a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 481 handle_osd_map epochs [482,482], i have 481, src has [1,482]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 482 ms_handle_reset con 0x55c086f82400 session 0x55c08506c5a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 186736640 unmapped: 43376640 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 482 heartbeat osd_stat(store_statfs(0x4f442a000/0x0/0x4ffc00000, data 0x42cb407/0x4563000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 482 ms_handle_reset con 0x55c08774f800 session 0x55c085ed5a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 482 ms_handle_reset con 0x55c087eb7c00 session 0x55c087132780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 186744832 unmapped: 43368448 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468444 data_alloc: 234881024 data_used: 20643840
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 482 heartbeat osd_stat(store_statfs(0x4f442b000/0x0/0x4ffc00000, data 0x42cb3f7/0x4562000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 482 ms_handle_reset con 0x55c0884afc00 session 0x55c087b44d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 186744832 unmapped: 43368448 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 482 handle_osd_map epochs [482,483], i have 482, src has [1,483]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 186744832 unmapped: 43368448 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 483 handle_osd_map epochs [484,484], i have 483, src has [1,484]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 186753024 unmapped: 43360256 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 484 ms_handle_reset con 0x55c085546400 session 0x55c0854b41e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 186753024 unmapped: 43360256 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 484 handle_osd_map epochs [485,485], i have 484, src has [1,485]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 197378048 unmapped: 32735232 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502414 data_alloc: 234881024 data_used: 20553728
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f2fd2000/0x0/0x4ffc00000, data 0x42d05f0/0x456b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 197378048 unmapped: 32735232 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.096795082s of 10.158328056s, submitted: 142
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 485 handle_osd_map epochs [485,486], i have 485, src has [1,486]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192880640 unmapped: 37232640 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 486 heartbeat osd_stat(store_statfs(0x4f29fb000/0x0/0x4ffc00000, data 0x4b5606f/0x4df2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 486 ms_handle_reset con 0x55c086f82400 session 0x55c0871290e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192651264 unmapped: 37462016 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 486 handle_osd_map epochs [487,487], i have 486, src has [1,487]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 487 ms_handle_reset con 0x55c08774f800 session 0x55c087128780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192651264 unmapped: 37462016 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192667648 unmapped: 37445632 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595191 data_alloc: 234881024 data_used: 20574208
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 487 ms_handle_reset con 0x55c087eb7c00 session 0x55c087129680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 487 handle_osd_map epochs [487,488], i have 487, src has [1,488]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 488 ms_handle_reset con 0x55c087ebf400 session 0x55c0854b4000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 488 heartbeat osd_stat(store_statfs(0x4f24b1000/0x0/0x4ffc00000, data 0x509c84b/0x533c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192684032 unmapped: 37429248 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 488 ms_handle_reset con 0x55c085546400 session 0x55c0854b50e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192692224 unmapped: 37421056 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192692224 unmapped: 37421056 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 488 ms_handle_reset con 0x55c087ebbc00 session 0x55c087a8de00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 488 ms_handle_reset con 0x55c087282000 session 0x55c087132b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192692224 unmapped: 37421056 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 488 ms_handle_reset con 0x55c086f82400 session 0x55c0852f4000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192692224 unmapped: 37421056 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3594245 data_alloc: 234881024 data_used: 20602880
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 488 heartbeat osd_stat(store_statfs(0x4f20a3000/0x0/0x4ffc00000, data 0x509c7e9/0x533b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192692224 unmapped: 37421056 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192692224 unmapped: 37421056 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192692224 unmapped: 37421056 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 488 handle_osd_map epochs [489,489], i have 488, src has [1,489]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.586940765s of 11.837867737s, submitted: 79
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 489 handle_osd_map epochs [490,490], i have 489, src has [1,490]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 490 ms_handle_reset con 0x55c08774f800 session 0x55c085bb10e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192708608 unmapped: 37404672 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 490 ms_handle_reset con 0x55c085546400 session 0x55c0872523c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192716800 unmapped: 37396480 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3605054 data_alloc: 234881024 data_used: 20606976
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f209a000/0x0/0x4ffc00000, data 0x509fe11/0x5342000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192716800 unmapped: 37396480 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 490 handle_osd_map epochs [491,491], i have 490, src has [1,491]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192716800 unmapped: 37396480 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 491 ms_handle_reset con 0x55c086f82400 session 0x55c0871423c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192716800 unmapped: 37396480 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 491 heartbeat osd_stat(store_statfs(0x4f2096000/0x0/0x4ffc00000, data 0x50a19aa/0x5345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 491 handle_osd_map epochs [492,492], i have 491, src has [1,492]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 491 handle_osd_map epochs [492,492], i have 492, src has [1,492]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192749568 unmapped: 37363712 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 492 heartbeat osd_stat(store_statfs(0x4f2094000/0x0/0x4ffc00000, data 0x50a358b/0x5348000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192749568 unmapped: 37363712 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3609098 data_alloc: 234881024 data_used: 20606976
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 492 heartbeat osd_stat(store_statfs(0x4f2096000/0x0/0x4ffc00000, data 0x50a358b/0x5348000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 492 ms_handle_reset con 0x55c087282000 session 0x55c085f703c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192749568 unmapped: 37363712 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 492 heartbeat osd_stat(store_statfs(0x4f2096000/0x0/0x4ffc00000, data 0x50a358b/0x5348000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 492 handle_osd_map epochs [492,493], i have 492, src has [1,493]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192757760 unmapped: 37355520 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 493 handle_osd_map epochs [493,494], i have 493, src has [1,494]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192765952 unmapped: 37347328 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 494 ms_handle_reset con 0x55c087ebbc00 session 0x55c087a8c5a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192765952 unmapped: 37347328 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 494 handle_osd_map epochs [495,495], i have 494, src has [1,495]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.927683830s of 11.083066940s, submitted: 49
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 495 ms_handle_reset con 0x55c087eb7c00 session 0x55c087705860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192765952 unmapped: 37347328 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3620052 data_alloc: 234881024 data_used: 20619264
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 495 ms_handle_reset con 0x55c085546400 session 0x55c0876d8960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192765952 unmapped: 37347328 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 495 handle_osd_map epochs [495,496], i have 495, src has [1,496]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 496 ms_handle_reset con 0x55c086f82400 session 0x55c085ed4f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f208b000/0x0/0x4ffc00000, data 0x50a8784/0x5351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 496 ms_handle_reset con 0x55c087282000 session 0x55c0876feb40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 496 ms_handle_reset con 0x55c087ebbc00 session 0x55c0854534a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192782336 unmapped: 37330944 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192782336 unmapped: 37330944 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 496 handle_osd_map epochs [497,497], i have 496, src has [1,497]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 497 ms_handle_reset con 0x55c087afb000 session 0x55c0854b4780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192798720 unmapped: 37314560 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 497 ms_handle_reset con 0x55c085546400 session 0x55c085f13c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 497 handle_osd_map epochs [498,498], i have 497, src has [1,498]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 498 ms_handle_reset con 0x55c086f82400 session 0x55c085f13860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192798720 unmapped: 37314560 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3633029 data_alloc: 234881024 data_used: 20635648
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192806912 unmapped: 37306368 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 498 ms_handle_reset con 0x55c087282000 session 0x55c08544ba40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 498 handle_osd_map epochs [498,499], i have 498, src has [1,499]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192806912 unmapped: 37306368 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 499 ms_handle_reset con 0x55c087ebbc00 session 0x55c08544af00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 499 heartbeat osd_stat(store_statfs(0x4f207f000/0x0/0x4ffc00000, data 0x50af40c/0x535e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192815104 unmapped: 37298176 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 499 handle_osd_map epochs [499,500], i have 499, src has [1,500]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 500 ms_handle_reset con 0x55c08701dc00 session 0x55c084fca3c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192823296 unmapped: 37289984 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 500 ms_handle_reset con 0x55c085546400 session 0x55c0873c3a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192823296 unmapped: 37289984 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.303433418s of 10.521296501s, submitted: 103
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3641962 data_alloc: 234881024 data_used: 20717568
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192823296 unmapped: 37289984 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 500 handle_osd_map epochs [501,501], i have 500, src has [1,501]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 501 ms_handle_reset con 0x55c086f82400 session 0x55c0873c25a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192823296 unmapped: 37289984 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 501 ms_handle_reset con 0x55c087282000 session 0x55c0891192c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 501 ms_handle_reset con 0x55c087ebbc00 session 0x55c089119a40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 501 heartbeat osd_stat(store_statfs(0x4f207a000/0x0/0x4ffc00000, data 0x50b2b86/0x5364000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192831488 unmapped: 37281792 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 501 ms_handle_reset con 0x55c08701bc00 session 0x55c085ed5e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192831488 unmapped: 37281792 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192831488 unmapped: 37281792 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643676 data_alloc: 234881024 data_used: 20725760
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 501 ms_handle_reset con 0x55c085546400 session 0x55c0879d2960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192831488 unmapped: 37281792 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 501 handle_osd_map epochs [502,502], i have 501, src has [1,502]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 502 heartbeat osd_stat(store_statfs(0x4f2079000/0x0/0x4ffc00000, data 0x50b2b96/0x5365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 502 ms_handle_reset con 0x55c086f82400 session 0x55c0879d2b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 502 ms_handle_reset con 0x55c08701bc00 session 0x55c0876dd680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 192864256 unmapped: 37249024 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 502 handle_osd_map epochs [503,503], i have 502, src has [1,503]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 503 ms_handle_reset con 0x55c087282000 session 0x55c0876d90e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 503 ms_handle_reset con 0x55c087ebbc00 session 0x55c085ddbc20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195313664 unmapped: 34799616 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195313664 unmapped: 34799616 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f1c72000/0x0/0x4ffc00000, data 0x54b6348/0x576a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195313664 unmapped: 34799616 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3701220 data_alloc: 234881024 data_used: 24641536
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f1c72000/0x0/0x4ffc00000, data 0x54b6348/0x576a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195313664 unmapped: 34799616 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f1c72000/0x0/0x4ffc00000, data 0x54b6348/0x576a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195313664 unmapped: 34799616 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 503 handle_osd_map epochs [504,504], i have 503, src has [1,504]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.016871452s of 12.356490135s, submitted: 47
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 34455552 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 34455552 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 34455552 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733506 data_alloc: 234881024 data_used: 24641536
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 34455552 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 504 heartbeat osd_stat(store_statfs(0x4f1870000/0x0/0x4ffc00000, data 0x58b7dc7/0x5b6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 504 handle_osd_map epochs [504,505], i have 504, src has [1,505]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195706880 unmapped: 34406400 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195747840 unmapped: 34365440 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195747840 unmapped: 34365440 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195747840 unmapped: 34365440 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734160 data_alloc: 234881024 data_used: 24649728
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 505 heartbeat osd_stat(store_statfs(0x4f186e000/0x0/0x4ffc00000, data 0x58b982a/0x5b70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195747840 unmapped: 34365440 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 505 heartbeat osd_stat(store_statfs(0x4f186e000/0x0/0x4ffc00000, data 0x58b982a/0x5b70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195747840 unmapped: 34365440 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.906326294s of 10.008868217s, submitted: 44
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195747840 unmapped: 34365440 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195747840 unmapped: 34365440 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195747840 unmapped: 34365440 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3737040 data_alloc: 234881024 data_used: 25718784
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195747840 unmapped: 34365440 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 505 heartbeat osd_stat(store_statfs(0x4f186e000/0x0/0x4ffc00000, data 0x58b982a/0x5b70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195747840 unmapped: 34365440 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195747840 unmapped: 34365440 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195747840 unmapped: 34365440 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195747840 unmapped: 34365440 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3737040 data_alloc: 234881024 data_used: 25718784
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 505 heartbeat osd_stat(store_statfs(0x4f186e000/0x0/0x4ffc00000, data 0x58b982a/0x5b70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195747840 unmapped: 34365440 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 505 ms_handle_reset con 0x55c085ce3800 session 0x55c085dda960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 505 ms_handle_reset con 0x55c0855cb400 session 0x55c0854523c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 505 ms_handle_reset con 0x55c085546400 session 0x55c087b45e00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194543616 unmapped: 35569664 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194543616 unmapped: 35569664 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.907627106s of 10.979310036s, submitted: 6
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 505 ms_handle_reset con 0x55c086f82400 session 0x55c087704d20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 505 heartbeat osd_stat(store_statfs(0x4f186f000/0x0/0x4ffc00000, data 0x58b981a/0x5b6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194543616 unmapped: 35569664 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 505 ms_handle_reset con 0x55c08701bc00 session 0x55c088381860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 505 ms_handle_reset con 0x55c085546400 session 0x55c08836da40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194543616 unmapped: 35569664 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3736799 data_alloc: 234881024 data_used: 25718784
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 505 ms_handle_reset con 0x55c0855cb400 session 0x55c0877054a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194551808 unmapped: 35561472 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 505 ms_handle_reset con 0x55c085ce3800 session 0x55c0873f1c20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 505 ms_handle_reset con 0x55c086f82400 session 0x55c086ee8780
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 505 heartbeat osd_stat(store_statfs(0x4f186f000/0x0/0x4ffc00000, data 0x58b981a/0x5b6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194551808 unmapped: 35561472 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194551808 unmapped: 35561472 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 505 ms_handle_reset con 0x55c08701bc00 session 0x55c087b45680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194560000 unmapped: 35553280 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194560000 unmapped: 35553280 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3682207 data_alloc: 234881024 data_used: 25714688
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194560000 unmapped: 35553280 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194560000 unmapped: 35553280 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 505 heartbeat osd_stat(store_statfs(0x4f206f000/0x0/0x4ffc00000, data 0x50b981a/0x536f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194560000 unmapped: 35553280 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194560000 unmapped: 35553280 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 505 heartbeat osd_stat(store_statfs(0x4f206f000/0x0/0x4ffc00000, data 0x50b981a/0x536f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194560000 unmapped: 35553280 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3682207 data_alloc: 234881024 data_used: 25714688
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194560000 unmapped: 35553280 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 505 ms_handle_reset con 0x55c085459c00 session 0x55c0862305a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194560000 unmapped: 35553280 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194560000 unmapped: 35553280 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 505 heartbeat osd_stat(store_statfs(0x4f206f000/0x0/0x4ffc00000, data 0x50b981a/0x536f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194560000 unmapped: 35553280 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194560000 unmapped: 35553280 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3682207 data_alloc: 234881024 data_used: 25714688
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194560000 unmapped: 35553280 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194560000 unmapped: 35553280 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194560000 unmapped: 35553280 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 505 heartbeat osd_stat(store_statfs(0x4f206f000/0x0/0x4ffc00000, data 0x50b981a/0x536f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194560000 unmapped: 35553280 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194560000 unmapped: 35553280 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3682207 data_alloc: 234881024 data_used: 25714688
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194560000 unmapped: 35553280 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 505 heartbeat osd_stat(store_statfs(0x4f206f000/0x0/0x4ffc00000, data 0x50b981a/0x536f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194560000 unmapped: 35553280 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194560000 unmapped: 35553280 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194560000 unmapped: 35553280 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194560000 unmapped: 35553280 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3682207 data_alloc: 234881024 data_used: 25714688
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194560000 unmapped: 35553280 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 505 heartbeat osd_stat(store_statfs(0x4f206f000/0x0/0x4ffc00000, data 0x50b981a/0x536f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194568192 unmapped: 35545088 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.782690048s of 29.855028152s, submitted: 24
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194609152 unmapped: 35504128 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 505 ms_handle_reset con 0x55c0855cb400 session 0x55c0862cf0e0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194609152 unmapped: 35504128 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 505 handle_osd_map epochs [506,506], i have 505, src has [1,506]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194641920 unmapped: 35471360 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3689577 data_alloc: 234881024 data_used: 25722880
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 506 ms_handle_reset con 0x55c085ce3800 session 0x55c0850972c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 506 ms_handle_reset con 0x55c087282000 session 0x55c085dda000
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 506 ms_handle_reset con 0x55c086f82400 session 0x55c0852f52c0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194641920 unmapped: 35471360 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 506 heartbeat osd_stat(store_statfs(0x4f2068000/0x0/0x4ffc00000, data 0x50bb419/0x5375000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194641920 unmapped: 35471360 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 506 ms_handle_reset con 0x55c08a3db000 session 0x55c087b45680
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 506 ms_handle_reset con 0x55c0855cb400 session 0x55c088381860
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194658304 unmapped: 35454976 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 506 ms_handle_reset con 0x55c085ce3800 session 0x55c085ddbc20
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 506 ms_handle_reset con 0x55c087282000 session 0x55c08543be00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194691072 unmapped: 35422208 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 506 handle_osd_map epochs [507,507], i have 506, src has [1,507]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 507 ms_handle_reset con 0x55c08701fc00 session 0x55c0877054a0
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 507 ms_handle_reset con 0x55c086f82400 session 0x55c0879d2b40
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 507 ms_handle_reset con 0x55c0855cb400 session 0x55c085ed4f00
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 507 ms_handle_reset con 0x55c085ce3800 session 0x55c0876d8960
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194732032 unmapped: 35381248 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693177 data_alloc: 234881024 data_used: 25731072
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194732032 unmapped: 35381248 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194732032 unmapped: 35381248 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 507 heartbeat osd_stat(store_statfs(0x4f2068000/0x0/0x4ffc00000, data 0x50bcf68/0x5375000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194732032 unmapped: 35381248 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 507 heartbeat osd_stat(store_statfs(0x4f2068000/0x0/0x4ffc00000, data 0x50bcf68/0x5375000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 507 heartbeat osd_stat(store_statfs(0x4f2068000/0x0/0x4ffc00000, data 0x50bcf68/0x5375000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194732032 unmapped: 35381248 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194732032 unmapped: 35381248 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693177 data_alloc: 234881024 data_used: 25731072
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194732032 unmapped: 35381248 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 507 handle_osd_map epochs [507,508], i have 507, src has [1,508]
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.336934090s of 13.589159012s, submitted: 71
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f2068000/0x0/0x4ffc00000, data 0x50bcf68/0x5375000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194732032 unmapped: 35381248 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194732032 unmapped: 35381248 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194732032 unmapped: 35381248 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194732032 unmapped: 35381248 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696471 data_alloc: 234881024 data_used: 25739264
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f2065000/0x0/0x4ffc00000, data 0x50be9cb/0x5378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194732032 unmapped: 35381248 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194732032 unmapped: 35381248 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194732032 unmapped: 35381248 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194732032 unmapped: 35381248 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194732032 unmapped: 35381248 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696471 data_alloc: 234881024 data_used: 25739264
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194732032 unmapped: 35381248 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f2065000/0x0/0x4ffc00000, data 0x50be9cb/0x5378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194740224 unmapped: 35373056 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194740224 unmapped: 35373056 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194740224 unmapped: 35373056 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194740224 unmapped: 35373056 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696471 data_alloc: 234881024 data_used: 25739264
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194740224 unmapped: 35373056 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194740224 unmapped: 35373056 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f2065000/0x0/0x4ffc00000, data 0x50be9cb/0x5378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194748416 unmapped: 35364864 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f2065000/0x0/0x4ffc00000, data 0x50be9cb/0x5378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194748416 unmapped: 35364864 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194748416 unmapped: 35364864 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696471 data_alloc: 234881024 data_used: 25739264
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f2065000/0x0/0x4ffc00000, data 0x50be9cb/0x5378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194748416 unmapped: 35364864 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194748416 unmapped: 35364864 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f2065000/0x0/0x4ffc00000, data 0x50be9cb/0x5378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194748416 unmapped: 35364864 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194748416 unmapped: 35364864 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194748416 unmapped: 35364864 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696471 data_alloc: 234881024 data_used: 25739264
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194748416 unmapped: 35364864 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f2065000/0x0/0x4ffc00000, data 0x50be9cb/0x5378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194748416 unmapped: 35364864 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194748416 unmapped: 35364864 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194748416 unmapped: 35364864 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194748416 unmapped: 35364864 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696471 data_alloc: 234881024 data_used: 25739264
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f2065000/0x0/0x4ffc00000, data 0x50be9cb/0x5378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194748416 unmapped: 35364864 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194748416 unmapped: 35364864 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194748416 unmapped: 35364864 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194756608 unmapped: 35356672 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f2065000/0x0/0x4ffc00000, data 0x50be9cb/0x5378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194772992 unmapped: 35340288 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696471 data_alloc: 234881024 data_used: 25739264
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194772992 unmapped: 35340288 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194772992 unmapped: 35340288 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194772992 unmapped: 35340288 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f2065000/0x0/0x4ffc00000, data 0x50be9cb/0x5378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194772992 unmapped: 35340288 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194772992 unmapped: 35340288 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696471 data_alloc: 234881024 data_used: 25739264
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194772992 unmapped: 35340288 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194781184 unmapped: 35332096 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f2065000/0x0/0x4ffc00000, data 0x50be9cb/0x5378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194781184 unmapped: 35332096 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194781184 unmapped: 35332096 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f2065000/0x0/0x4ffc00000, data 0x50be9cb/0x5378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194781184 unmapped: 35332096 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696471 data_alloc: 234881024 data_used: 25739264
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194781184 unmapped: 35332096 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194781184 unmapped: 35332096 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194781184 unmapped: 35332096 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f2065000/0x0/0x4ffc00000, data 0x50be9cb/0x5378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194781184 unmapped: 35332096 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194797568 unmapped: 35315712 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696471 data_alloc: 234881024 data_used: 25739264
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194797568 unmapped: 35315712 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194797568 unmapped: 35315712 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f2065000/0x0/0x4ffc00000, data 0x50be9cb/0x5378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194797568 unmapped: 35315712 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194797568 unmapped: 35315712 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194797568 unmapped: 35315712 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696471 data_alloc: 234881024 data_used: 25739264
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f2065000/0x0/0x4ffc00000, data 0x50be9cb/0x5378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194797568 unmapped: 35315712 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194797568 unmapped: 35315712 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194805760 unmapped: 35307520 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194805760 unmapped: 35307520 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194805760 unmapped: 35307520 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696471 data_alloc: 234881024 data_used: 25739264
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f2065000/0x0/0x4ffc00000, data 0x50be9cb/0x5378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194805760 unmapped: 35307520 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194805760 unmapped: 35307520 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194805760 unmapped: 35307520 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f2065000/0x0/0x4ffc00000, data 0x50be9cb/0x5378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194805760 unmapped: 35307520 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194805760 unmapped: 35307520 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3696471 data_alloc: 234881024 data_used: 25739264
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194813952 unmapped: 35299328 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194813952 unmapped: 35299328 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f2065000/0x0/0x4ffc00000, data 0x50be9cb/0x5378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194813952 unmapped: 35299328 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194813952 unmapped: 35299328 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f2065000/0x0/0x4ffc00000, data 0x50be9cb/0x5378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194813952 unmapped: 35299328 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3696471 data_alloc: 234881024 data_used: 25739264
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194813952 unmapped: 35299328 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194813952 unmapped: 35299328 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194813952 unmapped: 35299328 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f2065000/0x0/0x4ffc00000, data 0x50be9cb/0x5378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194838528 unmapped: 35274752 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f2065000/0x0/0x4ffc00000, data 0x50be9cb/0x5378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194838528 unmapped: 35274752 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: bluestore.MempoolThread(0x55c083c07b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3696471 data_alloc: 234881024 data_used: 25739264
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f2065000/0x0/0x4ffc00000, data 0x50be9cb/0x5378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 194838528 unmapped: 35274752 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f2065000/0x0/0x4ffc00000, data 0x50be9cb/0x5378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,1] op hist [])
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: do_command 'config diff' '{prefix=config diff}'
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: do_command 'config show' '{prefix=config show}'
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195018752 unmapped: 35094528 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: do_command 'counter dump' '{prefix=counter dump}'
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: do_command 'counter schema' '{prefix=counter schema}'
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195059712 unmapped: 35053568 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: prioritycache tune_memory target: 4294967296 mapped: 195076096 unmapped: 35037184 heap: 230113280 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:42 np0005542249 ceph-osd[91055]: do_command 'log dump' '{prefix=log dump}'
Dec  2 06:38:43 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.19238 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  2 06:38:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec  2 06:38:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/566950234' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  2 06:38:43 np0005542249 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 06:38:43 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.19241 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  2 06:38:43 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  2 06:38:43 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1411559180' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  2 06:38:43 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.19245 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  2 06:38:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec  2 06:38:44 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1897952261' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  2 06:38:44 np0005542249 nova_compute[254900]: 2025-12-02 11:38:44.082 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:38:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader).osd e508 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  2 06:38:44 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.19249 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  2 06:38:44 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Dec  2 06:38:44 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3857222902' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec  2 06:38:44 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1977: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:38:45 np0005542249 ceph-mgr[75372]: log_channel(audit) log [DBG] : from='client.19257 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  2 06:38:45 np0005542249 ceph-mgr[75372]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  2 06:38:45 np0005542249 ceph-95bc4eaa-1a14-59bf-acf2-4b3da055547d-mgr-compute-0-ntxcvs[75368]: 2025-12-02T11:38:45.245+0000 7fc2b9048640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  2 06:38:45 np0005542249 nova_compute[254900]: 2025-12-02 11:38:45.245 254904 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  2 06:38:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Dec  2 06:38:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/9413553' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec  2 06:38:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Dec  2 06:38:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2503297750' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec  2 06:38:45 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Dec  2 06:38:45 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4195522661' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec  2 06:38:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Dec  2 06:38:46 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2497260731' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec  2 06:38:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Dec  2 06:38:46 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1362377552' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec  2 06:38:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Dec  2 06:38:46 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3899083179' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec  2 06:38:46 np0005542249 ceph-mgr[75372]: log_channel(cluster) log [DBG] : pgmap v1978: 321 pgs: 321 active+clean; 271 MiB data, 693 MiB used, 59 GiB / 60 GiB avail
Dec  2 06:38:46 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Dec  2 06:38:46 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2055352405' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec  2 06:38:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Dec  2 06:38:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/146287273' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec  2 06:38:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Dec  2 06:38:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3598701820' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 90423296 unmapped: 22028288 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 161 ms_handle_reset con 0x556dcec61800 session 0x556dcae10f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 162 ms_handle_reset con 0x556dcec61400 session 0x556dcd767860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 90456064 unmapped: 21995520 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.086647034s of 10.593711853s, submitted: 161
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 162 ms_handle_reset con 0x556dcec61800 session 0x556dcad621e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1036183 data_alloc: 218103808 data_used: 409600
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 162 ms_handle_reset con 0x556dca26dc00 session 0x556dcae0c780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 162 ms_handle_reset con 0x556dcc70a400 session 0x556dcaccab40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 89964544 unmapped: 22487040 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 162 ms_handle_reset con 0x556dcd334c00 session 0x556dccf4bc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 162 heartbeat osd_stat(store_statfs(0x4fb416000/0x0/0x4ffc00000, data 0x164ba9/0x258000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 163 ms_handle_reset con 0x556dcae66000 session 0x556dcd3661e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 89964544 unmapped: 22487040 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 163 ms_handle_reset con 0x556dcd334c00 session 0x556dcd26c000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 163 ms_handle_reset con 0x556dca26dc00 session 0x556dcabe3680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 89980928 unmapped: 22470656 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 163 ms_handle_reset con 0x556dcc70a400 session 0x556dcd70da40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 163 ms_handle_reset con 0x556dcec61400 session 0x556dcd70cd20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 163 ms_handle_reset con 0x556dcec61800 session 0x556dccf4b4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 163 ms_handle_reset con 0x556dca26dc00 session 0x556dcd1a23c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 90013696 unmapped: 22437888 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 164 ms_handle_reset con 0x556dcae66000 session 0x556dcd249680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 21364736 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 164 ms_handle_reset con 0x556dcc70a400 session 0x556dcc10dc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 164 ms_handle_reset con 0x556dcd334c00 session 0x556dcd26de00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 164 ms_handle_reset con 0x556dca26dc00 session 0x556dcd7665a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 164 handle_osd_map epochs [164,165], i have 164, src has [1,165]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046178 data_alloc: 218103808 data_used: 425984
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91103232 unmapped: 21348352 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91103232 unmapped: 21348352 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 165 heartbeat osd_stat(store_statfs(0x4fb40e000/0x0/0x4ffc00000, data 0x169dc8/0x25e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91103232 unmapped: 21348352 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 165 ms_handle_reset con 0x556dcae66000 session 0x556dcba32b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 165 ms_handle_reset con 0x556dcc70a400 session 0x556dccb7be00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 21323776 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 165 ms_handle_reset con 0x556dcec61800 session 0x556dcc8b5c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91144192 unmapped: 21307392 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1045999 data_alloc: 218103808 data_used: 430080
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.820090294s of 10.186924934s, submitted: 134
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 166 heartbeat osd_stat(store_statfs(0x4fb40f000/0x0/0x4ffc00000, data 0x169e2a/0x25f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91144192 unmapped: 21307392 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 166 ms_handle_reset con 0x556dcec61c00 session 0x556dcabe2960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 166 ms_handle_reset con 0x556dca26dc00 session 0x556dcae114a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91152384 unmapped: 21299200 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91152384 unmapped: 21299200 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 166 heartbeat osd_stat(store_statfs(0x4fb40b000/0x0/0x4ffc00000, data 0x16b847/0x261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91152384 unmapped: 21299200 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91152384 unmapped: 21299200 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 166 heartbeat osd_stat(store_statfs(0x4fb40b000/0x0/0x4ffc00000, data 0x16b847/0x261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051365 data_alloc: 218103808 data_used: 438272
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91144192 unmapped: 21307392 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91144192 unmapped: 21307392 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91144192 unmapped: 21307392 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91144192 unmapped: 21307392 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91144192 unmapped: 21307392 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 167 heartbeat osd_stat(store_statfs(0x4fb409000/0x0/0x4ffc00000, data 0x16d2aa/0x264000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051365 data_alloc: 218103808 data_used: 438272
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91144192 unmapped: 21307392 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.940124512s of 10.984826088s, submitted: 74
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 167 ms_handle_reset con 0x556dcae66000 session 0x556dcd35ef00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 167 ms_handle_reset con 0x556dcc70a400 session 0x556dcd26d680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91152384 unmapped: 21299200 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91152384 unmapped: 21299200 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 167 heartbeat osd_stat(store_statfs(0x4fb409000/0x0/0x4ffc00000, data 0x16d30c/0x265000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 167 ms_handle_reset con 0x556dcec61800 session 0x556dcad67e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 21291008 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 167 heartbeat osd_stat(store_statfs(0x4fb408000/0x0/0x4ffc00000, data 0x16d31c/0x266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 21291008 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 167 ms_handle_reset con 0x556dcd37c800 session 0x556dcd767c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058165 data_alloc: 218103808 data_used: 438272
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 21291008 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 167 ms_handle_reset con 0x556dcc70a400 session 0x556dcc6ea960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 21291008 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 167 heartbeat osd_stat(store_statfs(0x4fb408000/0x0/0x4ffc00000, data 0x16d31c/0x266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 167 handle_osd_map epochs [167,168], i have 167, src has [1,168]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 168 heartbeat osd_stat(store_statfs(0x4fb404000/0x0/0x4ffc00000, data 0x16ee99/0x269000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 168 ms_handle_reset con 0x556dcec61800 session 0x556dcc6eba40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 21282816 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 168 ms_handle_reset con 0x556dcec60c00 session 0x556dcc6ebc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 169 ms_handle_reset con 0x556dcae66000 session 0x556dcc122b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 169 ms_handle_reset con 0x556dccbec800 session 0x556dcc6ea3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 169 ms_handle_reset con 0x556dcb58c400 session 0x556dcc638b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 169 ms_handle_reset con 0x556dcae66000 session 0x556dccf4b0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 21282816 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 170 ms_handle_reset con 0x556dccbec800 session 0x556dcc6ea5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 170 ms_handle_reset con 0x556dcec60c00 session 0x556dccf4c1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 170 ms_handle_reset con 0x556dcec61800 session 0x556dccf4c960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 170 ms_handle_reset con 0x556dccb29800 session 0x556dcad69680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 170 ms_handle_reset con 0x556dcae66000 session 0x556dcd767c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91226112 unmapped: 21225472 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 170 handle_osd_map epochs [170,171], i have 170, src has [1,171]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 170 handle_osd_map epochs [171,171], i have 171, src has [1,171]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 171 ms_handle_reset con 0x556dcc70a400 session 0x556dcd35e1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 171 ms_handle_reset con 0x556dca26dc00 session 0x556dcae0c000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 171 heartbeat osd_stat(store_statfs(0x4fb3fb000/0x0/0x4ffc00000, data 0x17419c/0x272000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074137 data_alloc: 218103808 data_used: 446464
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91242496 unmapped: 21209088 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 171 ms_handle_reset con 0x556dcb58c400 session 0x556dcd3fcd20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 171 ms_handle_reset con 0x556dccbec800 session 0x556dcc10c780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 171 ms_handle_reset con 0x556dca26dc00 session 0x556dcd3f1a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91258880 unmapped: 21192704 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.073549271s of 10.435540199s, submitted: 111
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 171 ms_handle_reset con 0x556dcae66000 session 0x556dcd1a3860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91283456 unmapped: 21168128 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 171 ms_handle_reset con 0x556dcb58c400 session 0x556dcd70dc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 171 ms_handle_reset con 0x556dcc70a400 session 0x556dcd6f03c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 171 ms_handle_reset con 0x556dcec60c00 session 0x556dcd21f860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91357184 unmapped: 21094400 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 171 ms_handle_reset con 0x556dca26dc00 session 0x556dcc6392c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 171 handle_osd_map epochs [172,172], i have 171, src has [1,172]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 172 ms_handle_reset con 0x556dcae66000 session 0x556dcd21e000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 172 ms_handle_reset con 0x556dcb58c400 session 0x556dcb50cd20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 172 ms_handle_reset con 0x556dccb29800 session 0x556dcd1a23c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91381760 unmapped: 21069824 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078266 data_alloc: 218103808 data_used: 466944
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 172 handle_osd_map epochs [173,173], i have 172, src has [1,173]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 173 ms_handle_reset con 0x556dcae66000 session 0x556dcae0c000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91389952 unmapped: 21061632 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 173 heartbeat osd_stat(store_statfs(0x4fb3f4000/0x0/0x4ffc00000, data 0x177798/0x278000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 173 handle_osd_map epochs [173,174], i have 173, src has [1,174]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 174 ms_handle_reset con 0x556dca26dc00 session 0x556dcba323c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 174 ms_handle_reset con 0x556dcb58c400 session 0x556dcae112c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91406336 unmapped: 21045248 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 174 heartbeat osd_stat(store_statfs(0x4fb3f3000/0x0/0x4ffc00000, data 0x179307/0x27a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 91381760 unmapped: 21069824 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 174 handle_osd_map epochs [175,175], i have 174, src has [1,175]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 175 heartbeat osd_stat(store_statfs(0x4fb3f3000/0x0/0x4ffc00000, data 0x179307/0x27a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 175 ms_handle_reset con 0x556dcec60c00 session 0x556dcb5265a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92471296 unmapped: 19980288 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 175 ms_handle_reset con 0x556dcc70a400 session 0x556dca8a03c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92471296 unmapped: 19980288 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 175 ms_handle_reset con 0x556dca26dc00 session 0x556dcd767e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 175 handle_osd_map epochs [175,176], i have 175, src has [1,176]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 176 ms_handle_reset con 0x556dcae66000 session 0x556dccf4c1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087822 data_alloc: 218103808 data_used: 475136
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 176 ms_handle_reset con 0x556dcb58c400 session 0x556dcd68af00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92479488 unmapped: 19972096 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 176 ms_handle_reset con 0x556dcec60c00 session 0x556dcc6385a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 176 ms_handle_reset con 0x556dccbec800 session 0x556dcd21fc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92487680 unmapped: 19963904 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.135383606s of 10.558283806s, submitted: 131
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 176 heartbeat osd_stat(store_statfs(0x4fb3eb000/0x0/0x4ffc00000, data 0x17cba5/0x282000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 176 ms_handle_reset con 0x556dca26dc00 session 0x556dcd767c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92495872 unmapped: 19955712 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 176 ms_handle_reset con 0x556dcb58c400 session 0x556dcacd6b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92512256 unmapped: 19939328 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 176 heartbeat osd_stat(store_statfs(0x4fb3eb000/0x0/0x4ffc00000, data 0x17cc07/0x283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 176 ms_handle_reset con 0x556dcec60c00 session 0x556dcb50d680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 176 handle_osd_map epochs [176,177], i have 176, src has [1,177]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 177 ms_handle_reset con 0x556dca26c400 session 0x556dcb50d4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 19881984 heap: 112451584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 177 ms_handle_reset con 0x556dca26d400 session 0x556dcd21f2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 177 handle_osd_map epochs [178,178], i have 177, src has [1,178]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 178 ms_handle_reset con 0x556dca26d400 session 0x556dcd21f0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 178 ms_handle_reset con 0x556dca26c400 session 0x556dcacd7e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100756 data_alloc: 218103808 data_used: 495616
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 178 ms_handle_reset con 0x556dcae66000 session 0x556dcc10d680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 95805440 unmapped: 20979712 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 178 ms_handle_reset con 0x556dca26dc00 session 0x556dcd70cb40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 178 ms_handle_reset con 0x556dcb58c400 session 0x556dcc10c5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 24109056 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 178 handle_osd_map epochs [178,179], i have 178, src has [1,179]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 179 ms_handle_reset con 0x556dca26d400 session 0x556dcb50cd20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 24059904 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 179 ms_handle_reset con 0x556dca26c400 session 0x556dcd26cd20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 179 ms_handle_reset con 0x556dca26dc00 session 0x556dcb50d680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 179 heartbeat osd_stat(store_statfs(0x4fb3e4000/0x0/0x4ffc00000, data 0x181ea6/0x289000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 24059904 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 179 handle_osd_map epochs [180,180], i have 179, src has [1,180]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 180 ms_handle_reset con 0x556dcae66000 session 0x556dca8a0960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 24035328 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 180 ms_handle_reset con 0x556dcb58c400 session 0x556dccf4d2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107757 data_alloc: 218103808 data_used: 507904
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 24035328 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 24035328 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 180 ms_handle_reset con 0x556dca26c400 session 0x556dcc122960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.515147209s of 10.343353271s, submitted: 134
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 180 ms_handle_reset con 0x556dca26d400 session 0x556dcd70cf00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92766208 unmapped: 24018944 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 180 heartbeat osd_stat(store_statfs(0x4fb3e0000/0x0/0x4ffc00000, data 0x183a31/0x28b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 180 ms_handle_reset con 0x556dca26dc00 session 0x556dcd8670e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92766208 unmapped: 24018944 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 180 ms_handle_reset con 0x556dcae66000 session 0x556dcad665a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 180 handle_osd_map epochs [180,181], i have 180, src has [1,181]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92766208 unmapped: 24018944 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 181 ms_handle_reset con 0x556dcec60c00 session 0x556dcae0f0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 181 heartbeat osd_stat(store_statfs(0x4fb3de000/0x0/0x4ffc00000, data 0x185648/0x28f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 181 handle_osd_map epochs [182,182], i have 181, src has [1,182]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115393 data_alloc: 218103808 data_used: 520192
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92774400 unmapped: 24010752 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92774400 unmapped: 24010752 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 182 heartbeat osd_stat(store_statfs(0x4fb3da000/0x0/0x4ffc00000, data 0x1870e3/0x292000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 182 ms_handle_reset con 0x556dca26c400 session 0x556dcd26dc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 182 handle_osd_map epochs [183,183], i have 182, src has [1,183]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 183 ms_handle_reset con 0x556dca26d400 session 0x556dcaccb680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92798976 unmapped: 23986176 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 183 ms_handle_reset con 0x556dca26dc00 session 0x556dcc8b4960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 183 ms_handle_reset con 0x556dcae66000 session 0x556dcad67a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92807168 unmapped: 23977984 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92807168 unmapped: 23977984 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117303 data_alloc: 218103808 data_used: 528384
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92807168 unmapped: 23977984 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 183 ms_handle_reset con 0x556dcc70a000 session 0x556dcb50dc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92807168 unmapped: 23977984 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 183 heartbeat osd_stat(store_statfs(0x4fb3d9000/0x0/0x4ffc00000, data 0x188c52/0x294000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92807168 unmapped: 23977984 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.920736313s of 11.034669876s, submitted: 58
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 183 ms_handle_reset con 0x556dca26c400 session 0x556dcd68ba40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92823552 unmapped: 23961600 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 183 heartbeat osd_stat(store_statfs(0x4fb3da000/0x0/0x4ffc00000, data 0x188c52/0x294000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 183 ms_handle_reset con 0x556dca26d400 session 0x556dcaca9860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 183 ms_handle_reset con 0x556dca26dc00 session 0x556dcc6ea5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92790784 unmapped: 23994368 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 183 handle_osd_map epochs [184,184], i have 183, src has [1,184]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 184 ms_handle_reset con 0x556dcae66000 session 0x556dcd766b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120597 data_alloc: 218103808 data_used: 536576
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92790784 unmapped: 23994368 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 184 heartbeat osd_stat(store_statfs(0x4fb3d6000/0x0/0x4ffc00000, data 0x18a807/0x297000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92790784 unmapped: 23994368 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 184 handle_osd_map epochs [185,185], i have 184, src has [1,185]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 184 heartbeat osd_stat(store_statfs(0x4fb3d6000/0x0/0x4ffc00000, data 0x18a807/0x297000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,1])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 184 handle_osd_map epochs [185,185], i have 185, src has [1,185]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 184 handle_osd_map epochs [185,185], i have 185, src has [1,185]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 185 ms_handle_reset con 0x556dcc70c800 session 0x556dca905c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92790784 unmapped: 23994368 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 185 ms_handle_reset con 0x556dca26c400 session 0x556dcd35eb40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 185 ms_handle_reset con 0x556dca26d400 session 0x556dcd21f860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 185 handle_osd_map epochs [186,186], i have 185, src has [1,186]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 186 ms_handle_reset con 0x556dca26dc00 session 0x556dcb57de00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92790784 unmapped: 23994368 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 186 handle_osd_map epochs [186,187], i have 186, src has [1,187]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 187 ms_handle_reset con 0x556dcae66000 session 0x556dcba32b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 187 ms_handle_reset con 0x556dccb29800 session 0x556dcd35e3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92807168 unmapped: 23977984 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 187 ms_handle_reset con 0x556dca26c400 session 0x556dcd35ef00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129519 data_alloc: 218103808 data_used: 536576
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92807168 unmapped: 23977984 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 187 ms_handle_reset con 0x556dca26d400 session 0x556dcd7665a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92807168 unmapped: 23977984 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 187 heartbeat osd_stat(store_statfs(0x4fb3cd000/0x0/0x4ffc00000, data 0x18fb26/0x2a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92807168 unmapped: 23977984 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 187 handle_osd_map epochs [188,188], i have 187, src has [1,188]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.990618706s of 10.109800339s, submitted: 44
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 188 ms_handle_reset con 0x556dca26dc00 session 0x556dccf4c3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92807168 unmapped: 23977984 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 188 ms_handle_reset con 0x556dcf0e0000 session 0x556dcc6385a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 188 ms_handle_reset con 0x556dcae66000 session 0x556dcad674a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 23969792 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 188 handle_osd_map epochs [188,189], i have 188, src has [1,189]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138725 data_alloc: 218103808 data_used: 536576
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 23969792 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 189 ms_handle_reset con 0x556dca26c400 session 0x556dcd21e000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 189 ms_handle_reset con 0x556dca26d400 session 0x556dca8a03c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 189 ms_handle_reset con 0x556dca26dc00 session 0x556dccb7b0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 189 heartbeat osd_stat(store_statfs(0x4fb3c4000/0x0/0x4ffc00000, data 0x1931a4/0x2a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 189 ms_handle_reset con 0x556dcf0e0000 session 0x556dccb7ba40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92839936 unmapped: 23945216 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 189 ms_handle_reset con 0x556dcf0e0800 session 0x556dcae112c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 189 handle_osd_map epochs [189,190], i have 189, src has [1,190]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 190 ms_handle_reset con 0x556dcf0e0800 session 0x556dcae0c000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 190 heartbeat osd_stat(store_statfs(0x4fb3c0000/0x0/0x4ffc00000, data 0x194d75/0x2ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92848128 unmapped: 23937024 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 190 ms_handle_reset con 0x556dcf0e0400 session 0x556dcae0d680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92848128 unmapped: 23937024 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 190 ms_handle_reset con 0x556dca26c400 session 0x556dcd68b4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 190 ms_handle_reset con 0x556dca26d400 session 0x556dcd7670e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92856320 unmapped: 23928832 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 190 ms_handle_reset con 0x556dca26dc00 session 0x556dcd249680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144824 data_alloc: 218103808 data_used: 540672
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92864512 unmapped: 23920640 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 190 heartbeat osd_stat(store_statfs(0x4fb3c2000/0x0/0x4ffc00000, data 0x194d75/0x2ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 92864512 unmapped: 23920640 heap: 116785152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 190 ms_handle_reset con 0x556dca26c400 session 0x556dcc10d2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 190 ms_handle_reset con 0x556dca26d400 session 0x556dcd26dc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 190 ms_handle_reset con 0x556dca26dc00 session 0x556dcb9c6000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 93790208 unmapped: 24616960 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 93798400 unmapped: 24608768 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 190 heartbeat osd_stat(store_statfs(0x4fae51000/0x0/0x4ffc00000, data 0x707d03/0x81d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 93798400 unmapped: 24608768 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190723 data_alloc: 218103808 data_used: 540672
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 93798400 unmapped: 24608768 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 190 handle_osd_map epochs [191,191], i have 190, src has [1,191]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.476690292s of 12.918676376s, submitted: 65
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 ms_handle_reset con 0x556dcf0e0400 session 0x556dcd35f680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 93798400 unmapped: 24608768 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 ms_handle_reset con 0x556dcf0e0800 session 0x556dcae10000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 ms_handle_reset con 0x556dca26c400 session 0x556dcacd61e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 93798400 unmapped: 24608768 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 ms_handle_reset con 0x556dca26d400 session 0x556dccf4c000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 ms_handle_reset con 0x556dca26dc00 session 0x556dcc6ea960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 93372416 unmapped: 25034752 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 heartbeat osd_stat(store_statfs(0x4fae21000/0x0/0x4ffc00000, data 0x7337fb/0x84d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 ms_handle_reset con 0x556dcf0e0c00 session 0x556dcd21e780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 93380608 unmapped: 25026560 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 ms_handle_reset con 0x556dcf0e1000 session 0x556dcd3661e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229352 data_alloc: 218103808 data_used: 4354048
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 94601216 unmapped: 23805952 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 ms_handle_reset con 0x556dca26c400 session 0x556dcd837c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 94617600 unmapped: 23789568 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 ms_handle_reset con 0x556dca26d400 session 0x556dcd836960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 heartbeat osd_stat(store_statfs(0x4fae21000/0x0/0x4ffc00000, data 0x7337fb/0x84d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 94617600 unmapped: 23789568 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 ms_handle_reset con 0x556dca26dc00 session 0x556dcd8374a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 ms_handle_reset con 0x556dcf0e0c00 session 0x556dcd836d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 94633984 unmapped: 23773184 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 ms_handle_reset con 0x556dcf0e1000 session 0x556dcd8365a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 ms_handle_reset con 0x556dca26c400 session 0x556dcd3fcd20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 23756800 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 ms_handle_reset con 0x556dca26d400 session 0x556dcd3fd2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1243095 data_alloc: 218103808 data_used: 5967872
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 94658560 unmapped: 23748608 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 94658560 unmapped: 23748608 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 heartbeat osd_stat(store_statfs(0x4fae22000/0x0/0x4ffc00000, data 0x733799/0x84c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.409967422s of 10.536918640s, submitted: 48
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 ms_handle_reset con 0x556dca26dc00 session 0x556dcd3fdc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 ms_handle_reset con 0x556dcf0e0c00 session 0x556dcd3fc1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 23732224 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 23732224 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 23732224 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342544 data_alloc: 218103808 data_used: 6074368
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 16662528 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 heartbeat osd_stat(store_statfs(0x4fa20c000/0x0/0x4ffc00000, data 0x1341799/0x145a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 102973440 unmapped: 15433728 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 ms_handle_reset con 0x556dcf0e1800 session 0x556dcc747a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 99278848 unmapped: 19128320 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 heartbeat osd_stat(store_statfs(0x4fa170000/0x0/0x4ffc00000, data 0x13e5799/0x14fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 99278848 unmapped: 19128320 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 99278848 unmapped: 19128320 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 ms_handle_reset con 0x556dca26c400 session 0x556dcc7465a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 ms_handle_reset con 0x556dca26d400 session 0x556dcc7472c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1357012 data_alloc: 218103808 data_used: 6336512
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 99295232 unmapped: 19111936 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 99303424 unmapped: 19103744 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 ms_handle_reset con 0x556dca26dc00 session 0x556dcc6385a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 heartbeat osd_stat(store_statfs(0x4fa170000/0x0/0x4ffc00000, data 0x13e5799/0x14fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.060540199s of 10.439014435s, submitted: 128
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 99581952 unmapped: 18825216 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 ms_handle_reset con 0x556dcf0e0c00 session 0x556dcabe3680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 18833408 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 ms_handle_reset con 0x556dcf0e1c00 session 0x556dcae0c780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 99557376 unmapped: 18849792 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 heartbeat osd_stat(store_statfs(0x4fa14b000/0x0/0x4ffc00000, data 0x140a799/0x1523000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1357021 data_alloc: 218103808 data_used: 6340608
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 ms_handle_reset con 0x556dca26d400 session 0x556dcae0d2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 98287616 unmapped: 20119552 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 98287616 unmapped: 20119552 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 ms_handle_reset con 0x556dcf0e0c00 session 0x556dcad67a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 191 handle_osd_map epochs [192,192], i have 191, src has [1,192]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 192 ms_handle_reset con 0x556dccbdec00 session 0x556dcd3fc3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 98328576 unmapped: 20078592 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 192 handle_osd_map epochs [192,193], i have 192, src has [1,193]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 193 ms_handle_reset con 0x556dcd91f000 session 0x556dcd35ef00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 98385920 unmapped: 20021248 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 193 ms_handle_reset con 0x556dca26dc00 session 0x556dcd68b2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 193 handle_osd_map epochs [194,194], i have 193, src has [1,194]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 98418688 unmapped: 19988480 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 194 ms_handle_reset con 0x556dca26d400 session 0x556dcc899c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1371002 data_alloc: 218103808 data_used: 6356992
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 98418688 unmapped: 19988480 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 194 heartbeat osd_stat(store_statfs(0x4fa13b000/0x0/0x4ffc00000, data 0x1414ac6/0x1532000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 194 ms_handle_reset con 0x556dca26c400 session 0x556dcba32b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 103211008 unmapped: 15196160 heap: 118407168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 194 ms_handle_reset con 0x556dccbdec00 session 0x556dcd766b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 194 ms_handle_reset con 0x556dcf0e0000 session 0x556dcae112c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.860937119s of 10.190710068s, submitted: 90
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 194 ms_handle_reset con 0x556dcf0e0400 session 0x556dcb50da40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 20422656 heap: 121077760 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 194 ms_handle_reset con 0x556dca26c400 session 0x556dcd367a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 96813056 unmapped: 24264704 heap: 121077760 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 194 ms_handle_reset con 0x556dca26d400 session 0x556dcd3fcb40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 194 ms_handle_reset con 0x556dccbdec00 session 0x556dcd846b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 96821248 unmapped: 24256512 heap: 121077760 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 194 heartbeat osd_stat(store_statfs(0x4faade000/0x0/0x4ffc00000, data 0xa73a93/0xb8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 194 handle_osd_map epochs [195,195], i have 194, src has [1,195]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260749 data_alloc: 218103808 data_used: 581632
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 195 ms_handle_reset con 0x556dcf0e0000 session 0x556dcd26a780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 95821824 unmapped: 25255936 heap: 121077760 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 195 ms_handle_reset con 0x556dcd91f000 session 0x556dcba32d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 195 ms_handle_reset con 0x556dca26c400 session 0x556dcad670e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 95821824 unmapped: 25255936 heap: 121077760 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 195 ms_handle_reset con 0x556dca26d400 session 0x556dcd2490e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 195 ms_handle_reset con 0x556dccbdec00 session 0x556dcaca8f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 96124928 unmapped: 24952832 heap: 121077760 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 195 handle_osd_map epochs [196,196], i have 195, src has [1,196]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 196 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd68b2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 196 ms_handle_reset con 0x556dcf0e0000 session 0x556dcae0e960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 196 ms_handle_reset con 0x556dca26c400 session 0x556dcb50c780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 196 ms_handle_reset con 0x556dcacd5400 session 0x556dcd68b4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 96821248 unmapped: 24256512 heap: 121077760 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 196 ms_handle_reset con 0x556dca26d400 session 0x556dcd1a2f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 196 ms_handle_reset con 0x556dccbdec00 session 0x556dcad62d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 196 heartbeat osd_stat(store_statfs(0x4fa6a3000/0x0/0x4ffc00000, data 0xa9b0e9/0xbba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 101376000 unmapped: 19701760 heap: 121077760 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 196 heartbeat osd_stat(store_statfs(0x4fa2e1000/0x0/0x4ffc00000, data 0xe5d0e9/0xf7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1368183 data_alloc: 234881024 data_used: 9469952
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 101490688 unmapped: 19587072 heap: 121077760 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 196 handle_osd_map epochs [196,197], i have 196, src has [1,197]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 197 ms_handle_reset con 0x556dce83c000 session 0x556dccf4a3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 197 ms_handle_reset con 0x556dcacd4c00 session 0x556dcad67a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 197 ms_handle_reset con 0x556dcf0e0c00 session 0x556dcd26c960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 97533952 unmapped: 23543808 heap: 121077760 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 197 ms_handle_reset con 0x556dca26d400 session 0x556dcd70c3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 197 ms_handle_reset con 0x556dca26c400 session 0x556dcd1a34a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 197 ms_handle_reset con 0x556dcacd5400 session 0x556dcd70da40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 197 heartbeat osd_stat(store_statfs(0x4fabdb000/0x0/0x4ffc00000, data 0x562c50/0x681000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 96952320 unmapped: 24125440 heap: 121077760 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 197 ms_handle_reset con 0x556dcacd5400 session 0x556dcad63e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.108415604s of 10.539045334s, submitted: 144
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 197 ms_handle_reset con 0x556dca26c400 session 0x556dcd26a780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 96952320 unmapped: 24125440 heap: 121077760 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 197 ms_handle_reset con 0x556dca26d400 session 0x556dcc899c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 96952320 unmapped: 24125440 heap: 121077760 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 197 handle_osd_map epochs [198,198], i have 197, src has [1,198]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239969 data_alloc: 218103808 data_used: 585728
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 198 ms_handle_reset con 0x556dcacd4c00 session 0x556dcba323c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 198 ms_handle_reset con 0x556dcf0e0c00 session 0x556dcba332c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 96952320 unmapped: 24125440 heap: 121077760 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 96952320 unmapped: 24125440 heap: 121077760 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 198 heartbeat osd_stat(store_statfs(0x4fabd7000/0x0/0x4ffc00000, data 0x5646d3/0x686000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 198 ms_handle_reset con 0x556dca26c400 session 0x556dcacd70e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 198 ms_handle_reset con 0x556dca26d400 session 0x556dcc6eba40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 198 ms_handle_reset con 0x556dcacd4c00 session 0x556dcacd61e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 98271232 unmapped: 22806528 heap: 121077760 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 198 ms_handle_reset con 0x556dcacd5400 session 0x556dcd3fc960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 198 ms_handle_reset con 0x556dccbdec00 session 0x556dcd766d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 98287616 unmapped: 22790144 heap: 121077760 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 198 heartbeat osd_stat(store_statfs(0x4fa79c000/0x0/0x4ffc00000, data 0x99f735/0xac2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 198 handle_osd_map epochs [198,199], i have 198, src has [1,199]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 199 ms_handle_reset con 0x556dca26d400 session 0x556dcae112c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 199 ms_handle_reset con 0x556dcacd4c00 session 0x556dcae10780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 199 ms_handle_reset con 0x556dcacd5400 session 0x556dcae114a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 98320384 unmapped: 22757376 heap: 121077760 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 199 ms_handle_reset con 0x556dcb58d400 session 0x556dcc747860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 199 ms_handle_reset con 0x556dcec61800 session 0x556dcd21f4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 199 ms_handle_reset con 0x556dca26c400 session 0x556dcabe2960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa79c000/0x0/0x4ffc00000, data 0x99f735/0xac2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,0,0,1])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 199 ms_handle_reset con 0x556dccbe3800 session 0x556dcc746b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 199 ms_handle_reset con 0x556dcec61800 session 0x556dcd1a3c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 199 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd1a32c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 199 ms_handle_reset con 0x556dcacd5400 session 0x556dcd1a3860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320044 data_alloc: 218103808 data_used: 602112
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 199 ms_handle_reset con 0x556dcacd5400 session 0x556dcbab0000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 98426880 unmapped: 25804800 heap: 124231680 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 199 ms_handle_reset con 0x556dca26c400 session 0x556dcae0e000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 199 handle_osd_map epochs [200,200], i have 199, src has [1,200]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 200 ms_handle_reset con 0x556dcacd4c00 session 0x556dcad670e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 99745792 unmapped: 24485888 heap: 124231680 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 200 ms_handle_reset con 0x556dccbe3800 session 0x556dcd35eb40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 200 handle_osd_map epochs [200,201], i have 200, src has [1,201]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 201 ms_handle_reset con 0x556dcec61800 session 0x556dcba32d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 201 ms_handle_reset con 0x556dca26c400 session 0x556dcba323c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 201 ms_handle_reset con 0x556dca26d400 session 0x556dcc8992c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 201 ms_handle_reset con 0x556dcacd4c00 session 0x556dcac45e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 99762176 unmapped: 24469504 heap: 124231680 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 201 ms_handle_reset con 0x556dccbe3800 session 0x556dcba32b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.053807259s of 10.203743935s, submitted: 152
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 201 ms_handle_reset con 0x556dcacd5400 session 0x556dcad69680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 201 ms_handle_reset con 0x556dca26c400 session 0x556dcc6383c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 201 heartbeat osd_stat(store_statfs(0x4f9eeb000/0x0/0x4ffc00000, data 0x124805e/0x1371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 201 ms_handle_reset con 0x556dca26d400 session 0x556dcd767e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 99762176 unmapped: 24469504 heap: 124231680 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 201 ms_handle_reset con 0x556dcacd4c00 session 0x556dcbab0f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 201 ms_handle_reset con 0x556dccbe3800 session 0x556dcc6ea3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 24322048 heap: 124231680 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 201 ms_handle_reset con 0x556dcd334c00 session 0x556dcc899c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405775 data_alloc: 218103808 data_used: 618496
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 201 heartbeat osd_stat(store_statfs(0x4f9ec7000/0x0/0x4ffc00000, data 0x126c091/0x1397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 100081664 unmapped: 24150016 heap: 124231680 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 202 ms_handle_reset con 0x556dcf0e1000 session 0x556dcb57c780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 202 ms_handle_reset con 0x556dca26c400 session 0x556dcc8b4960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 202 ms_handle_reset con 0x556dcb58d400 session 0x556dcd68a780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 100122624 unmapped: 24109056 heap: 124231680 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 202 heartbeat osd_stat(store_statfs(0x4f9bc2000/0x0/0x4ffc00000, data 0x156dc70/0x169b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 100122624 unmapped: 24109056 heap: 124231680 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 202 ms_handle_reset con 0x556dcec60000 session 0x556dcd70c3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 202 ms_handle_reset con 0x556dcd334c00 session 0x556dccf4a780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 202 handle_osd_map epochs [202,203], i have 202, src has [1,203]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 202 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 203 ms_handle_reset con 0x556dca26d400 session 0x556dcc746960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 97705984 unmapped: 26525696 heap: 124231680 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 203 ms_handle_reset con 0x556dca26c400 session 0x556dccf4b0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 203 ms_handle_reset con 0x556dcd334c00 session 0x556dcc6eaf00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 203 handle_osd_map epochs [204,204], i have 203, src has [1,204]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 97755136 unmapped: 26476544 heap: 124231680 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 204 ms_handle_reset con 0x556dcb58d400 session 0x556dcad63680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 204 ms_handle_reset con 0x556dcec60000 session 0x556dcd7663c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 204 ms_handle_reset con 0x556dca26c400 session 0x556dccb7b680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1318553 data_alloc: 218103808 data_used: 630784
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 204 ms_handle_reset con 0x556dca26d400 session 0x556dcab66000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 97763328 unmapped: 26468352 heap: 124231680 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 204 handle_osd_map epochs [205,205], i have 204, src has [1,205]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 205 ms_handle_reset con 0x556dcb58d400 session 0x556dcd836780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 97779712 unmapped: 26451968 heap: 124231680 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 205 ms_handle_reset con 0x556dcd334c00 session 0x556dcd26dc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 205 ms_handle_reset con 0x556dcf0e1000 session 0x556dcabe2f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 26443776 heap: 124231680 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.569068909s of 10.175397873s, submitted: 168
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 205 ms_handle_reset con 0x556dca26d400 session 0x556dcd847a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 205 heartbeat osd_stat(store_statfs(0x4fabc2000/0x0/0x4ffc00000, data 0x570872/0x69b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 205 ms_handle_reset con 0x556dca26c400 session 0x556dcc65ad20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 26402816 heap: 132620288 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 205 ms_handle_reset con 0x556dcd334c00 session 0x556dcd846f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 205 handle_osd_map epochs [206,206], i have 205, src has [1,206]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 43155456 heap: 141017088 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 206 ms_handle_reset con 0x556dcacd4c00 session 0x556dcc8b5c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 206 ms_handle_reset con 0x556dccbe3800 session 0x556dcaccad20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1605055 data_alloc: 218103808 data_used: 643072
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 206 ms_handle_reset con 0x556dccbe3800 session 0x556dcd248d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 206 ms_handle_reset con 0x556dca26c400 session 0x556dcb50d860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 42213376 heap: 141017088 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 207 ms_handle_reset con 0x556dcacd4c00 session 0x556dcae10000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 207 ms_handle_reset con 0x556dcd334c00 session 0x556dcc6eb4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 208 ms_handle_reset con 0x556dca26d400 session 0x556dcd26d2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 98910208 unmapped: 42106880 heap: 141017088 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 208 ms_handle_reset con 0x556dcec60400 session 0x556dccb7ab40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 208 heartbeat osd_stat(store_statfs(0x4f793c000/0x0/0x4ffc00000, data 0x37edb3f/0x3920000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 208 ms_handle_reset con 0x556dca26d400 session 0x556dcd26d860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 42033152 heap: 141017088 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 208 ms_handle_reset con 0x556dca26c400 session 0x556dccf4a5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 208 ms_handle_reset con 0x556dcacd4c00 session 0x556dcb9c61e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 99049472 unmapped: 41967616 heap: 141017088 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 208 ms_handle_reset con 0x556dccbe3800 session 0x556dcd26de00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 99164160 unmapped: 41852928 heap: 141017088 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 208 handle_osd_map epochs [209,209], i have 208, src has [1,209]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 209 ms_handle_reset con 0x556dca26d400 session 0x556dcb9c6f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 209 heartbeat osd_stat(store_statfs(0x4f5bb4000/0x0/0x4ffc00000, data 0x5575bb1/0x56aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 209 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd68bc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1926655 data_alloc: 218103808 data_used: 667648
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 209 handle_osd_map epochs [210,210], i have 209, src has [1,210]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 210 ms_handle_reset con 0x556dcec60400 session 0x556dcd8474a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 210 ms_handle_reset con 0x556dcd334c00 session 0x556dcc747a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 210 ms_handle_reset con 0x556dcf0e1400 session 0x556dcc747c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 99467264 unmapped: 41549824 heap: 141017088 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 210 ms_handle_reset con 0x556dca26c400 session 0x556dcbab1a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 99508224 unmapped: 41508864 heap: 141017088 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 210 ms_handle_reset con 0x556dca26d400 session 0x556dcae0c000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 210 handle_osd_map epochs [210,211], i have 210, src has [1,211]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 211 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd26d2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 99811328 unmapped: 41205760 heap: 141017088 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 211 ms_handle_reset con 0x556dcd334c00 session 0x556dcd68be00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 211 ms_handle_reset con 0x556dcec60400 session 0x556dcabe2f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.547702789s of 10.001538277s, submitted: 258
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 211 handle_osd_map epochs [211,212], i have 211, src has [1,212]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 212 handle_osd_map epochs [212,212], i have 212, src has [1,212]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 212 ms_handle_reset con 0x556dca26c400 session 0x556dcd35e780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 212 heartbeat osd_stat(store_statfs(0x4f1ba9000/0x0/0x4ffc00000, data 0x957adc4/0x96b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 99958784 unmapped: 41058304 heap: 141017088 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 212 handle_osd_map epochs [212,213], i have 212, src has [1,213]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 213 handle_osd_map epochs [213,213], i have 213, src has [1,213]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 213 ms_handle_reset con 0x556dcacd4c00 session 0x556dcad63680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 100122624 unmapped: 40894464 heap: 141017088 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 213 ms_handle_reset con 0x556dcd334c00 session 0x556dcd70c3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 213 ms_handle_reset con 0x556dca26d400 session 0x556dcd1a25a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2723029 data_alloc: 218103808 data_used: 684032
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 213 heartbeat osd_stat(store_statfs(0x4efba6000/0x0/0x4ffc00000, data 0xb57e4ae/0xb6b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 40697856 heap: 141017088 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 40632320 heap: 141017088 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 213 heartbeat osd_stat(store_statfs(0x4ed3a9000/0x0/0x4ffc00000, data 0xdd7e3da/0xdeb4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 213 handle_osd_map epochs [214,214], i have 213, src has [1,214]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 100614144 unmapped: 40402944 heap: 141017088 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 214 ms_handle_reset con 0x556dcf0e1400 session 0x556dca8a0960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 214 ms_handle_reset con 0x556dcf0e1400 session 0x556dcbab0780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 40280064 heap: 141017088 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 214 handle_osd_map epochs [215,215], i have 214, src has [1,215]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 215 ms_handle_reset con 0x556dca26c400 session 0x556dcd248960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 100966400 unmapped: 40050688 heap: 141017088 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 215 ms_handle_reset con 0x556dca26d400 session 0x556dcad670e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3333485 data_alloc: 218103808 data_used: 688128
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 39944192 heap: 141017088 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 215 heartbeat osd_stat(store_statfs(0x4e73a4000/0x0/0x4ffc00000, data 0x13d81ad0/0x13eba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 101203968 unmapped: 39813120 heap: 141017088 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 39600128 heap: 141017088 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.785984039s of 10.061553001s, submitted: 145
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 215 heartbeat osd_stat(store_statfs(0x4e63a4000/0x0/0x4ffc00000, data 0x14d81ad0/0x14eba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 215 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd3fc960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 215 ms_handle_reset con 0x556dcd334c00 session 0x556dcd3fd860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 215 ms_handle_reset con 0x556dca26c400 session 0x556dcd3fc1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 101638144 unmapped: 39378944 heap: 141017088 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 215 ms_handle_reset con 0x556dca26d400 session 0x556dcd3fc3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 215 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd3fcb40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 215 ms_handle_reset con 0x556dcf0e1400 session 0x556dcb50da40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 215 ms_handle_reset con 0x556dcb8efc00 session 0x556dcb50cd20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 215 ms_handle_reset con 0x556dca26c400 session 0x556dcd767e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 215 ms_handle_reset con 0x556dca26d400 session 0x556dcd7672c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 102604800 unmapped: 42614784 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 215 heartbeat osd_stat(store_statfs(0x4e27e0000/0x0/0x4ffc00000, data 0x18944ae0/0x18a7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4173896 data_alloc: 218103808 data_used: 692224
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 42459136 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 215 handle_osd_map epochs [216,216], i have 215, src has [1,216]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 216 ms_handle_reset con 0x556dcb58d400 session 0x556dcad62000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 216 ms_handle_reset con 0x556dcacd4c00 session 0x556dcc8985a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 216 ms_handle_reset con 0x556dcf0e1400 session 0x556dcd1a34a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 102776832 unmapped: 42442752 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 216 ms_handle_reset con 0x556dca26c400 session 0x556dcd70c780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 216 ms_handle_reset con 0x556dca26d400 session 0x556dcd847860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 216 handle_osd_map epochs [216,217], i have 216, src has [1,217]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 102809600 unmapped: 42409984 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 217 ms_handle_reset con 0x556dcae66000 session 0x556dcae0f860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 217 ms_handle_reset con 0x556dccb29800 session 0x556dcae0e780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 217 ms_handle_reset con 0x556dcc70c800 session 0x556dcae0e000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 217 ms_handle_reset con 0x556dcacd4c00 session 0x556dcabe30e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 217 ms_handle_reset con 0x556dca26c400 session 0x556dccb7a000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 217 ms_handle_reset con 0x556dca26d400 session 0x556dcd70d0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 217 ms_handle_reset con 0x556dcae66000 session 0x556dcad66d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 217 ms_handle_reset con 0x556dccb29800 session 0x556dcd1a32c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 104964096 unmapped: 40255488 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 217 ms_handle_reset con 0x556dcc70c800 session 0x556dcd8365a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 217 heartbeat osd_stat(store_statfs(0x4f989d000/0x0/0x4ffc00000, data 0x18822c8/0x19bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 217 ms_handle_reset con 0x556dcb58d400 session 0x556dcd7661e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 217 handle_osd_map epochs [217,218], i have 217, src has [1,218]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 42311680 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 218 ms_handle_reset con 0x556dca26c400 session 0x556dcd68a780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1477168 data_alloc: 218103808 data_used: 708608
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 42311680 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 218 heartbeat osd_stat(store_statfs(0x4fa460000/0x0/0x4ffc00000, data 0xcc0cd5/0xdfd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 42311680 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 218 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd26c960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 218 handle_osd_map epochs [218,219], i have 218, src has [1,219]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 42303488 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 219 ms_handle_reset con 0x556dca26d400 session 0x556dcd70c000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 219 ms_handle_reset con 0x556dca26c400 session 0x556dcd21ef00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 219 ms_handle_reset con 0x556dca26d400 session 0x556dcd68ba40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 42303488 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.767854691s of 10.822075844s, submitted: 192
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 219 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd3fc5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 102948864 unmapped: 42270720 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1488513 data_alloc: 218103808 data_used: 712704
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 102948864 unmapped: 42270720 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 220 ms_handle_reset con 0x556dcc70c800 session 0x556dcbab0960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 106463232 unmapped: 38756352 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 221 ms_handle_reset con 0x556dcdf3a000 session 0x556dccb7a5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 221 ms_handle_reset con 0x556dcdf3a000 session 0x556dcc746f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 221 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0xcc4492/0xe06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 106463232 unmapped: 38756352 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 221 ms_handle_reset con 0x556dcdf3ac00 session 0x556dcd767c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 38707200 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 222 ms_handle_reset con 0x556dca26c400 session 0x556dcc6ea1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 38682624 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 223 ms_handle_reset con 0x556dcdf3a400 session 0x556dcd766000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 223 ms_handle_reset con 0x556dca26d400 session 0x556dcc7461e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558704 data_alloc: 218103808 data_used: 8216576
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 38641664 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 223 ms_handle_reset con 0x556dcdf3ac00 session 0x556dcc6ea000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 223 ms_handle_reset con 0x556dcdf3a400 session 0x556dcc6ebc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 223 ms_handle_reset con 0x556dcdf3a000 session 0x556dcc6ea960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 38625280 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 223 ms_handle_reset con 0x556dca26c400 session 0x556dccf4d4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 223 heartbeat osd_stat(store_statfs(0x4fa44c000/0x0/0x4ffc00000, data 0xcc96ab/0xe0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 223 ms_handle_reset con 0x556dca26d400 session 0x556dccf4de00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 38625280 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 38625280 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.102532387s of 10.348859787s, submitted: 83
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 224 ms_handle_reset con 0x556dca26d400 session 0x556dccf4c5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 38608896 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1564632 data_alloc: 218103808 data_used: 8237056
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 38608896 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 224 ms_handle_reset con 0x556dca26c400 session 0x556dcae110e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 108822528 unmapped: 36397056 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 224 heartbeat osd_stat(store_statfs(0x4fa449000/0x0/0x4ffc00000, data 0xccb354/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 224 ms_handle_reset con 0x556dcdf3a000 session 0x556dcb682d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 29540352 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 224 ms_handle_reset con 0x556dcdf3a400 session 0x556dcc8990e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 12K writes, 47K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 12K writes, 3681 syncs, 3.39 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5692 writes, 19K keys, 5692 commit groups, 1.0 writes per commit group, ingest: 11.30 MB, 0.02 MB/s#012Interval WAL: 5692 writes, 2435 syncs, 2.34 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 115990528 unmapped: 29229056 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 224 ms_handle_reset con 0x556dcdf3ac00 session 0x556dcd846000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 224 ms_handle_reset con 0x556dcdf3ac00 session 0x556dcba332c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 29261824 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 224 ms_handle_reset con 0x556dca26c400 session 0x556dcd21e1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 224 ms_handle_reset con 0x556dca26d400 session 0x556dcd8470e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1671449 data_alloc: 218103808 data_used: 9314304
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 113623040 unmapped: 31596544 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 113623040 unmapped: 31596544 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 224 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x18b62e2/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 113623040 unmapped: 31596544 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 224 ms_handle_reset con 0x556dcdf3a000 session 0x556dcc8b4780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 224 ms_handle_reset con 0x556dcdf3a400 session 0x556dcd1a2d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 224 ms_handle_reset con 0x556dca26c400 session 0x556dcd68af00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 113623040 unmapped: 31596544 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 224 ms_handle_reset con 0x556dca6bd800 session 0x556dca0945a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 224 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x18b62e2/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: mgrc ms_handle_reset ms_handle_reset con 0x556dca101c00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2127781581
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2127781581,v1:192.168.122.100:6801/2127781581]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: mgrc handle_mgr_configure stats_period=5
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 31449088 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 224 handle_osd_map epochs [225,225], i have 224, src has [1,225]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.354948044s of 10.826604843s, submitted: 177
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 225 ms_handle_reset con 0x556dcb941800 session 0x556dcb9c65a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 225 ms_handle_reset con 0x556dca6bc000 session 0x556dcd184780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 225 ms_handle_reset con 0x556dcdf3a000 session 0x556dcc65a3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 225 ms_handle_reset con 0x556dcacd4c00 session 0x556dcad665a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1681545 data_alloc: 234881024 data_used: 9330688
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32653312 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 225 ms_handle_reset con 0x556dcd91ec00 session 0x556dcd366d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 225 ms_handle_reset con 0x556dcc70c800 session 0x556dcb50da40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 32620544 heap: 145219584 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 225 ms_handle_reset con 0x556dcd91ec00 session 0x556dcd248d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 40951808 heap: 157827072 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 225 ms_handle_reset con 0x556dcdf3a000 session 0x556dcae0f860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 116924416 unmapped: 40902656 heap: 157827072 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 225 heartbeat osd_stat(store_statfs(0x4f245d000/0x0/0x4ffc00000, data 0x88b7ec1/0x8a01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,0,2,3,2,3,1])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 225 ms_handle_reset con 0x556dcd91f000 session 0x556dccf4c1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 225 ms_handle_reset con 0x556dcefedc00 session 0x556dcae0fa40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 225 ms_handle_reset con 0x556dcefed800 session 0x556dca8a0000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 118398976 unmapped: 43630592 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 225 handle_osd_map epochs [226,226], i have 225, src has [1,226]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 226 ms_handle_reset con 0x556dcd91f000 session 0x556dcd1850e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3244609 data_alloc: 218103808 data_used: 9347072
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 47816704 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 227 ms_handle_reset con 0x556dcdf3a000 session 0x556dcd766000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 227 ms_handle_reset con 0x556dcd91ec00 session 0x556dcc7470e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 227 ms_handle_reset con 0x556dcefedc00 session 0x556dcc638b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 227 ms_handle_reset con 0x556dcd91e800 session 0x556dccf4c000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 118448128 unmapped: 43581440 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 227 ms_handle_reset con 0x556dcd91ec00 session 0x556dcc6eb2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 227 ms_handle_reset con 0x556dcefedc00 session 0x556dcd766960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 47726592 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 227 heartbeat osd_stat(store_statfs(0x4e712f000/0x0/0x4ffc00000, data 0x13bcdb9d/0x13d1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [0,0,0,0,0,0,1,1])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 227 handle_osd_map epochs [228,228], i have 227, src has [1,228]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 228 ms_handle_reset con 0x556dcd91f000 session 0x556dcc6eaf00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 228 ms_handle_reset con 0x556dcefed000 session 0x556dcc10c5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 114401280 unmapped: 47628288 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 228 handle_osd_map epochs [229,229], i have 228, src has [1,229]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 229 ms_handle_reset con 0x556dca26c400 session 0x556dcad68d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 229 ms_handle_reset con 0x556dcefed400 session 0x556dcc6eb860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 229 ms_handle_reset con 0x556dcc70c800 session 0x556dcc122960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 229 ms_handle_reset con 0x556dcdf3a000 session 0x556dcd7670e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 229 heartbeat osd_stat(store_statfs(0x4e3127000/0x0/0x4ffc00000, data 0x17bd12e9/0x17d25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 229 ms_handle_reset con 0x556dcd91ec00 session 0x556dcaca8f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 115548160 unmapped: 46481408 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 229 ms_handle_reset con 0x556dcefed000 session 0x556dcc8b5e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.792219639s of 10.019071579s, submitted: 174
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 229 ms_handle_reset con 0x556dcd91f000 session 0x556dcc6ea3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 229 ms_handle_reset con 0x556dcc70c800 session 0x556dcc8990e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4149399 data_alloc: 218103808 data_used: 9363456
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 115589120 unmapped: 46440448 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 229 ms_handle_reset con 0x556dcdf3a000 session 0x556dcd35e3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 229 handle_osd_map epochs [230,230], i have 229, src has [1,230]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 230 ms_handle_reset con 0x556dcd91ec00 session 0x556dccb7ab40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 230 heartbeat osd_stat(store_statfs(0x4f9438000/0x0/0x4ffc00000, data 0x18c0eaa/0x1a15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 47939584 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 230 ms_handle_reset con 0x556dcefed400 session 0x556dcd847a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 230 ms_handle_reset con 0x556dcd91f000 session 0x556dcb682d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 230 handle_osd_map epochs [231,231], i have 230, src has [1,231]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 231 ms_handle_reset con 0x556dcc70c800 session 0x556dcb50d2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 47931392 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 231 ms_handle_reset con 0x556dcefedc00 session 0x556dcacd7e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 231 heartbeat osd_stat(store_statfs(0x4f9437000/0x0/0x4ffc00000, data 0x18c2a35/0x1a17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 231 handle_osd_map epochs [231,232], i have 231, src has [1,232]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 47890432 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 232 ms_handle_reset con 0x556dcdf3a000 session 0x556dcb50c780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 232 ms_handle_reset con 0x556dcd91ec00 session 0x556dcc747e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 232 handle_osd_map epochs [233,233], i have 232, src has [1,233]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 47890432 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 233 ms_handle_reset con 0x556dcd91ec00 session 0x556dcb50d4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1817357 data_alloc: 234881024 data_used: 9379840
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 233 handle_osd_map epochs [233,234], i have 233, src has [1,234]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 47857664 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 234 ms_handle_reset con 0x556dcd91f000 session 0x556dcc746f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 234 ms_handle_reset con 0x556dcc70c800 session 0x556dcad634a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 234 ms_handle_reset con 0x556dcefedc00 session 0x556dcd21f4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 234 ms_handle_reset con 0x556dcdf3a000 session 0x556dcc747c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 47857664 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 234 ms_handle_reset con 0x556dcdf3a000 session 0x556dcad674a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 234 heartbeat osd_stat(store_statfs(0x4fa451000/0x0/0x4ffc00000, data 0x18c7884/0x1a1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 234 ms_handle_reset con 0x556dcacd4c00 session 0x556dcb527680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 47857664 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 47857664 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 234 ms_handle_reset con 0x556dcae66000 session 0x556dccb7b0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 234 ms_handle_reset con 0x556dcb58d400 session 0x556dcd1845a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 234 handle_osd_map epochs [235,235], i have 234, src has [1,235]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 235 ms_handle_reset con 0x556dcc70c800 session 0x556dcae11a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 108806144 unmapped: 53223424 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 235 ms_handle_reset con 0x556dcacd4c00 session 0x556dcae0de00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 235 heartbeat osd_stat(store_statfs(0x4fa44d000/0x0/0x4ffc00000, data 0x18c934b/0x1a20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.452179909s of 10.253409386s, submitted: 298
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 235 ms_handle_reset con 0x556dcae66000 session 0x556dca8a10e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1610836 data_alloc: 218103808 data_used: 823296
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 235 ms_handle_reset con 0x556dcb58d400 session 0x556dcae0e000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 235 ms_handle_reset con 0x556dcdf3a000 session 0x556dcc6ea000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 109166592 unmapped: 52862976 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 109166592 unmapped: 52862976 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 109166592 unmapped: 52862976 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 109166592 unmapped: 52862976 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 235 handle_osd_map epochs [236,236], i have 235, src has [1,236]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 109166592 unmapped: 52862976 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 236 handle_osd_map epochs [237,237], i have 236, src has [1,237]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704694 data_alloc: 218103808 data_used: 823296
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 109174784 unmapped: 52854784 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 237 heartbeat osd_stat(store_statfs(0x4fac14000/0x0/0x4ffc00000, data 0x1104846/0x1259000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 237 handle_osd_map epochs [238,238], i have 237, src has [1,238]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 238 heartbeat osd_stat(store_statfs(0x4fac10000/0x0/0x4ffc00000, data 0x1106417/0x125c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 109174784 unmapped: 52854784 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 238 handle_osd_map epochs [238,239], i have 238, src has [1,239]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 239 ms_handle_reset con 0x556dcd91ec00 session 0x556dcd7661e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 239 ms_handle_reset con 0x556dcacd4c00 session 0x556dcc899e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 109182976 unmapped: 52846592 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 109182976 unmapped: 52846592 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 239 ms_handle_reset con 0x556dcae66000 session 0x556dcd21ef00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 111124480 unmapped: 50905088 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 239 heartbeat osd_stat(store_statfs(0x4fac0d000/0x0/0x4ffc00000, data 0x1107f96/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 239 ms_handle_reset con 0x556dcb58d400 session 0x556dcd2490e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1805061 data_alloc: 218103808 data_used: 823296
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 111124480 unmapped: 50905088 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 111124480 unmapped: 50905088 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 111124480 unmapped: 50905088 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 111124480 unmapped: 50905088 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 239 handle_osd_map epochs [240,240], i have 239, src has [1,240]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.100010872s of 14.593472481s, submitted: 113
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 240 ms_handle_reset con 0x556dcdf3a000 session 0x556dccb7ad20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 50806784 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1809321 data_alloc: 218103808 data_used: 831488
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 240 heartbeat osd_stat(store_statfs(0x4f9ff6000/0x0/0x4ffc00000, data 0x1d1dadd/0x1e77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 50806784 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 50806784 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 45203456 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 240 handle_osd_map epochs [241,241], i have 240, src has [1,241]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 241 ms_handle_reset con 0x556dcefecc00 session 0x556dcd767860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 241 heartbeat osd_stat(store_statfs(0x4f9ff2000/0x0/0x4ffc00000, data 0x1d1f6bc/0x1e7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 45203456 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 242 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd847680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 242 ms_handle_reset con 0x556dcae66000 session 0x556dccf4c960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 242 ms_handle_reset con 0x556dcefedc00 session 0x556dcd8470e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 45178880 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1901524 data_alloc: 234881024 data_used: 12083200
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 45178880 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 242 ms_handle_reset con 0x556dcdf3a000 session 0x556dcb683860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 242 ms_handle_reset con 0x556dcefec400 session 0x556dcc6eb2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 242 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd8365a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 242 ms_handle_reset con 0x556dcae66000 session 0x556dcd837c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 242 handle_osd_map epochs [243,243], i have 242, src has [1,243]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 243 ms_handle_reset con 0x556dcdf3a000 session 0x556dcd8372c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 243 ms_handle_reset con 0x556dcefedc00 session 0x556dcd68be00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 243 ms_handle_reset con 0x556dcc70a000 session 0x556dcd68af00
Dec  2 06:38:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Dec  2 06:38:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1226494576' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 243 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd7674a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 243 ms_handle_reset con 0x556dcae66000 session 0x556dcd836960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 243 ms_handle_reset con 0x556dcefec800 session 0x556dcd837a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 243 ms_handle_reset con 0x556dcc70a000 session 0x556dcabe25a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 243 ms_handle_reset con 0x556dcb58d400 session 0x556dcd847e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 117547008 unmapped: 44482560 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 117547008 unmapped: 44482560 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 243 handle_osd_map epochs [243,244], i have 243, src has [1,244]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 244 ms_handle_reset con 0x556dcb58d400 session 0x556dcb9c7c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 244 ms_handle_reset con 0x556dcacd4c00 session 0x556dcb9c61e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 117547008 unmapped: 44482560 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 244 heartbeat osd_stat(store_statfs(0x4f946a000/0x0/0x4ffc00000, data 0x28a29b6/0x2a03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 44384256 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 244 ms_handle_reset con 0x556dcae66000 session 0x556dccf4b0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2002450 data_alloc: 234881024 data_used: 12095488
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 244 ms_handle_reset con 0x556dcc70a000 session 0x556dccf4a5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 44384256 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 244 ms_handle_reset con 0x556dcefec800 session 0x556dcd767680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.458865166s of 11.806189537s, submitted: 98
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 244 ms_handle_reset con 0x556dcefec800 session 0x556dcd7663c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 117997568 unmapped: 44032000 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 122175488 unmapped: 39854080 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 244 ms_handle_reset con 0x556dcb58d400 session 0x556dcd846000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135766016 unmapped: 26263552 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 245 heartbeat osd_stat(store_statfs(0x4f8cfd000/0x0/0x4ffc00000, data 0x30079c6/0x3169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 245 ms_handle_reset con 0x556dcc70a000 session 0x556dcae0e5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 245 ms_handle_reset con 0x556dcdf3a000 session 0x556dcc638b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135962624 unmapped: 26066944 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2166982 data_alloc: 234881024 data_used: 23748608
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135962624 unmapped: 26066944 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135962624 unmapped: 26066944 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 245 heartbeat osd_stat(store_statfs(0x4f8c5c000/0x0/0x4ffc00000, data 0x30ae461/0x3211000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 26034176 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 26034176 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 26034176 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2164962 data_alloc: 234881024 data_used: 23752704
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 245 heartbeat osd_stat(store_statfs(0x4f8c3b000/0x0/0x4ffc00000, data 0x30d0461/0x3233000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 26034176 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 26034176 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 245 heartbeat osd_stat(store_statfs(0x4f8c3b000/0x0/0x4ffc00000, data 0x30d0461/0x3233000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 245 heartbeat osd_stat(store_statfs(0x4f8c3b000/0x0/0x4ffc00000, data 0x30d0461/0x3233000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 26034176 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 26034176 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.859674454s of 13.375189781s, submitted: 135
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 246 ms_handle_reset con 0x556dcec61800 session 0x556dccf4da40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 26034176 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2169456 data_alloc: 234881024 data_used: 23769088
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 139042816 unmapped: 22986752 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 246 handle_osd_map epochs [246,247], i have 246, src has [1,247]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 247 ms_handle_reset con 0x556dcec61800 session 0x556dcc746000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 138559488 unmapped: 23470080 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 247 handle_osd_map epochs [248,248], i have 247, src has [1,248]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 248 ms_handle_reset con 0x556dcb58d400 session 0x556dcd249c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 22339584 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 248 heartbeat osd_stat(store_statfs(0x4f80db000/0x0/0x4ffc00000, data 0x3c1cbbd/0x3d83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 248 ms_handle_reset con 0x556dcc70a000 session 0x556dcae0e000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 22339584 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 248 ms_handle_reset con 0x556dcdf3a000 session 0x556dcd248960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 248 handle_osd_map epochs [249,249], i have 248, src has [1,249]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 139714560 unmapped: 22315008 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 249 ms_handle_reset con 0x556dcefec800 session 0x556dcc10c780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2285612 data_alloc: 234881024 data_used: 25120768
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 249 ms_handle_reset con 0x556dcefec800 session 0x556dcd767a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 139714560 unmapped: 22315008 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 250 ms_handle_reset con 0x556dcb58d400 session 0x556dca8a1860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 250 ms_handle_reset con 0x556dcc70a000 session 0x556dcd847a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 139714560 unmapped: 22315008 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 250 heartbeat osd_stat(store_statfs(0x4f80e0000/0x0/0x4ffc00000, data 0x3c24e8a/0x3d8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 139714560 unmapped: 22315008 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 139722752 unmapped: 22306816 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 251 ms_handle_reset con 0x556dcefedc00 session 0x556dcc122960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 139722752 unmapped: 22306816 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.089836121s of 10.173209190s, submitted: 214
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 251 ms_handle_reset con 0x556dcd91f000 session 0x556dcad621e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2290512 data_alloc: 234881024 data_used: 25133056
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 251 heartbeat osd_stat(store_statfs(0x4f80bc000/0x0/0x4ffc00000, data 0x3c47a5b/0x3db2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [0,0,0,0,1])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 131391488 unmapped: 30638080 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 251 ms_handle_reset con 0x556dcb58d400 session 0x556dcd847860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 251 ms_handle_reset con 0x556dcc70a000 session 0x556dcd26d860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 131399680 unmapped: 30629888 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 251 heartbeat osd_stat(store_statfs(0x4f9168000/0x0/0x4ffc00000, data 0x28219f9/0x298b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 131399680 unmapped: 30629888 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 131399680 unmapped: 30629888 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 252 ms_handle_reset con 0x556dcd91f000 session 0x556dcaccad20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 31293440 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 252 ms_handle_reset con 0x556dcefec800 session 0x556dcd70d0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 252 heartbeat osd_stat(store_statfs(0x4f9168000/0x0/0x4ffc00000, data 0x28219f9/0x298b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1970744 data_alloc: 234881024 data_used: 12967936
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 31277056 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 253 ms_handle_reset con 0x556dcefedc00 session 0x556dcd766b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 253 ms_handle_reset con 0x556dcacd4000 session 0x556dcb9c6780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 31277056 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 253 heartbeat osd_stat(store_statfs(0x4fa038000/0x0/0x4ffc00000, data 0x1cc7fd9/0x1e34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 253 ms_handle_reset con 0x556dcc70a000 session 0x556dcd7665a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 31277056 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 253 ms_handle_reset con 0x556dcefec800 session 0x556dccf4de00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 253 ms_handle_reset con 0x556dcdf3a000 session 0x556dca8a03c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 253 ms_handle_reset con 0x556dcec61800 session 0x556dcc6eb860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 253 heartbeat osd_stat(store_statfs(0x4fa038000/0x0/0x4ffc00000, data 0x1cc7fd9/0x1e34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 253 ms_handle_reset con 0x556dcec60000 session 0x556dcc6ea3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 253 handle_osd_map epochs [253,254], i have 253, src has [1,254]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 254 ms_handle_reset con 0x556dcc70a000 session 0x556dcd1a25a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 254 ms_handle_reset con 0x556dcd91f000 session 0x556dcb50d4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 254 ms_handle_reset con 0x556dcdf3a000 session 0x556dcb50d2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 254 ms_handle_reset con 0x556dcec61800 session 0x556dcb5263c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 254 ms_handle_reset con 0x556dcefec800 session 0x556dcaccad20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 254 ms_handle_reset con 0x556dcc70a000 session 0x556dcc122960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 31809536 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 255 ms_handle_reset con 0x556dcd91f000 session 0x556dcd767a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 255 heartbeat osd_stat(store_statfs(0x4f9680000/0x0/0x4ffc00000, data 0x267dbb9/0x27ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.333127022s of 10.037677765s, submitted: 99
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 31809536 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2064201 data_alloc: 234881024 data_used: 12967936
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 31809536 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 31809536 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 31809536 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 130228224 unmapped: 31801344 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 255 handle_osd_map epochs [255,256], i have 255, src has [1,256]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 130318336 unmapped: 31711232 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 heartbeat osd_stat(store_statfs(0x4f967b000/0x0/0x4ffc00000, data 0x26827b4/0x27f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2087819 data_alloc: 234881024 data_used: 15917056
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 133775360 unmapped: 28254208 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 133775360 unmapped: 28254208 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dced71c00 session 0x556dcba321e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 heartbeat osd_stat(store_statfs(0x4f9678000/0x0/0x4ffc00000, data 0x2684217/0x27f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcd3db800 session 0x556dcd7670e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcd3da000 session 0x556dcabe25a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcc70a000 session 0x556dcd249c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcd3db800 session 0x556dcd70c000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 134504448 unmapped: 27525120 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcd91f000 session 0x556dcc6390e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dced71c00 session 0x556dcb9c6b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcd3dbc00 session 0x556dcacd61e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcc70a000 session 0x556dcc1232c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 134504448 unmapped: 27525120 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 heartbeat osd_stat(store_statfs(0x4f8c42000/0x0/0x4ffc00000, data 0x30b9227/0x322c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 134504448 unmapped: 27525120 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dced70000 session 0x556dcad62780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2212656 data_alloc: 234881024 data_used: 21745664
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcd3db800 session 0x556dcd366000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 134504448 unmapped: 27525120 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.182168961s of 11.570444107s, submitted: 135
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcacd4c00 session 0x556dcb50d860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcae66000 session 0x556dcb57dc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcacd4c00 session 0x556dcc65b2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 heartbeat osd_stat(store_statfs(0x4f8c42000/0x0/0x4ffc00000, data 0x30b9227/0x322c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [0,0,0,1])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 129613824 unmapped: 32415744 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcc70a000 session 0x556dcd8361e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcd3db800 session 0x556dcd1a30e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dced70000 session 0x556dcd837e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 129638400 unmapped: 32391168 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 129638400 unmapped: 32391168 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 129449984 unmapped: 32579584 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2014818 data_alloc: 234881024 data_used: 17649664
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 129449984 unmapped: 32579584 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 heartbeat osd_stat(store_statfs(0x4fa350000/0x0/0x4ffc00000, data 0x19ac217/0x1b1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [1,1,0,0,5])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcd3dac00 session 0x556dccf4a3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135454720 unmapped: 26574848 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcacd4c00 session 0x556dccf4c5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 134586368 unmapped: 27443200 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 134594560 unmapped: 27435008 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 134594560 unmapped: 27435008 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2079282 data_alloc: 234881024 data_used: 18907136
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 134594560 unmapped: 27435008 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 heartbeat osd_stat(store_statfs(0x4f9ba6000/0x0/0x4ffc00000, data 0x21561a4/0x22c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 134594560 unmapped: 27435008 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 134594560 unmapped: 27435008 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.723993301s of 12.214981079s, submitted: 146
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 134889472 unmapped: 27140096 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 heartbeat osd_stat(store_statfs(0x4f9ba6000/0x0/0x4ffc00000, data 0x21561a4/0x22c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 134889472 unmapped: 27140096 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2086342 data_alloc: 234881024 data_used: 18944000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 139640832 unmapped: 22388736 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 heartbeat osd_stat(store_statfs(0x4f92ef000/0x0/0x4ffc00000, data 0x2a0f1a4/0x2b7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 140689408 unmapped: 21340160 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 140926976 unmapped: 21102592 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 140926976 unmapped: 21102592 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 140926976 unmapped: 21102592 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2155742 data_alloc: 234881024 data_used: 19697664
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 140926976 unmapped: 21102592 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 heartbeat osd_stat(store_statfs(0x4f80cf000/0x0/0x4ffc00000, data 0x2a8e1a4/0x2bfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 140926976 unmapped: 21102592 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcc70a000 session 0x556dcb50d680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcd3db800 session 0x556dcd70cd20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dced70000 session 0x556dcd70cf00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 140951552 unmapped: 21078016 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcd3da800 session 0x556dcd70d860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd21e780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 21061632 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.099256516s of 10.514301300s, submitted: 88
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcc70a000 session 0x556dcd21f860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcd3da800 session 0x556dcd21f4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 21061632 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcd3db800 session 0x556dcb50d680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2165633 data_alloc: 234881024 data_used: 19697664
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dced70000 session 0x556dccf4a3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 21061632 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd8361e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcc70a000 session 0x556dcc65b2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 21061632 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 heartbeat osd_stat(store_statfs(0x4f80a9000/0x0/0x4ffc00000, data 0x2ab1288/0x2c25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 21061632 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcd3da800 session 0x556dcad62780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcd3db800 session 0x556dcc1232c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 140271616 unmapped: 21757952 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcd3da400 session 0x556dcd70c000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd7670e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 heartbeat osd_stat(store_statfs(0x4f7c92000/0x0/0x4ffc00000, data 0x2abb1b4/0x2c2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 142458880 unmapped: 19570688 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2160758 data_alloc: 234881024 data_used: 19697664
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 142458880 unmapped: 19570688 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcd3da800 session 0x556dccf4de00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 142458880 unmapped: 19570688 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 142458880 unmapped: 19570688 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcc70a000 session 0x556dccf4b860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 heartbeat osd_stat(store_statfs(0x4f7c93000/0x0/0x4ffc00000, data 0x2abb1a4/0x2c2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 141410304 unmapped: 20619264 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.264255524s of 10.495730400s, submitted: 60
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcdf3a000 session 0x556dcd248960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135585792 unmapped: 26443776 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcd3db800 session 0x556dcae0f0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1961809 data_alloc: 234881024 data_used: 11046912
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 26451968 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 26451968 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 26451968 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 heartbeat osd_stat(store_statfs(0x4f8e14000/0x0/0x4ffc00000, data 0x193b195/0x1aaa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 26451968 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcd3db800 session 0x556dcae0e780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 ms_handle_reset con 0x556dcacd4c00 session 0x556dcae0e5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 26451968 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1962129 data_alloc: 234881024 data_used: 11055104
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 26451968 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 26451968 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 heartbeat osd_stat(store_statfs(0x4f8e14000/0x0/0x4ffc00000, data 0x193b195/0x1aaa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 256 handle_osd_map epochs [256,257], i have 256, src has [1,257]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 257 ms_handle_reset con 0x556dcd3da800 session 0x556dcd846b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135585792 unmapped: 26443776 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135585792 unmapped: 26443776 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.001266479s of 10.143853188s, submitted: 41
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 257 ms_handle_reset con 0x556dcec60c00 session 0x556dcaca94a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 258 ms_handle_reset con 0x556dcdf3a000 session 0x556dcba323c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135585792 unmapped: 26443776 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1971493 data_alloc: 234881024 data_used: 11071488
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 258 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd3661e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 258 handle_osd_map epochs [258,259], i have 258, src has [1,259]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 259 ms_handle_reset con 0x556dcd3da800 session 0x556dcaccb860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135610368 unmapped: 26419200 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 26386432 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 260 ms_handle_reset con 0x556dcec60c00 session 0x556dcacd70e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 260 ms_handle_reset con 0x556dcec60400 session 0x556dcd35e000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 260 heartbeat osd_stat(store_statfs(0x4f8e02000/0x0/0x4ffc00000, data 0x1946fa7/0x1abb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 260 handle_osd_map epochs [260,261], i have 260, src has [1,261]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 261 ms_handle_reset con 0x556dcec61800 session 0x556dcd767860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 261 ms_handle_reset con 0x556dcb3d0c00 session 0x556dcd35eb40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 26386432 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 261 ms_handle_reset con 0x556dcd3db800 session 0x556dccf4c000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 261 ms_handle_reset con 0x556dcec61800 session 0x556dcc638b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 261 handle_osd_map epochs [261,262], i have 261, src has [1,262]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 262 ms_handle_reset con 0x556dcacd4c00 session 0x556dccf4cd20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 26386432 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 262 heartbeat osd_stat(store_statfs(0x4f8dfc000/0x0/0x4ffc00000, data 0x194a90d/0x1ac2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135643136 unmapped: 26386432 heap: 162029568 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1983067 data_alloc: 234881024 data_used: 11075584
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 262 ms_handle_reset con 0x556dcc70a000 session 0x556dcb50de00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 148357120 unmapped: 30466048 heap: 178823168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 262 ms_handle_reset con 0x556dcd91f000 session 0x556dcd21ef00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 262 ms_handle_reset con 0x556dced71c00 session 0x556dcc123a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 140001280 unmapped: 38821888 heap: 178823168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 262 heartbeat osd_stat(store_statfs(0x4f45fd000/0x0/0x4ffc00000, data 0x614a8fd/0x62c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 262 ms_handle_reset con 0x556dcc70bc00 session 0x556dccf4da40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 262 ms_handle_reset con 0x556dcc70b400 session 0x556dca8a0000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 127705088 unmapped: 51118080 heap: 178823168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 127729664 unmapped: 51093504 heap: 178823168 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 262 heartbeat osd_stat(store_statfs(0x4f1574000/0x0/0x4ffc00000, data 0x91d295f/0x934a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 263 ms_handle_reset con 0x556dcc70a000 session 0x556dcd846000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.301546097s of 10.023126602s, submitted: 165
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 263 heartbeat osd_stat(store_statfs(0x4f1574000/0x0/0x4ffc00000, data 0x91d295f/0x934a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,1,2,1])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 148824064 unmapped: 34201600 heap: 183025664 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345478 data_alloc: 218103808 data_used: 937984
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 132079616 unmapped: 50946048 heap: 183025664 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 263 ms_handle_reset con 0x556dcc70b400 session 0x556dcb50dc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 263 heartbeat osd_stat(store_statfs(0x4ebd71000/0x0/0x4ffc00000, data 0xe9d43c2/0xeb4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 128024576 unmapped: 55001088 heap: 183025664 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 140664832 unmapped: 42360832 heap: 183025664 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 263 heartbeat osd_stat(store_statfs(0x4e7571000/0x0/0x4ffc00000, data 0x131d43c2/0x1334d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,2])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 263 ms_handle_reset con 0x556dcec60c00 session 0x556dcd766b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 128163840 unmapped: 54861824 heap: 183025664 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 263 ms_handle_reset con 0x556dcd3da800 session 0x556dcad67a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 128163840 unmapped: 54861824 heap: 183025664 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4101198 data_alloc: 218103808 data_used: 933888
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 263 ms_handle_reset con 0x556dcd91f000 session 0x556dcad63680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 263 handle_osd_map epochs [263,264], i have 263, src has [1,264]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 264 ms_handle_reset con 0x556dced71c00 session 0x556dcd35e780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 129228800 unmapped: 53796864 heap: 183025664 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 264 ms_handle_reset con 0x556dcc70bc00 session 0x556dcc65ad20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 264 ms_handle_reset con 0x556dced71c00 session 0x556dcc747e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 127713280 unmapped: 55312384 heap: 183025664 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 265 ms_handle_reset con 0x556dcc70b400 session 0x556dcad67e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e556e000/0x0/0x4ffc00000, data 0x151d5f41/0x15350000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 265 ms_handle_reset con 0x556dcd3da800 session 0x556dcb6825a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 265 ms_handle_reset con 0x556dcec60c00 session 0x556dcd68a5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 265 ms_handle_reset con 0x556dcd91f000 session 0x556dcd68a780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 127721472 unmapped: 55304192 heap: 183025664 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 265 ms_handle_reset con 0x556dcc70bc00 session 0x556dcd68ab40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 265 ms_handle_reset con 0x556dcc70b400 session 0x556dcd68a1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 265 ms_handle_reset con 0x556dcd3da800 session 0x556dccb7af00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 265 ms_handle_reset con 0x556dced71c00 session 0x556dccb7a5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 127754240 unmapped: 55271424 heap: 183025664 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 265 ms_handle_reset con 0x556dced71c00 session 0x556dccb7a960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 265 heartbeat osd_stat(store_statfs(0x4e5569000/0x0/0x4ffc00000, data 0x151d7aea/0x15354000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.572204113s of 10.070801735s, submitted: 132
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 265 ms_handle_reset con 0x556dcc70b400 session 0x556dccf4a5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 265 ms_handle_reset con 0x556dcc70bc00 session 0x556dccb7be00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 127803392 unmapped: 55222272 heap: 183025664 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Dec  2 06:38:47 np0005542249 ceph-mon[75081]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4114934 data_alloc: 218103808 data_used: 966656
Dec  2 06:38:47 np0005542249 ceph-mon[75081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2508260374' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 266 ms_handle_reset con 0x556dcd91f000 session 0x556dcad670e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 55173120 heap: 183025664 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 266 ms_handle_reset con 0x556dcacd4c00 session 0x556dcacd7680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 266 heartbeat osd_stat(store_statfs(0x4e5569000/0x0/0x4ffc00000, data 0x151d9601/0x15354000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 266 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd847a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 267 ms_handle_reset con 0x556dcd3da800 session 0x556dcd68b0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 55132160 heap: 183025664 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 267 ms_handle_reset con 0x556dcc70b400 session 0x556dca904960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 55132160 heap: 183025664 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 267 ms_handle_reset con 0x556dcc70bc00 session 0x556dca8a1c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 267 handle_osd_map epochs [267,268], i have 267, src has [1,268]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 268 handle_osd_map epochs [268,268], i have 268, src has [1,268]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 268 ms_handle_reset con 0x556dced71c00 session 0x556dcc638f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 268 ms_handle_reset con 0x556dccd24000 session 0x556dccf4dc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 127942656 unmapped: 55083008 heap: 183025664 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 268 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd68af00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 269 ms_handle_reset con 0x556dcc70b400 session 0x556dcd68ad20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 269 ms_handle_reset con 0x556dcd91f000 session 0x556dcd847680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 269 heartbeat osd_stat(store_statfs(0x4e5564000/0x0/0x4ffc00000, data 0x151dcee5/0x15359000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 127950848 unmapped: 55074816 heap: 183025664 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4127501 data_alloc: 218103808 data_used: 974848
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 270 ms_handle_reset con 0x556dcc70bc00 session 0x556dcad62f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 136380416 unmapped: 46645248 heap: 183025664 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 271 ms_handle_reset con 0x556dcd3da800 session 0x556dcc7465a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 136880128 unmapped: 58753024 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 272 ms_handle_reset con 0x556dcc70b400 session 0x556dcd70c5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 141402112 unmapped: 54231040 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 272 ms_handle_reset con 0x556dcd91f000 session 0x556dcc6ea960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 272 ms_handle_reset con 0x556dccd24000 session 0x556dcb50d4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 129204224 unmapped: 66428928 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 272 handle_osd_map epochs [272,273], i have 272, src has [1,273]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.405568123s of 10.038644791s, submitted: 133
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 273 ms_handle_reset con 0x556dccd24800 session 0x556dca88f0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 273 ms_handle_reset con 0x556dccd24400 session 0x556dcabe25a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 129695744 unmapped: 65937408 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5549342 data_alloc: 218103808 data_used: 999424
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 273 heartbeat osd_stat(store_statfs(0x4d914f000/0x0/0x4ffc00000, data 0x215e5b82/0x2176e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 138215424 unmapped: 57417728 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 273 ms_handle_reset con 0x556dcc70b400 session 0x556dccf4dc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 134332416 unmapped: 61300736 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 273 ms_handle_reset con 0x556dccd24000 session 0x556dcd68b2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 273 heartbeat osd_stat(store_statfs(0x4d5d51000/0x0/0x4ffc00000, data 0x249e5b20/0x24b6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 138977280 unmapped: 56655872 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 273 ms_handle_reset con 0x556dcd91f000 session 0x556dcd847a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 274 ms_handle_reset con 0x556dcd3da800 session 0x556dcba33e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 60456960 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 274 handle_osd_map epochs [275,275], i have 275, src has [1,275]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 275 ms_handle_reset con 0x556dcd3da800 session 0x556dcd21fc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 275 ms_handle_reset con 0x556dccd24c00 session 0x556dcad670e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 275 ms_handle_reset con 0x556dcacd4c00 session 0x556dcacca960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 275 ms_handle_reset con 0x556dcc70bc00 session 0x556dcc7470e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 131145728 unmapped: 64487424 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 275 ms_handle_reset con 0x556dcc70b400 session 0x556dcb682d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6618809 data_alloc: 218103808 data_used: 1015808
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 275 ms_handle_reset con 0x556dcacd4c00 session 0x556dccb7ad20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 131170304 unmapped: 64462848 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 275 ms_handle_reset con 0x556dcc70bc00 session 0x556dcc6390e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 275 ms_handle_reset con 0x556dcd3da800 session 0x556dcaccad20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 275 heartbeat osd_stat(store_statfs(0x4cf54e000/0x0/0x4ffc00000, data 0x2b1e914a/0x2b36f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 275 ms_handle_reset con 0x556dccd24000 session 0x556dccb7ba40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 275 ms_handle_reset con 0x556dccd24c00 session 0x556dccf4be00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 131170304 unmapped: 64462848 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 276 ms_handle_reset con 0x556dccd24c00 session 0x556dcaca9e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 131211264 unmapped: 64421888 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 277 ms_handle_reset con 0x556dcacd4c00 session 0x556dcabe2f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 131219456 unmapped: 64413696 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.286699295s of 10.309048653s, submitted: 130
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 64380928 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 278 ms_handle_reset con 0x556dcc70bc00 session 0x556dccf4c3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 278 ms_handle_reset con 0x556dccd24000 session 0x556dcad674a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6625423 data_alloc: 218103808 data_used: 1015808
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 64380928 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 278 heartbeat osd_stat(store_statfs(0x4cf547000/0x0/0x4ffc00000, data 0x2b1ee639/0x2b377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 278 ms_handle_reset con 0x556dcd3da800 session 0x556dcc638f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 278 ms_handle_reset con 0x556dcacd4c00 session 0x556dcae114a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 131260416 unmapped: 64372736 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 131260416 unmapped: 64372736 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 278 ms_handle_reset con 0x556dcc70bc00 session 0x556dcc746000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 131260416 unmapped: 64372736 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 278 heartbeat osd_stat(store_statfs(0x4cf547000/0x0/0x4ffc00000, data 0x2b1ee639/0x2b377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 279 ms_handle_reset con 0x556dccd24000 session 0x556dcd3fcd20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 131284992 unmapped: 64348160 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6629917 data_alloc: 218103808 data_used: 1032192
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 131293184 unmapped: 64339968 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 279 ms_handle_reset con 0x556dccd24c00 session 0x556dcd68be00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 279 ms_handle_reset con 0x556dccd24400 session 0x556dcad68d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 280 ms_handle_reset con 0x556dccd24400 session 0x556dccb7b0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 280 ms_handle_reset con 0x556dcd91f000 session 0x556dccf4a000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 280 ms_handle_reset con 0x556dcacd4c00 session 0x556dcbab0000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 132186112 unmapped: 63447040 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 280 ms_handle_reset con 0x556dcc70bc00 session 0x556dcbab0f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 132202496 unmapped: 63430656 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 280 ms_handle_reset con 0x556dccd24000 session 0x556dcd767e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 281 ms_handle_reset con 0x556dcc70bc00 session 0x556dcd1a2960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 132227072 unmapped: 63406080 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 282 ms_handle_reset con 0x556dccd24400 session 0x556dcd26cd20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 132915200 unmapped: 62717952 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 282 ms_handle_reset con 0x556dce4f2000 session 0x556dcc7470e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.076129913s of 10.389759064s, submitted: 91
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 282 ms_handle_reset con 0x556dcd377800 session 0x556dcabe25a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6733661 data_alloc: 218103808 data_used: 1056768
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 282 heartbeat osd_stat(store_statfs(0x4cea54000/0x0/0x4ffc00000, data 0x2bcda44b/0x2be69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 282 ms_handle_reset con 0x556dcd91f000 session 0x556dcaccbc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 283 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd249c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 132284416 unmapped: 63348736 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 132284416 unmapped: 63348736 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 132284416 unmapped: 63348736 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 283 heartbeat osd_stat(store_statfs(0x4cea50000/0x0/0x4ffc00000, data 0x2bcdbfe4/0x2be6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 132284416 unmapped: 63348736 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 132284416 unmapped: 63348736 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 283 heartbeat osd_stat(store_statfs(0x4cea50000/0x0/0x4ffc00000, data 0x2bcdbfe4/0x2be6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6737467 data_alloc: 218103808 data_used: 1060864
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 132292608 unmapped: 63340544 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 132292608 unmapped: 63340544 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 283 heartbeat osd_stat(store_statfs(0x4cea50000/0x0/0x4ffc00000, data 0x2bcdbfe4/0x2be6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 283 heartbeat osd_stat(store_statfs(0x4cea50000/0x0/0x4ffc00000, data 0x2bcdbfe4/0x2be6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 132292608 unmapped: 63340544 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 132300800 unmapped: 63332352 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 132300800 unmapped: 63332352 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6739241 data_alloc: 218103808 data_used: 1060864
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 63356928 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dcc70bc00 session 0x556dcc6ea960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dccd24400 session 0x556dcd70c5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dcd377800 session 0x556dcc7465a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 heartbeat osd_stat(store_statfs(0x4cea4e000/0x0/0x4ffc00000, data 0x2bcdda47/0x2be6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dcd91f000 session 0x556dcad62f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.231557846s of 11.253371239s, submitted: 16
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dcd91f000 session 0x556dcd847680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dcacd4c00 session 0x556dcc6385a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 62193664 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 62193664 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ce204000/0x0/0x4ffc00000, data 0x2c527aa9/0x2c6ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 133447680 unmapped: 62185472 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dcc70bc00 session 0x556dcd766000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 133447680 unmapped: 62185472 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6803072 data_alloc: 218103808 data_used: 1060864
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dccd24400 session 0x556dcbab0960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 62177280 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 133447680 unmapped: 62185472 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 133447680 unmapped: 62185472 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dce233000 session 0x556dcd8470e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd846f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dcc70bc00 session 0x556dcc10dc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dcd91f000 session 0x556dccb7af00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 138346496 unmapped: 57286656 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dceaeac00 session 0x556dcd68ab40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dce4f3c00 session 0x556dcb6825a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dcacd4c00 session 0x556dcad67e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dcc70bc00 session 0x556dcc747e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dcd91f000 session 0x556dcc65ad20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ce203000/0x0/0x4ffc00000, data 0x2c527ab9/0x2c6bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135487488 unmapped: 60145664 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6918001 data_alloc: 234881024 data_used: 9756672
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135487488 unmapped: 60145664 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135495680 unmapped: 60137472 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dceaeac00 session 0x556dcad63680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135495680 unmapped: 60137472 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dcb940c00 session 0x556dcd766b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135495680 unmapped: 60137472 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dcacd4c00 session 0x556dcb50dc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dcc70bc00 session 0x556dcd846000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135495680 unmapped: 60137472 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dceaeac00 session 0x556dccf4cf00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 heartbeat osd_stat(store_statfs(0x4cdc31000/0x0/0x4ffc00000, data 0x2caf8ac9/0x2cc8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6918001 data_alloc: 234881024 data_used: 9756672
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dce232800 session 0x556dcae0fe00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135495680 unmapped: 60137472 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.235554695s of 15.493440628s, submitted: 48
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dca101c00 session 0x556dcc6381e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135495680 unmapped: 60137472 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 135659520 unmapped: 59973632 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 heartbeat osd_stat(store_statfs(0x4cdc30000/0x0/0x4ffc00000, data 0x2caf8b2b/0x2cc8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 138330112 unmapped: 57303040 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dca101c00 session 0x556dcd70da40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd21e1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dcc70bc00 session 0x556dca9045a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 139288576 unmapped: 56344576 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dce232800 session 0x556dcd847c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7001886 data_alloc: 234881024 data_used: 14626816
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 146989056 unmapped: 48644096 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ccab8000/0x0/0x4ffc00000, data 0x2e2d7ac9/0x2de02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 148357120 unmapped: 47276032 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 ms_handle_reset con 0x556dcc680800 session 0x556dcd7674a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 148357120 unmapped: 47276032 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 284 handle_osd_map epochs [284,285], i have 284, src has [1,285]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 285 ms_handle_reset con 0x556dcacd4c00 session 0x556dcbab0f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 149741568 unmapped: 45891584 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 285 ms_handle_reset con 0x556dca101c00 session 0x556dcba334a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 286 ms_handle_reset con 0x556dceaeac00 session 0x556dcacd61e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 45883392 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 286 ms_handle_reset con 0x556dcc680800 session 0x556dcd1a30e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7285806 data_alloc: 234881024 data_used: 15933440
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 286 heartbeat osd_stat(store_statfs(0x4cbcc4000/0x0/0x4ffc00000, data 0x2f1df1d3/0x2ebfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 149757952 unmapped: 45875200 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.446450233s of 10.082459450s, submitted: 181
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 149766144 unmapped: 45867008 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 286 ms_handle_reset con 0x556dce232800 session 0x556dcad67a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 286 ms_handle_reset con 0x556dca101c00 session 0x556dcd35e780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 149323776 unmapped: 46309376 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 287 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd26d860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 287 ms_handle_reset con 0x556dcc680800 session 0x556dca88f680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 287 ms_handle_reset con 0x556dcc70bc00 session 0x556dcb50da40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 149389312 unmapped: 46243840 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 287 heartbeat osd_stat(store_statfs(0x4cb1a4000/0x0/0x4ffc00000, data 0x2fcfbdb2/0x2f719000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,2,4,1])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 151068672 unmapped: 44564480 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7411361 data_alloc: 234881024 data_used: 16920576
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 151265280 unmapped: 44367872 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 151273472 unmapped: 44359680 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 151273472 unmapped: 44359680 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 287 heartbeat osd_stat(store_statfs(0x4cae50000/0x0/0x4ffc00000, data 0x30046db2/0x2fa64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 151306240 unmapped: 44326912 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 151306240 unmapped: 44326912 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7425237 data_alloc: 234881024 data_used: 17006592
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 151314432 unmapped: 44318720 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 151314432 unmapped: 44318720 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 151314432 unmapped: 44318720 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 287 heartbeat osd_stat(store_statfs(0x4cae50000/0x0/0x4ffc00000, data 0x30046db2/0x2fa64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 151355392 unmapped: 44277760 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 151355392 unmapped: 44277760 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7425413 data_alloc: 234881024 data_used: 17010688
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 151355392 unmapped: 44277760 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 287 heartbeat osd_stat(store_statfs(0x4cae50000/0x0/0x4ffc00000, data 0x30046db2/0x2fa64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 151388160 unmapped: 44244992 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.228307724s of 15.635784149s, submitted: 87
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 287 ms_handle_reset con 0x556dce232400 session 0x556dcbab1a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 150724608 unmapped: 44908544 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 287 ms_handle_reset con 0x556dca101c00 session 0x556dcc6390e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 287 ms_handle_reset con 0x556dccd24400 session 0x556dcabe3680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 287 ms_handle_reset con 0x556dcd377800 session 0x556dca0945a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 287 ms_handle_reset con 0x556dce4f2000 session 0x556dcc65a000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 287 heartbeat osd_stat(store_statfs(0x4cae57000/0x0/0x4ffc00000, data 0x30048db2/0x2fa66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 287 ms_handle_reset con 0x556dceaeac00 session 0x556dcd249680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 150732800 unmapped: 44900352 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 287 ms_handle_reset con 0x556dca101c00 session 0x556dcd3fd4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 287 ms_handle_reset con 0x556dceaeac00 session 0x556dcabe2000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 287 ms_handle_reset con 0x556dccd24400 session 0x556dcd1a34a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 148348928 unmapped: 47284224 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7305259 data_alloc: 218103808 data_used: 8331264
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 288 ms_handle_reset con 0x556dcd377800 session 0x556dcc8b52c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 288 heartbeat osd_stat(store_statfs(0x4cb67d000/0x0/0x4ffc00000, data 0x2f822d60/0x2f240000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 148348928 unmapped: 47284224 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 152559616 unmapped: 43073536 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 288 ms_handle_reset con 0x556dcc70bc00 session 0x556dcd847680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 289 ms_handle_reset con 0x556dccd24400 session 0x556dccf4a1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 289 ms_handle_reset con 0x556dca101c00 session 0x556dcc6eba40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 152543232 unmapped: 43089920 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 290 ms_handle_reset con 0x556dcd377800 session 0x556dcad66d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 290 ms_handle_reset con 0x556dcc70bc00 session 0x556dcd846f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 290 ms_handle_reset con 0x556dcc680800 session 0x556dcd249c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 290 ms_handle_reset con 0x556dceaeac00 session 0x556dcd837680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 290 heartbeat osd_stat(store_statfs(0x4cb44c000/0x0/0x4ffc00000, data 0x2fd941fd/0x2f470000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,1])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 166076416 unmapped: 29556736 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 290 ms_handle_reset con 0x556dca101c00 session 0x556dca8a1860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 291 ms_handle_reset con 0x556dcc70bc00 session 0x556dcd68a780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157868032 unmapped: 37765120 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7651557 data_alloc: 234881024 data_used: 23494656
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 292 ms_handle_reset con 0x556dccd24400 session 0x556dccf4c3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 292 ms_handle_reset con 0x556dcd377800 session 0x556dcae0f4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157876224 unmapped: 37756928 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 292 ms_handle_reset con 0x556dca101c00 session 0x556dca8a0960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 292 ms_handle_reset con 0x556dcc70bc00 session 0x556dca8a10e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 292 heartbeat osd_stat(store_statfs(0x4c89e5000/0x0/0x4ffc00000, data 0x31659aab/0x30d37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157884416 unmapped: 37748736 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 292 ms_handle_reset con 0x556dccd24400 session 0x556dcd248000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157884416 unmapped: 37748736 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.974592209s of 10.629494667s, submitted: 134
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 292 ms_handle_reset con 0x556dcc680c00 session 0x556dca905c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 292 ms_handle_reset con 0x556dccc00000 session 0x556dccf4a1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 292 heartbeat osd_stat(store_statfs(0x4c89e6000/0x0/0x4ffc00000, data 0x31659b0d/0x30d38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157900800 unmapped: 37732352 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 293 ms_handle_reset con 0x556dceaeac00 session 0x556dcc6eba40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157949952 unmapped: 37683200 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7593429 data_alloc: 234881024 data_used: 23502848
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 293 ms_handle_reset con 0x556dccc00000 session 0x556dcd1a34a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157958144 unmapped: 37675008 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157958144 unmapped: 37675008 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 158482432 unmapped: 37150720 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 293 handle_osd_map epochs [293,294], i have 293, src has [1,294]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 294 ms_handle_reset con 0x556dcc680c00 session 0x556dcabe3680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 294 ms_handle_reset con 0x556dca101c00 session 0x556dcd3fd4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 294 ms_handle_reset con 0x556dccd24400 session 0x556dcbab1a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 294 ms_handle_reset con 0x556dcc70bc00 session 0x556dcc6390e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 294 ms_handle_reset con 0x556dca101c00 session 0x556dcd847c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 294 ms_handle_reset con 0x556dcc680c00 session 0x556dcae0fe00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 294 ms_handle_reset con 0x556dccc00000 session 0x556dcd1a30e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 294 heartbeat osd_stat(store_statfs(0x4c874a000/0x0/0x4ffc00000, data 0x318f0383/0x30fd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 159621120 unmapped: 36012032 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 295 ms_handle_reset con 0x556dcd365000 session 0x556dcd70d0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 295 ms_handle_reset con 0x556dceaeac00 session 0x556dcd767a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 159506432 unmapped: 36126720 heap: 195633152 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7710780 data_alloc: 234881024 data_used: 23703552
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 184795136 unmapped: 23437312 heap: 208232448 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 295 heartbeat osd_stat(store_statfs(0x4c86b6000/0x0/0x4ffc00000, data 0x3197be9c/0x31060000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [0,0,0,0,0,1,1,3,3])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 295 heartbeat osd_stat(store_statfs(0x4c66b6000/0x0/0x4ffc00000, data 0x3397be9c/0x33060000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,8])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 201957376 unmapped: 27271168 heap: 229228544 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.254443645s of 10.001265526s, submitted: 142
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 68018176 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 295 ms_handle_reset con 0x556dcc680c00 session 0x556dcc10c1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 160202752 unmapped: 73228288 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 168960000 unmapped: 64471040 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8827882 data_alloc: 234881024 data_used: 23724032
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 296 ms_handle_reset con 0x556dce4f2000 session 0x556dcc747e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 296 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd21f0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 162996224 unmapped: 70434816 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 296 heartbeat osd_stat(store_statfs(0x4be675000/0x0/0x4ffc00000, data 0x3b9c38ff/0x3b0a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 182435840 unmapped: 50995200 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178405376 unmapped: 55025664 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 296 ms_handle_reset con 0x556dce233c00 session 0x556dcc746b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 182722560 unmapped: 50708480 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178790400 unmapped: 54640640 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 9549993 data_alloc: 251658240 data_used: 37916672
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 296 heartbeat osd_stat(store_statfs(0x4b8e9a000/0x0/0x4ffc00000, data 0x4119f8ef/0x40884000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 174710784 unmapped: 58720256 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179085312 unmapped: 54345728 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 296 heartbeat osd_stat(store_statfs(0x4b6e8a000/0x0/0x4ffc00000, data 0x42d9f8ef/0x42484000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.631648064s of 10.010790825s, submitted: 81
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 189865984 unmapped: 43565056 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 181805056 unmapped: 51625984 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 296 ms_handle_reset con 0x556dca101c00 session 0x556dcae10780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 296 ms_handle_reset con 0x556dce4f2000 session 0x556dca88f0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 296 ms_handle_reset con 0x556dcd365000 session 0x556dcb50da40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 296 ms_handle_reset con 0x556dceaeac00 session 0x556dcd248d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 296 ms_handle_reset con 0x556dcdfeac00 session 0x556dcba32d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177627136 unmapped: 55803904 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 297 ms_handle_reset con 0x556dcc680c00 session 0x556dcc746f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 297 ms_handle_reset con 0x556dce233c00 session 0x556dcc8992c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 10143588 data_alloc: 251658240 data_used: 37928960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 297 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd6f1e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 297 ms_handle_reset con 0x556dcdfeac00 session 0x556dcd847e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 297 ms_handle_reset con 0x556dca101c00 session 0x556dcad62780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 174522368 unmapped: 58908672 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 297 ms_handle_reset con 0x556dca101c00 session 0x556dccf4a780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 297 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd21ed20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 171286528 unmapped: 62144512 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 297 ms_handle_reset con 0x556dcc680c00 session 0x556dcd249680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 298 ms_handle_reset con 0x556dcdfeac00 session 0x556dccf4bc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 298 heartbeat osd_stat(store_statfs(0x4ca8d5000/0x0/0x4ffc00000, data 0x2f00e592/0x2ea39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 171343872 unmapped: 62087168 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 298 ms_handle_reset con 0x556dce233c00 session 0x556dcad670e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 298 ms_handle_reset con 0x556dce233c00 session 0x556dcc899c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 298 heartbeat osd_stat(store_statfs(0x4ca8d1000/0x0/0x4ffc00000, data 0x2f01018f/0x2ea3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 171679744 unmapped: 61751296 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 172490752 unmapped: 60940288 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7552820 data_alloc: 251658240 data_used: 29327360
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 298 handle_osd_map epochs [298,299], i have 298, src has [1,299]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 172515328 unmapped: 60915712 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 300 ms_handle_reset con 0x556dca101c00 session 0x556dcbab0f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 300 ms_handle_reset con 0x556dcacd4c00 session 0x556dcc8b52c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 172548096 unmapped: 60882944 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 172548096 unmapped: 60882944 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.956796646s of 10.332565308s, submitted: 285
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 301 handle_osd_map epochs [301,302], i have 301, src has [1,302]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 302 handle_osd_map epochs [302,302], i have 302, src has [1,302]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 302 heartbeat osd_stat(store_statfs(0x4cbd4b000/0x0/0x4ffc00000, data 0x2d418927/0x2d5c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 302 ms_handle_reset con 0x556dcc680c00 session 0x556dcc8985a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 170811392 unmapped: 62619648 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 302 ms_handle_reset con 0x556dcdfeac00 session 0x556dcb50d680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 168075264 unmapped: 65355776 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4973418 data_alloc: 251658240 data_used: 27967488
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 168771584 unmapped: 64659456 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 303 ms_handle_reset con 0x556dcdfeac00 session 0x556dcacd7c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 167837696 unmapped: 65593344 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 303 ms_handle_reset con 0x556dca101c00 session 0x556dcc8b5e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 167854080 unmapped: 65576960 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 303 heartbeat osd_stat(store_statfs(0x4f68d4000/0x0/0x4ffc00000, data 0x288bdc6/0x2a39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 168902656 unmapped: 64528384 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 168902656 unmapped: 64528384 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2748300 data_alloc: 251658240 data_used: 27963392
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 168902656 unmapped: 64528384 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 304 heartbeat osd_stat(store_statfs(0x4f68d0000/0x0/0x4ffc00000, data 0x288d885/0x2a3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 168902656 unmapped: 64528384 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 168902656 unmapped: 64528384 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.634363174s of 10.447886467s, submitted: 274
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 169959424 unmapped: 63471616 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 306 ms_handle_reset con 0x556dcacd4c00 session 0x556dcb50c3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 169992192 unmapped: 63438848 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2763643 data_alloc: 251658240 data_used: 29700096
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 306 ms_handle_reset con 0x556dcc680c00 session 0x556dcc7472c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 169992192 unmapped: 63438848 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 169992192 unmapped: 63438848 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 306 heartbeat osd_stat(store_statfs(0x4f68c8000/0x0/0x4ffc00000, data 0x2891eb5/0x2a44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 306 ms_handle_reset con 0x556dcd365000 session 0x556dcaccb860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 306 ms_handle_reset con 0x556dca101c00 session 0x556dcc898000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 306 ms_handle_reset con 0x556dcd365000 session 0x556dcae0c000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 306 ms_handle_reset con 0x556dcacd4c00 session 0x556dcba332c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 307 ms_handle_reset con 0x556dce233c00 session 0x556dcd35f680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 170041344 unmapped: 63389696 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 170057728 unmapped: 63373312 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 307 ms_handle_reset con 0x556dcdfeac00 session 0x556dcc639680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 307 ms_handle_reset con 0x556dcc680c00 session 0x556dcad67a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 307 ms_handle_reset con 0x556dca101c00 session 0x556dcab663c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 307 heartbeat osd_stat(store_statfs(0x4f68c5000/0x0/0x4ffc00000, data 0x2893a52/0x2a49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 170090496 unmapped: 63340544 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 307 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd21fc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2769849 data_alloc: 251658240 data_used: 29720576
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 170131456 unmapped: 63299584 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 308 ms_handle_reset con 0x556dcd365000 session 0x556dcb50cd20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 308 ms_handle_reset con 0x556dce233c00 session 0x556dcaccbc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 170188800 unmapped: 63242240 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 308 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd68a1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 308 ms_handle_reset con 0x556dca101c00 session 0x556dcd26cd20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 170221568 unmapped: 63209472 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 308 heartbeat osd_stat(store_statfs(0x4f68c3000/0x0/0x4ffc00000, data 0x28957b7/0x2a4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 170221568 unmapped: 63209472 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 170287104 unmapped: 63143936 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.199175835s of 11.471850395s, submitted: 93
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 309 heartbeat osd_stat(store_statfs(0x4f68c3000/0x0/0x4ffc00000, data 0x28957b7/0x2a4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2783672 data_alloc: 251658240 data_used: 29986816
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 309 ms_handle_reset con 0x556dcd365000 session 0x556dcd21e1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 170369024 unmapped: 63062016 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 309 ms_handle_reset con 0x556dcc680c00 session 0x556dccf4c960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 309 ms_handle_reset con 0x556dce4f2000 session 0x556dcae114a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 309 ms_handle_reset con 0x556dce4f2000 session 0x556dcd847860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 309 ms_handle_reset con 0x556dca101c00 session 0x556dcd366d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 309 heartbeat osd_stat(store_statfs(0x4f68bb000/0x0/0x4ffc00000, data 0x28993c2/0x2a52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 170385408 unmapped: 63045632 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 309 ms_handle_reset con 0x556dcc70bc00 session 0x556dcd68a3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 309 ms_handle_reset con 0x556dccc00000 session 0x556dccf4a5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 309 ms_handle_reset con 0x556dcacd4c00 session 0x556dcae0e000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 309 ms_handle_reset con 0x556dcacd4c00 session 0x556dcb9c7c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 170459136 unmapped: 62971904 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 309 handle_osd_map epochs [309,310], i have 309, src has [1,310]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 310 ms_handle_reset con 0x556dccc00000 session 0x556dcbab1a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 310 ms_handle_reset con 0x556dcc70bc00 session 0x556dcc10c5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 311 ms_handle_reset con 0x556dca101c00 session 0x556dcabe30e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 170573824 unmapped: 62857216 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 311 ms_handle_reset con 0x556dce4f2000 session 0x556dcc6ea5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 311 ms_handle_reset con 0x556dca101c00 session 0x556dcad62f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 311 heartbeat osd_stat(store_statfs(0x4f6949000/0x0/0x4ffc00000, data 0x2807aae/0x29c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 311 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd248d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 170573824 unmapped: 62857216 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 311 ms_handle_reset con 0x556dcc70bc00 session 0x556dcd1a34a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 311 ms_handle_reset con 0x556dccc00000 session 0x556dcc6eaf00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 311 ms_handle_reset con 0x556dcc680c00 session 0x556dcc8b5c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2775338 data_alloc: 251658240 data_used: 29638656
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 311 ms_handle_reset con 0x556dca101c00 session 0x556dcc6ea3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 311 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd846f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 311 ms_handle_reset con 0x556dcc70bc00 session 0x556dcad62000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 170590208 unmapped: 62840832 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 312 ms_handle_reset con 0x556dccc00000 session 0x556dcd68ba40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 312 ms_handle_reset con 0x556dcd365000 session 0x556dcd837e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 312 ms_handle_reset con 0x556dcc680c00 session 0x556dcc65a3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 312 ms_handle_reset con 0x556dca101c00 session 0x556dcae105a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 170606592 unmapped: 62824448 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 312 heartbeat osd_stat(store_statfs(0x4f694a000/0x0/0x4ffc00000, data 0x2807b20/0x29c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 312 ms_handle_reset con 0x556dcacd4c00 session 0x556dccb7be00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 312 ms_handle_reset con 0x556dcc70bc00 session 0x556dcd21ef00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 312 ms_handle_reset con 0x556dccc00000 session 0x556dcc899e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 170614784 unmapped: 62816256 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 313 handle_osd_map epochs [313,313], i have 313, src has [1,313]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 313 ms_handle_reset con 0x556dcacd4c00 session 0x556dcaca9860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 313 ms_handle_reset con 0x556dceaeac00 session 0x556dca0954a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 313 ms_handle_reset con 0x556dcc70bc00 session 0x556dcc10d2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 313 ms_handle_reset con 0x556dccbe4400 session 0x556dcaca81e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 313 ms_handle_reset con 0x556dce64e800 session 0x556dccf4a3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 160612352 unmapped: 72818688 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 314 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd837a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 314 ms_handle_reset con 0x556dcc680c00 session 0x556dcc7470e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 314 ms_handle_reset con 0x556dca101c00 session 0x556dcacca1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 314 ms_handle_reset con 0x556dcc70bc00 session 0x556dcb5265a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.308519363s of 10.019262314s, submitted: 210
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 160636928 unmapped: 72794112 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 314 ms_handle_reset con 0x556dccbe4400 session 0x556dcd3670e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 314 ms_handle_reset con 0x556dccbe4400 session 0x556dcab66f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2462439 data_alloc: 234881024 data_used: 10792960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 314 ms_handle_reset con 0x556dca101c00 session 0x556dcc6390e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 160636928 unmapped: 72794112 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 314 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd70c3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 314 ms_handle_reset con 0x556dcd91f000 session 0x556dca8a0000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 314 ms_handle_reset con 0x556dcc680c00 session 0x556dcabe2f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 314 ms_handle_reset con 0x556dca101c00 session 0x556dccb7ab40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 314 ms_handle_reset con 0x556dcc680c00 session 0x556dccf4d680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155049984 unmapped: 78381056 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 314 heartbeat osd_stat(store_statfs(0x4f8b2a000/0x0/0x4ffc00000, data 0x62cf01/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 314 ms_handle_reset con 0x556dcacd4c00 session 0x556dcaca8960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 314 ms_handle_reset con 0x556dccbe4400 session 0x556dcb527a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155058176 unmapped: 78372864 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 314 handle_osd_map epochs [314,315], i have 314, src has [1,315]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 315 handle_osd_map epochs [315,315], i have 315, src has [1,315]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 315 ms_handle_reset con 0x556dcd91f000 session 0x556dcc10c1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 315 heartbeat osd_stat(store_statfs(0x4f8b2a000/0x0/0x4ffc00000, data 0x62cf01/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155058176 unmapped: 78372864 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 317 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd846000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 317 ms_handle_reset con 0x556dca101c00 session 0x556dcb682780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 317 ms_handle_reset con 0x556dccbe4400 session 0x556dcba32d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 317 ms_handle_reset con 0x556dcc680c00 session 0x556dcd1a25a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155058176 unmapped: 78372864 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 317 ms_handle_reset con 0x556dceaeac00 session 0x556dca8a0960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 317 ms_handle_reset con 0x556dcc70bc00 session 0x556dcd3661e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2345862 data_alloc: 218103808 data_used: 1232896
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155058176 unmapped: 78372864 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 317 handle_osd_map epochs [317,318], i have 317, src has [1,318]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 318 ms_handle_reset con 0x556dceaeac00 session 0x556dcd35ef00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155058176 unmapped: 78372864 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 318 ms_handle_reset con 0x556dcacd4c00 session 0x556dcad665a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 318 ms_handle_reset con 0x556dca101c00 session 0x556dccf4a1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 318 heartbeat osd_stat(store_statfs(0x4f8b1c000/0x0/0x4ffc00000, data 0x633da7/0x7f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 318 heartbeat osd_stat(store_statfs(0x4f8b1c000/0x0/0x4ffc00000, data 0x633da7/0x7f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155107328 unmapped: 78323712 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 318 ms_handle_reset con 0x556dcc680c00 session 0x556dcabe3680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 319 ms_handle_reset con 0x556dca101c00 session 0x556dcc899e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155107328 unmapped: 78323712 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 319 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd68ba40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 319 ms_handle_reset con 0x556dcc70bc00 session 0x556dcbab1860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 319 ms_handle_reset con 0x556dceaeac00 session 0x556dcd70d2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155123712 unmapped: 78307328 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2357472 data_alloc: 218103808 data_used: 1253376
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.465471268s of 10.819207191s, submitted: 119
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 319 ms_handle_reset con 0x556dccbe4400 session 0x556dcad67a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 319 ms_handle_reset con 0x556dca101c00 session 0x556dcd35f680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 319 ms_handle_reset con 0x556dcacd4c00 session 0x556dcae0c000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155115520 unmapped: 78315520 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 319 ms_handle_reset con 0x556dce0f8800 session 0x556dcaca8f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 319 ms_handle_reset con 0x556dceaeac00 session 0x556dcb50d680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 319 ms_handle_reset con 0x556dce64ec00 session 0x556dcae0f0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 319 ms_handle_reset con 0x556dca101c00 session 0x556dcc899c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155189248 unmapped: 78241792 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 319 heartbeat osd_stat(store_statfs(0x4f8b14000/0x0/0x4ffc00000, data 0x635a88/0x7fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 320 ms_handle_reset con 0x556dcacd4c00 session 0x556dcae0d2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 154656768 unmapped: 78774272 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 320 ms_handle_reset con 0x556dcc70bc00 session 0x556dcc7472c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 320 ms_handle_reset con 0x556dce0f8800 session 0x556dcd766d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 320 heartbeat osd_stat(store_statfs(0x4f8b11000/0x0/0x4ffc00000, data 0x637669/0x7fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 321 ms_handle_reset con 0x556dceaeac00 session 0x556dcd7672c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 154697728 unmapped: 78733312 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 321 ms_handle_reset con 0x556dceaeac00 session 0x556dcd21ef00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 321 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd21fe00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 154705920 unmapped: 78725120 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 321 ms_handle_reset con 0x556dce0f8800 session 0x556dcbab1860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 321 ms_handle_reset con 0x556dcc64dc00 session 0x556dcabe3680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2370596 data_alloc: 218103808 data_used: 1282048
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 321 handle_osd_map epochs [321,322], i have 321, src has [1,322]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 322 ms_handle_reset con 0x556dcc70bc00 session 0x556dcd837e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 322 ms_handle_reset con 0x556dca101c00 session 0x556dcd21e000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 154714112 unmapped: 78716928 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 322 ms_handle_reset con 0x556dcc70bc00 session 0x556dcd8372c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 322 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd836000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 322 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x63abe5/0x7ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 154714112 unmapped: 78716928 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 322 ms_handle_reset con 0x556dcc64dc00 session 0x556dcad62000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 322 ms_handle_reset con 0x556dce0f8800 session 0x556dcad62d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 322 ms_handle_reset con 0x556dca101c00 session 0x556dcd26de00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 322 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd26cf00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 154755072 unmapped: 78675968 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 322 heartbeat osd_stat(store_statfs(0x4f8b11000/0x0/0x4ffc00000, data 0x63ab21/0x7fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 322 handle_osd_map epochs [322,323], i have 322, src has [1,323]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 323 ms_handle_reset con 0x556dcc64dc00 session 0x556dccb7be00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 323 ms_handle_reset con 0x556dcc70bc00 session 0x556dccb7a960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 154755072 unmapped: 78675968 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 323 ms_handle_reset con 0x556dceaeac00 session 0x556dcc7461e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 154755072 unmapped: 78675968 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2377057 data_alloc: 218103808 data_used: 1282048
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 323 ms_handle_reset con 0x556dca101c00 session 0x556dcd68b860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.216643333s of 10.000334740s, submitted: 164
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 323 ms_handle_reset con 0x556dcacd4c00 session 0x556dcbab1e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 154771456 unmapped: 78659584 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 324 ms_handle_reset con 0x556dceaeac00 session 0x556dcc746d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 324 heartbeat osd_stat(store_statfs(0x4f8b0f000/0x0/0x4ffc00000, data 0x63c53e/0x7ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 154771456 unmapped: 78659584 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 324 heartbeat osd_stat(store_statfs(0x4f8b0b000/0x0/0x4ffc00000, data 0x63e0d7/0x802000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 154771456 unmapped: 78659584 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 324 ms_handle_reset con 0x556dcc70bc00 session 0x556dcae0de00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 324 ms_handle_reset con 0x556dcc64dc00 session 0x556dccf4a000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155820032 unmapped: 77611008 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 325 ms_handle_reset con 0x556dca101c00 session 0x556dcc65b2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 325 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd3fd4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155820032 unmapped: 77611008 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2382414 data_alloc: 218103808 data_used: 1306624
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 325 ms_handle_reset con 0x556dcc70bc00 session 0x556dcae0c5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 325 ms_handle_reset con 0x556dce4f3000 session 0x556dccb7ab40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155836416 unmapped: 77594624 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 326 ms_handle_reset con 0x556dceaeac00 session 0x556dcd1a2960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 326 ms_handle_reset con 0x556dce64e000 session 0x556dcc6390e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 326 ms_handle_reset con 0x556dcd379400 session 0x556dcd3fd0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155844608 unmapped: 77586432 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155844608 unmapped: 77586432 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 326 ms_handle_reset con 0x556dceaeac00 session 0x556dcabe3680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 326 heartbeat osd_stat(store_statfs(0x4f8b05000/0x0/0x4ffc00000, data 0x641870/0x808000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155844608 unmapped: 77586432 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 326 ms_handle_reset con 0x556dca101c00 session 0x556dcd21fe00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 326 ms_handle_reset con 0x556dcc70bc00 session 0x556dcae0d2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 326 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd766d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 326 ms_handle_reset con 0x556dcc70bc00 session 0x556dcae0c000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 327 heartbeat osd_stat(store_statfs(0x4f8b05000/0x0/0x4ffc00000, data 0x6418d2/0x809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [1])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 327 ms_handle_reset con 0x556dca101c00 session 0x556dcd26de00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155877376 unmapped: 77553664 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2398222 data_alloc: 218103808 data_used: 1327104
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 327 ms_handle_reset con 0x556dcd379400 session 0x556dcc6eba40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155877376 unmapped: 77553664 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 327 ms_handle_reset con 0x556dceaeac00 session 0x556dcb527680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.089593887s of 10.417663574s, submitted: 103
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 327 ms_handle_reset con 0x556dceaeac00 session 0x556dca905c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 327 ms_handle_reset con 0x556dce64e000 session 0x556dcae0f680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155877376 unmapped: 77553664 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 327 ms_handle_reset con 0x556dcacd4c00 session 0x556dcc8992c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 327 ms_handle_reset con 0x556dcc70bc00 session 0x556dca904960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155926528 unmapped: 77504512 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 328 ms_handle_reset con 0x556dca101c00 session 0x556dcd766960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 329 ms_handle_reset con 0x556dcc70bc00 session 0x556dccb7af00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 329 ms_handle_reset con 0x556dce64e000 session 0x556dccb7a960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155934720 unmapped: 77496320 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 330 ms_handle_reset con 0x556dcacd4c00 session 0x556dcc639680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 330 ms_handle_reset con 0x556dca101c00 session 0x556dcad670e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155942912 unmapped: 77488128 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2409183 data_alloc: 218103808 data_used: 1339392
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 330 heartbeat osd_stat(store_statfs(0x4f8af6000/0x0/0x4ffc00000, data 0x648ac0/0x817000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155942912 unmapped: 77488128 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 330 ms_handle_reset con 0x556dceaeac00 session 0x556dcd248f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 330 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd68b2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 330 ms_handle_reset con 0x556dca101c00 session 0x556dcd68ad20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 330 ms_handle_reset con 0x556dcc70bc00 session 0x556dcd26c3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155942912 unmapped: 77488128 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155942912 unmapped: 77488128 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 330 handle_osd_map epochs [331,331], i have 330, src has [1,331]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 331 ms_handle_reset con 0x556dce64e000 session 0x556dcc638f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 331 ms_handle_reset con 0x556dcd379400 session 0x556dcc7465a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155967488 unmapped: 77463552 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 331 ms_handle_reset con 0x556dcd379400 session 0x556dcd836d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155975680 unmapped: 77455360 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2421341 data_alloc: 218103808 data_used: 1355776
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 331 ms_handle_reset con 0x556dca101c00 session 0x556dcab670e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155975680 unmapped: 77455360 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.728518486s of 10.154348373s, submitted: 162
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 331 ms_handle_reset con 0x556dcacd4c00 session 0x556dccf4cf00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 331 heartbeat osd_stat(store_statfs(0x4f8aef000/0x0/0x4ffc00000, data 0x64a74b/0x81e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155992064 unmapped: 77438976 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 331 handle_osd_map epochs [331,332], i have 331, src has [1,332]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 332 ms_handle_reset con 0x556dcc70bc00 session 0x556dcae0d680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 332 heartbeat osd_stat(store_statfs(0x4f8aef000/0x0/0x4ffc00000, data 0x64ab4b/0x81f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155992064 unmapped: 77438976 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 332 ms_handle_reset con 0x556dce64e000 session 0x556dcd248000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 332 ms_handle_reset con 0x556dcacd4c00 session 0x556dcae0d680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 332 handle_osd_map epochs [333,333], i have 332, src has [1,333]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 333 ms_handle_reset con 0x556dcc70bc00 session 0x556dcd836d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 156024832 unmapped: 77406208 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 333 ms_handle_reset con 0x556dcd379400 session 0x556dcb527680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 333 handle_osd_map epochs [333,334], i have 333, src has [1,334]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 334 ms_handle_reset con 0x556dce4f3000 session 0x556dcc6eba40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 334 ms_handle_reset con 0x556dca101c00 session 0x556dca8a10e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 334 ms_handle_reset con 0x556dcacd4c00 session 0x556dcae0d2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 156033024 unmapped: 77398016 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 334 ms_handle_reset con 0x556dce4f3000 session 0x556dccb7bc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2437788 data_alloc: 218103808 data_used: 1376256
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 334 ms_handle_reset con 0x556dcc64c400 session 0x556dccb7b2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 334 ms_handle_reset con 0x556dcd379400 session 0x556dcd3fd0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 334 ms_handle_reset con 0x556dcc70bc00 session 0x556dcabe3680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 334 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd1a2960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 156033024 unmapped: 77398016 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 334 ms_handle_reset con 0x556dcc91f000 session 0x556dcacd7e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 156033024 unmapped: 77398016 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 334 handle_osd_map epochs [334,335], i have 334, src has [1,335]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 335 ms_handle_reset con 0x556dcc64d800 session 0x556dcc6eb860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 335 heartbeat osd_stat(store_statfs(0x4f8ae2000/0x0/0x4ffc00000, data 0x64feb9/0x82c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 156041216 unmapped: 77389824 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 335 ms_handle_reset con 0x556dcec27c00 session 0x556dcc6eb0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 335 heartbeat osd_stat(store_statfs(0x4f8ae2000/0x0/0x4ffc00000, data 0x64feb9/0x82c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 156065792 unmapped: 77365248 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 335 handle_osd_map epochs [336,336], i have 335, src has [1,336]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 336 ms_handle_reset con 0x556dcf2a4c00 session 0x556dcd2483c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 336 ms_handle_reset con 0x556dce4f3000 session 0x556dccf4a000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 336 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd70cf00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 336 ms_handle_reset con 0x556dcacd4400 session 0x556dcacd6b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 156073984 unmapped: 77357056 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 336 heartbeat osd_stat(store_statfs(0x4f86cb000/0x0/0x4ffc00000, data 0x6535b3/0x832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2448577 data_alloc: 218103808 data_used: 1396736
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 336 ms_handle_reset con 0x556dcc91f000 session 0x556dcd70c960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 336 ms_handle_reset con 0x556dcec27c00 session 0x556dcd26a000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 156073984 unmapped: 77357056 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.509442329s of 10.016919136s, submitted: 40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 336 ms_handle_reset con 0x556dcc64c400 session 0x556dcc638f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 336 handle_osd_map epochs [336,337], i have 336, src has [1,337]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 336 ms_handle_reset con 0x556dcd379400 session 0x556dcd68ab40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 337 ms_handle_reset con 0x556dcacd4400 session 0x556dccb7b0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 337 ms_handle_reset con 0x556dcc64d800 session 0x556dcd8372c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 337 ms_handle_reset con 0x556dcacd4c00 session 0x556dcae112c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 337 ms_handle_reset con 0x556dcc91f000 session 0x556dcad62000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 156106752 unmapped: 77324288 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 337 handle_osd_map epochs [338,338], i have 337, src has [1,338]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 338 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd70d0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 156114944 unmapped: 77316096 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 338 ms_handle_reset con 0x556dcc64d800 session 0x556dcc746d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 338 ms_handle_reset con 0x556dcc64c400 session 0x556dcad665a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 156114944 unmapped: 77316096 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 338 ms_handle_reset con 0x556dcd379400 session 0x556dcbab0780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 338 ms_handle_reset con 0x556dcacd4c00 session 0x556dca8a0000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 338 ms_handle_reset con 0x556dcd379400 session 0x556dcba32d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 338 handle_osd_map epochs [339,339], i have 338, src has [1,339]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 339 ms_handle_reset con 0x556dcc64d800 session 0x556dccf4da40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 156172288 unmapped: 77258752 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2460116 data_alloc: 218103808 data_used: 1384448
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 156172288 unmapped: 77258752 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 339 handle_osd_map epochs [339,340], i have 339, src has [1,340]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 340 handle_osd_map epochs [340,340], i have 340, src has [1,340]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 340 heartbeat osd_stat(store_statfs(0x4f86c3000/0x0/0x4ffc00000, data 0x6588e2/0x838000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 340 ms_handle_reset con 0x556dcc91f000 session 0x556dcba32d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 340 ms_handle_reset con 0x556dcc64c400 session 0x556dcd21e1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 340 ms_handle_reset con 0x556dcacd4400 session 0x556dcd26c000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 156172288 unmapped: 77258752 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 340 ms_handle_reset con 0x556dcacd4c00 session 0x556dccf4c960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 156172288 unmapped: 77258752 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 340 handle_osd_map epochs [340,341], i have 340, src has [1,341]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 340 ms_handle_reset con 0x556dce4f3000 session 0x556dcad62000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 340 handle_osd_map epochs [341,341], i have 341, src has [1,341]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 341 heartbeat osd_stat(store_statfs(0x4f86c2000/0x0/0x4ffc00000, data 0x65a63d/0x83b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,3])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 341 ms_handle_reset con 0x556dcc91f000 session 0x556dcd68a5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 341 ms_handle_reset con 0x556dcd431400 session 0x556dcd8372c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 341 handle_osd_map epochs [342,342], i have 341, src has [1,342]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 342 ms_handle_reset con 0x556dcd379400 session 0x556dcd70d860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 342 heartbeat osd_stat(store_statfs(0x4f86be000/0x0/0x4ffc00000, data 0x65de2b/0x83e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,2])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157278208 unmapped: 76152832 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 342 ms_handle_reset con 0x556dcc64d800 session 0x556dcad62d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 342 ms_handle_reset con 0x556dcd365c00 session 0x556dcc7465a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157278208 unmapped: 76152832 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 342 ms_handle_reset con 0x556dcacd4400 session 0x556dcb682f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2464445 data_alloc: 218103808 data_used: 1396736
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 342 ms_handle_reset con 0x556dcacd4c00 session 0x556dcad68780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157278208 unmapped: 76152832 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157278208 unmapped: 76152832 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 342 ms_handle_reset con 0x556dcacd4c00 session 0x556dcc8b5680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.120287895s of 11.542097092s, submitted: 250
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 342 ms_handle_reset con 0x556dcd365c00 session 0x556dcd21fc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 342 ms_handle_reset con 0x556dcacd4400 session 0x556dcd68b4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157286400 unmapped: 76144640 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 342 handle_osd_map epochs [343,343], i have 342, src has [1,344]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 343 handle_osd_map epochs [344,344], i have 343, src has [1,344]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 344 ms_handle_reset con 0x556dcd379400 session 0x556dcae0e000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 344 ms_handle_reset con 0x556dcd431400 session 0x556dcd836000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 156983296 unmapped: 76447744 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 344 heartbeat osd_stat(store_statfs(0x4f80dd000/0x0/0x4ffc00000, data 0xc3ee12/0xe21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,0,2])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 344 ms_handle_reset con 0x556dcc91f000 session 0x556dcbab1860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 344 ms_handle_reset con 0x556dcacd4400 session 0x556dcacca1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 344 ms_handle_reset con 0x556dce4f3000 session 0x556dccb7b0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 345 ms_handle_reset con 0x556dcacd4c00 session 0x556dcaccab40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 345 ms_handle_reset con 0x556dcc64d800 session 0x556dcacd70e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 156983296 unmapped: 76447744 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2534664 data_alloc: 218103808 data_used: 1417216
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 345 handle_osd_map epochs [346,346], i have 345, src has [1,346]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 346 ms_handle_reset con 0x556dcacd4c00 session 0x556dccb7a960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 346 ms_handle_reset con 0x556dcacd4400 session 0x556dcae0c000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 346 ms_handle_reset con 0x556dce4f3000 session 0x556dca88ef00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157007872 unmapped: 76423168 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 346 ms_handle_reset con 0x556dcc91f000 session 0x556dcd767c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157016064 unmapped: 76414976 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 346 ms_handle_reset con 0x556dcd365c00 session 0x556dcd248000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 347 ms_handle_reset con 0x556dcd379400 session 0x556dccf4a1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157057024 unmapped: 76374016 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 347 ms_handle_reset con 0x556dcacd4400 session 0x556dcd8470e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157057024 unmapped: 76374016 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 347 ms_handle_reset con 0x556dce4f3000 session 0x556dcaccb860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 347 heartbeat osd_stat(store_statfs(0x4f80ca000/0x0/0x4ffc00000, data 0xc47de1/0xe33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 347 handle_osd_map epochs [348,348], i have 347, src has [1,348]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 348 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd21fe00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157089792 unmapped: 76341248 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 348 ms_handle_reset con 0x556dceaea400 session 0x556dca8a0960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2553450 data_alloc: 218103808 data_used: 1433600
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 348 handle_osd_map epochs [348,349], i have 348, src has [1,349]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 349 ms_handle_reset con 0x556dcdeeb000 session 0x556dcd836780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 349 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd26a000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 349 ms_handle_reset con 0x556dcacd4400 session 0x556dcc65a3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 349 ms_handle_reset con 0x556dcd379400 session 0x556dccf4af00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 349 ms_handle_reset con 0x556dcc91f000 session 0x556dcc10c780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157122560 unmapped: 76308480 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 349 heartbeat osd_stat(store_statfs(0x4f80c4000/0x0/0x4ffc00000, data 0xc49a34/0xe38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157122560 unmapped: 76308480 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 349 ms_handle_reset con 0x556dce4f3000 session 0x556dcd7674a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.917152405s of 10.416923523s, submitted: 172
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157147136 unmapped: 76283904 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 350 ms_handle_reset con 0x556dcc91f000 session 0x556dcb527a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 350 ms_handle_reset con 0x556dcacd4400 session 0x556dcd767c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 350 ms_handle_reset con 0x556dcacd4c00 session 0x556dcc8b5680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 350 handle_osd_map epochs [350,351], i have 350, src has [1,351]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 351 ms_handle_reset con 0x556dcd379400 session 0x556dcc7465a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 351 heartbeat osd_stat(store_statfs(0x4f80c1000/0x0/0x4ffc00000, data 0xc4d16c/0xe3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157138944 unmapped: 76292096 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 351 ms_handle_reset con 0x556dcd379400 session 0x556dcd21fc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157171712 unmapped: 76259328 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 351 heartbeat osd_stat(store_statfs(0x4f80bc000/0x0/0x4ffc00000, data 0xc4ec5d/0xe41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2564077 data_alloc: 218103808 data_used: 1458176
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 351 ms_handle_reset con 0x556dcacd4400 session 0x556dcd836000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157171712 unmapped: 76259328 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 351 ms_handle_reset con 0x556dcc91f000 session 0x556dccb7b4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 351 ms_handle_reset con 0x556dcdeeb000 session 0x556dca88ef00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 351 ms_handle_reset con 0x556dceaea400 session 0x556dcd21e780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157179904 unmapped: 76251136 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 351 handle_osd_map epochs [351,352], i have 351, src has [1,352]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 352 ms_handle_reset con 0x556dce4f3000 session 0x556dcd26c960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 352 ms_handle_reset con 0x556dcacd4400 session 0x556dcbab0000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 352 ms_handle_reset con 0x556dcacd4c00 session 0x556dccb7b0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157237248 unmapped: 76193792 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 352 ms_handle_reset con 0x556dccbe5000 session 0x556dcd2483c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 352 handle_osd_map epochs [353,353], i have 352, src has [1,353]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 353 ms_handle_reset con 0x556dcdeeb000 session 0x556dcad63e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 353 ms_handle_reset con 0x556dcacd4400 session 0x556dcd35fe00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157278208 unmapped: 76152832 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 353 heartbeat osd_stat(store_statfs(0x4f80b4000/0x0/0x4ffc00000, data 0xc5241f/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 353 handle_osd_map epochs [354,354], i have 353, src has [1,354]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 354 ms_handle_reset con 0x556dcc91f000 session 0x556dcd847680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 354 ms_handle_reset con 0x556dcd379400 session 0x556dcb57c780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 354 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd35e3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157900800 unmapped: 75530240 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2615657 data_alloc: 218103808 data_used: 7544832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 354 ms_handle_reset con 0x556dce4f3000 session 0x556dcd249680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 354 ms_handle_reset con 0x556dccbe5000 session 0x556dca9045a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157949952 unmapped: 75481088 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 354 handle_osd_map epochs [354,355], i have 354, src has [1,355]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 355 ms_handle_reset con 0x556dcacd4c00 session 0x556dcae0f860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 355 ms_handle_reset con 0x556dcacd4400 session 0x556dcd8370e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 355 ms_handle_reset con 0x556dcd379400 session 0x556dcd6f12c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157949952 unmapped: 75481088 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 355 heartbeat osd_stat(store_statfs(0x4f80b6000/0x0/0x4ffc00000, data 0xc53ef6/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 355 ms_handle_reset con 0x556dcec27400 session 0x556dcac441e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 355 handle_osd_map epochs [356,356], i have 355, src has [1,356]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 356 ms_handle_reset con 0x556dcc91f000 session 0x556dcc6eb860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 356 ms_handle_reset con 0x556dcacd4c00 session 0x556dca8a1c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 356 ms_handle_reset con 0x556dcdeeb000 session 0x556dcd35ef00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 356 ms_handle_reset con 0x556dcacd4400 session 0x556dcac44b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.314867020s of 10.024769783s, submitted: 207
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157958144 unmapped: 75472896 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 356 handle_osd_map epochs [357,357], i have 356, src has [1,357]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 357 ms_handle_reset con 0x556dccbe5000 session 0x556dca9045a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157958144 unmapped: 75472896 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 357 ms_handle_reset con 0x556dcacd4400 session 0x556dcd35fe00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 357 handle_osd_map epochs [357,358], i have 357, src has [1,358]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 358 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd26c960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 358 ms_handle_reset con 0x556dcdeeb000 session 0x556dcc8b4960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157982720 unmapped: 75448320 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2630949 data_alloc: 218103808 data_used: 7544832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 358 handle_osd_map epochs [359,359], i have 358, src has [1,359]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 359 ms_handle_reset con 0x556dccbe5000 session 0x556dcd35e3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 359 ms_handle_reset con 0x556dcd379400 session 0x556dccf4f4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 359 ms_handle_reset con 0x556dcec26c00 session 0x556dcd1a2960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155492352 unmapped: 77938688 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 359 ms_handle_reset con 0x556dcd37a400 session 0x556dcc65a3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 359 ms_handle_reset con 0x556dcacd4400 session 0x556dcd70d680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 359 heartbeat osd_stat(store_statfs(0x4f80ac000/0x0/0x4ffc00000, data 0xc5b06a/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 359 handle_osd_map epochs [360,360], i have 359, src has [1,360]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 360 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd26a000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 360 ms_handle_reset con 0x556dcc91f000 session 0x556dcb527a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155492352 unmapped: 77938688 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 360 heartbeat osd_stat(store_statfs(0x4f8684000/0x0/0x4ffc00000, data 0x67d86c/0x877000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155492352 unmapped: 77938688 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 360 ms_handle_reset con 0x556dcacd4c00 session 0x556dcad62f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 360 handle_osd_map epochs [361,361], i have 360, src has [1,361]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 361 ms_handle_reset con 0x556dcacd4400 session 0x556dcacd7860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155525120 unmapped: 77905920 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 361 ms_handle_reset con 0x556dcd37a400 session 0x556dcd68af00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 361 heartbeat osd_stat(store_statfs(0x4f8685000/0x0/0x4ffc00000, data 0x67f425/0x878000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 361 ms_handle_reset con 0x556dcec26c00 session 0x556dcc7461e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155525120 unmapped: 77905920 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 361 ms_handle_reset con 0x556dcd379400 session 0x556dcd6f03c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2552620 data_alloc: 218103808 data_used: 1486848
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 361 handle_osd_map epochs [361,362], i have 361, src has [1,362]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 362 ms_handle_reset con 0x556dcacd4400 session 0x556dcd6f1860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155549696 unmapped: 77881344 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 362 ms_handle_reset con 0x556dcd37a400 session 0x556dcacd72c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 362 ms_handle_reset con 0x556dcec26c00 session 0x556dcc10d860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 363 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd248000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 363 ms_handle_reset con 0x556dcdeeb000 session 0x556dcc8b5c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 363 ms_handle_reset con 0x556dccbe5000 session 0x556dcd6f0d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155549696 unmapped: 77881344 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.284003258s of 10.007956505s, submitted: 223
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 155549696 unmapped: 77881344 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 363 ms_handle_reset con 0x556dcd37a400 session 0x556dcacd61e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 363 ms_handle_reset con 0x556dcec26c00 session 0x556dcd68a3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 363 handle_osd_map epochs [364,364], i have 363, src has [1,364]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 364 ms_handle_reset con 0x556dcacd4400 session 0x556dcbab1860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 364 ms_handle_reset con 0x556dcd335c00 session 0x556dcc746780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 364 ms_handle_reset con 0x556dccbe5000 session 0x556dccf4fc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 364 ms_handle_reset con 0x556dcdeeb000 session 0x556dca904960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 364 ms_handle_reset con 0x556dcd378400 session 0x556dcd70c960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 364 ms_handle_reset con 0x556dcec26c00 session 0x556dcd3fc1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 160612352 unmapped: 72818688 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 364 heartbeat osd_stat(store_statfs(0x4f867b000/0x0/0x4ffc00000, data 0x6831ea/0x882000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,3])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 364 ms_handle_reset con 0x556dcd37a400 session 0x556dcd3fd680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 364 ms_handle_reset con 0x556dcd37a400 session 0x556dcd70c960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 364 handle_osd_map epochs [364,365], i have 364, src has [1,365]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 365 ms_handle_reset con 0x556dcacd4400 session 0x556dcad665a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 365 ms_handle_reset con 0x556dccbe5000 session 0x556dca904960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 365 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd8372c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157483008 unmapped: 75948032 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 365 heartbeat osd_stat(store_statfs(0x4f817e000/0x0/0x4ffc00000, data 0xb81aaa/0xd80000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2620720 data_alloc: 218103808 data_used: 1507328
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157483008 unmapped: 75948032 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 365 handle_osd_map epochs [365,366], i have 365, src has [1,366]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 365 handle_osd_map epochs [366,366], i have 366, src has [1,366]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 366 ms_handle_reset con 0x556dcec26c00 session 0x556dcc1221e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157491200 unmapped: 75939840 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 366 ms_handle_reset con 0x556dcd378400 session 0x556dcc746780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 366 handle_osd_map epochs [367,367], i have 366, src has [1,367]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 367 ms_handle_reset con 0x556dcacd4400 session 0x556dcd35fe00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157515776 unmapped: 75915264 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 367 handle_osd_map epochs [368,368], i have 367, src has [1,368]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 368 ms_handle_reset con 0x556dccbe5000 session 0x556dcacd70e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157548544 unmapped: 75882496 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 368 ms_handle_reset con 0x556dcacd4c00 session 0x556dccf4b860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 368 ms_handle_reset con 0x556dcdeeb000 session 0x556dcd6f0d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157564928 unmapped: 75866112 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 368 ms_handle_reset con 0x556dcd37a400 session 0x556dcd1a3680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2630172 data_alloc: 218103808 data_used: 1527808
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 368 heartbeat osd_stat(store_statfs(0x4f816f000/0x0/0x4ffc00000, data 0xb88ba6/0xd8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 368 ms_handle_reset con 0x556dcacd4400 session 0x556dcaca81e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157564928 unmapped: 75866112 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157564928 unmapped: 75866112 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157573120 unmapped: 75857920 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 368 handle_osd_map epochs [369,369], i have 368, src has [1,369]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.986339569s of 10.429478645s, submitted: 220
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 369 ms_handle_reset con 0x556dcacd4c00 session 0x556dccb7a1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 369 handle_osd_map epochs [370,370], i have 369, src has [1,370]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157630464 unmapped: 75800576 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 370 ms_handle_reset con 0x556dccbe5000 session 0x556dca8a1c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 370 heartbeat osd_stat(store_statfs(0x4f8171000/0x0/0x4ffc00000, data 0xb88ba6/0xd8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157630464 unmapped: 75800576 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2634096 data_alloc: 218103808 data_used: 1527808
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 370 heartbeat osd_stat(store_statfs(0x4f816b000/0x0/0x4ffc00000, data 0xb8c218/0xd91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 370 ms_handle_reset con 0x556dcd378400 session 0x556dcaca9e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157630464 unmapped: 75800576 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 157999104 unmapped: 75431936 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 370 ms_handle_reset con 0x556dcacd4400 session 0x556dca905c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 370 heartbeat osd_stat(store_statfs(0x4f8141000/0x0/0x4ffc00000, data 0xbb629d/0xdbd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 370 ms_handle_reset con 0x556dccbe5000 session 0x556dca88f680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 158007296 unmapped: 75423744 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 372 ms_handle_reset con 0x556dcf071800 session 0x556dcc6ebc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 74334208 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 372 handle_osd_map epochs [373,373], i have 372, src has [1,373]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 159121408 unmapped: 74309632 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 373 ms_handle_reset con 0x556dccbe6400 session 0x556dcad69680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2681841 data_alloc: 218103808 data_used: 5353472
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 373 ms_handle_reset con 0x556dcacd4c00 session 0x556dcd8363c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 159121408 unmapped: 74309632 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 374 ms_handle_reset con 0x556dcacd4400 session 0x556dccf4de00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 160194560 unmapped: 73236480 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 374 ms_handle_reset con 0x556dccbe5000 session 0x556dcd70da40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 160194560 unmapped: 73236480 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 374 heartbeat osd_stat(store_statfs(0x4f8135000/0x0/0x4ffc00000, data 0xbbd02f/0xdc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.798820496s of 10.042563438s, submitted: 158
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 374 ms_handle_reset con 0x556dccbe6400 session 0x556dcb50cd20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 160194560 unmapped: 73236480 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 160194560 unmapped: 73236480 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 375 ms_handle_reset con 0x556dcdf3bc00 session 0x556dcc747e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 375 ms_handle_reset con 0x556dcf2a5800 session 0x556dcc7472c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2687485 data_alloc: 218103808 data_used: 5365760
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 375 ms_handle_reset con 0x556dcacd4400 session 0x556dcd35e780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 375 handle_osd_map epochs [376,376], i have 375, src has [1,376]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 376 ms_handle_reset con 0x556dccbe5000 session 0x556dcd836780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 376 ms_handle_reset con 0x556dccbe6400 session 0x556dcabe3680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 160210944 unmapped: 73220096 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 376 ms_handle_reset con 0x556dcdf3bc00 session 0x556dca8a10e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 376 ms_handle_reset con 0x556dcf071800 session 0x556dcbab0780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 160210944 unmapped: 73220096 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 376 ms_handle_reset con 0x556dcacd4400 session 0x556dcad634a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 376 heartbeat osd_stat(store_statfs(0x4f812e000/0x0/0x4ffc00000, data 0xbc0647/0xdcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 376 ms_handle_reset con 0x556dccbe5000 session 0x556dcac44f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 376 handle_osd_map epochs [376,377], i have 376, src has [1,377]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 377 ms_handle_reset con 0x556dcf071800 session 0x556dcd766000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 160210944 unmapped: 73220096 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 377 ms_handle_reset con 0x556dccbe6400 session 0x556dcd70de00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 378 ms_handle_reset con 0x556dcdf3bc00 session 0x556dccf4bc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 160210944 unmapped: 73220096 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 378 handle_osd_map epochs [378,379], i have 378, src has [1,379]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 379 handle_osd_map epochs [379,379], i have 379, src has [1,379]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 160235520 unmapped: 73195520 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 379 ms_handle_reset con 0x556dccbe5000 session 0x556dd04461e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 379 ms_handle_reset con 0x556dcacd4400 session 0x556dcc6ea960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2713426 data_alloc: 218103808 data_used: 5451776
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 379 ms_handle_reset con 0x556dccbe6400 session 0x556dcc746d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 379 heartbeat osd_stat(store_statfs(0x4f8111000/0x0/0x4ffc00000, data 0xbd7894/0xdea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164503552 unmapped: 68927488 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 379 ms_handle_reset con 0x556dcf2a5800 session 0x556dcba321e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163708928 unmapped: 69722112 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 379 ms_handle_reset con 0x556dcf43ac00 session 0x556dcc6eb4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 379 ms_handle_reset con 0x556dcacd4400 session 0x556dcd26cd20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163536896 unmapped: 69894144 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 379 handle_osd_map epochs [380,380], i have 379, src has [1,380]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 380 ms_handle_reset con 0x556dcf071800 session 0x556dcbab1e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.430534363s of 10.508442879s, submitted: 183
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 380 handle_osd_map epochs [380,381], i have 380, src has [1,381]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163594240 unmapped: 69836800 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 381 ms_handle_reset con 0x556dccbe6400 session 0x556dcc8b52c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 381 ms_handle_reset con 0x556dccbe5000 session 0x556dcd767680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 381 ms_handle_reset con 0x556dcf2a5800 session 0x556dcac441e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f7629000/0x0/0x4ffc00000, data 0x12b1ffc/0x14c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163594240 unmapped: 69836800 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2764791 data_alloc: 218103808 data_used: 5570560
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 381 ms_handle_reset con 0x556dccbe5000 session 0x556dcd21fc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163602432 unmapped: 69828608 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 382 ms_handle_reset con 0x556dccbe6400 session 0x556dcc8b5680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163602432 unmapped: 69828608 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 383 ms_handle_reset con 0x556dcacd4400 session 0x556dcaccad20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 383 ms_handle_reset con 0x556dcf071800 session 0x556dccf4e3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163921920 unmapped: 69509120 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163921920 unmapped: 69509120 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 383 heartbeat osd_stat(store_statfs(0x4f7603000/0x0/0x4ffc00000, data 0x12d72d1/0x14ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163921920 unmapped: 69509120 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2770325 data_alloc: 218103808 data_used: 5574656
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163921920 unmapped: 69509120 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163921920 unmapped: 69509120 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163921920 unmapped: 69509120 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 384 ms_handle_reset con 0x556dcc680000 session 0x556dcae0c5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.838051796s of 10.086895943s, submitted: 91
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163938304 unmapped: 69492736 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f75fa000/0x0/0x4ffc00000, data 0x12dd89f/0x14f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163938304 unmapped: 69492736 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2775965 data_alloc: 218103808 data_used: 5574656
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 385 ms_handle_reset con 0x556dcacd4400 session 0x556dcd6f1e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163971072 unmapped: 69459968 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 385 handle_osd_map epochs [386,386], i have 386, src has [1,386]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 386 ms_handle_reset con 0x556dccbe5000 session 0x556dcd836d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163987456 unmapped: 69443584 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 386 handle_osd_map epochs [386,387], i have 386, src has [1,387]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 387 handle_osd_map epochs [387,387], i have 387, src has [1,387]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 387 heartbeat osd_stat(store_statfs(0x4f75f6000/0x0/0x4ffc00000, data 0x12df47e/0x14f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 387 ms_handle_reset con 0x556dccbe6400 session 0x556dcc6ea1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 387 ms_handle_reset con 0x556dcc680000 session 0x556dcd3fda40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 69435392 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 387 heartbeat osd_stat(store_statfs(0x4f75f2000/0x0/0x4ffc00000, data 0x12e0ffb/0x14fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 69435392 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 387 handle_osd_map epochs [387,388], i have 387, src has [1,388]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 388 handle_osd_map epochs [388,388], i have 388, src has [1,388]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 388 ms_handle_reset con 0x556dcf071800 session 0x556dca8a0000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164020224 unmapped: 69410816 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 388 handle_osd_map epochs [388,389], i have 388, src has [1,389]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 389 ms_handle_reset con 0x556dcf071800 session 0x556dccb7be00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2790775 data_alloc: 218103808 data_used: 5574656
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 389 ms_handle_reset con 0x556dcacd4400 session 0x556dcd21f860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164020224 unmapped: 69410816 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164020224 unmapped: 69410816 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f75ea000/0x0/0x4ffc00000, data 0x12e7757/0x1502000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164020224 unmapped: 69410816 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f75ea000/0x0/0x4ffc00000, data 0x12e7757/0x1502000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164020224 unmapped: 69410816 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164020224 unmapped: 69410816 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.073758125s of 11.473550797s, submitted: 108
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 390 ms_handle_reset con 0x556dcc680000 session 0x556dcc746960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2792725 data_alloc: 218103808 data_used: 5574656
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164020224 unmapped: 69410816 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 390 handle_osd_map epochs [390,391], i have 390, src has [1,391]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 391 ms_handle_reset con 0x556dccbe5000 session 0x556dcd8472c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 69378048 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 69378048 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 391 heartbeat osd_stat(store_statfs(0x4f75e2000/0x0/0x4ffc00000, data 0x12eaeef/0x150a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 391 handle_osd_map epochs [391,392], i have 391, src has [1,392]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 69378048 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 69378048 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2816638 data_alloc: 218103808 data_used: 5574656
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 69378048 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 69378048 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f75db000/0x0/0x4ffc00000, data 0x14ad98a/0x1512000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f75db000/0x0/0x4ffc00000, data 0x14ad98a/0x1512000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163987456 unmapped: 69443584 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163987456 unmapped: 69443584 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f75da000/0x0/0x4ffc00000, data 0x14ad98a/0x1514000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163987456 unmapped: 69443584 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2816790 data_alloc: 218103808 data_used: 5574656
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163987456 unmapped: 69443584 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f75da000/0x0/0x4ffc00000, data 0x14ad98a/0x1514000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.901922226s of 10.962021828s, submitted: 36
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 ms_handle_reset con 0x556dccb24800 session 0x556dd0446d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 ms_handle_reset con 0x556dccbe6400 session 0x556dccb7b4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 69435392 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 69435392 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 69435392 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 69435392 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2820592 data_alloc: 218103808 data_used: 5574656
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 ms_handle_reset con 0x556dcacd4400 session 0x556dccf4e5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 69435392 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f75d8000/0x0/0x4ffc00000, data 0x14ad9fc/0x1516000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 ms_handle_reset con 0x556dcc680000 session 0x556dcacd6b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 69435392 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 69435392 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 ms_handle_reset con 0x556dccbe5000 session 0x556dcb682b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 69427200 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f75d7000/0x0/0x4ffc00000, data 0x14ada1f/0x1517000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164012032 unmapped: 69419008 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2824018 data_alloc: 218103808 data_used: 5582848
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164028416 unmapped: 69402624 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f75d7000/0x0/0x4ffc00000, data 0x14ada1f/0x1517000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 69386240 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f75d7000/0x0/0x4ffc00000, data 0x14ada1f/0x1517000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 69386240 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f75d7000/0x0/0x4ffc00000, data 0x14ada1f/0x1517000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 69386240 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.011506081s of 13.054282188s, submitted: 13
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 69386240 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2828979 data_alloc: 218103808 data_used: 5689344
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 ms_handle_reset con 0x556dccb24c00 session 0x556dcc746960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 69386240 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f75d5000/0x0/0x4ffc00000, data 0x14ada92/0x1519000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 69386240 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 69386240 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 69386240 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 69386240 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2828907 data_alloc: 218103808 data_used: 5689344
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 69386240 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 69345280 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f758e000/0x0/0x4ffc00000, data 0x14f4a92/0x1560000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 69345280 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 69345280 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 69345280 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2847458 data_alloc: 218103808 data_used: 8609792
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 69345280 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f758e000/0x0/0x4ffc00000, data 0x14f4a92/0x1560000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 ms_handle_reset con 0x556dccb24c00 session 0x556dcc6ea1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 69345280 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.839614868s of 12.906504631s, submitted: 15
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f758e000/0x0/0x4ffc00000, data 0x14f4a92/0x1560000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 ms_handle_reset con 0x556dcacd4400 session 0x556dcd836d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 168574976 unmapped: 64856064 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 ms_handle_reset con 0x556dcc680000 session 0x556dcc8b5680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 ms_handle_reset con 0x556dccbe5000 session 0x556dcba321e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164388864 unmapped: 69042176 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f6387000/0x0/0x4ffc00000, data 0x373ba2f/0x37a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164388864 unmapped: 69042176 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3090387 data_alloc: 218103808 data_used: 9007104
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164388864 unmapped: 69042176 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164388864 unmapped: 69042176 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f6387000/0x0/0x4ffc00000, data 0x373ba2f/0x37a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164388864 unmapped: 69042176 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f6387000/0x0/0x4ffc00000, data 0x373ba2f/0x37a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164388864 unmapped: 69042176 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164388864 unmapped: 69042176 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3090387 data_alloc: 218103808 data_used: 9007104
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164388864 unmapped: 69042176 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164388864 unmapped: 69042176 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164388864 unmapped: 69042176 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 ms_handle_reset con 0x556dccbe6400 session 0x556dcd70de00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f6387000/0x0/0x4ffc00000, data 0x373ba2f/0x37a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164388864 unmapped: 69042176 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f6387000/0x0/0x4ffc00000, data 0x373ba2f/0x37a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 ms_handle_reset con 0x556dccbe6400 session 0x556dcd766000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164388864 unmapped: 69042176 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3090387 data_alloc: 218103808 data_used: 9007104
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 ms_handle_reset con 0x556dcacd4400 session 0x556dcac44f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 ms_handle_reset con 0x556dcc680000 session 0x556dcad634a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164388864 unmapped: 69042176 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.008003235s of 14.639681816s, submitted: 34
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164388864 unmapped: 69042176 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164388864 unmapped: 69042176 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f6387000/0x0/0x4ffc00000, data 0x373ba2f/0x37a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164397056 unmapped: 69033984 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164806656 unmapped: 68624384 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3115319 data_alloc: 234881024 data_used: 12595200
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f6387000/0x0/0x4ffc00000, data 0x373ba2f/0x37a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164806656 unmapped: 68624384 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 ms_handle_reset con 0x556dcf071800 session 0x556dccb7b4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 ms_handle_reset con 0x556dcd333000 session 0x556dccf4f4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 ms_handle_reset con 0x556dcacd4400 session 0x556dca88e000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164806656 unmapped: 68624384 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 ms_handle_reset con 0x556dcc680000 session 0x556dcae0de00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 ms_handle_reset con 0x556dccbe6400 session 0x556dcb682d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 ms_handle_reset con 0x556dcd333000 session 0x556dcd6f0f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 ms_handle_reset con 0x556dcf071800 session 0x556dcd7672c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164904960 unmapped: 68526080 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 ms_handle_reset con 0x556dcacd4400 session 0x556dcae0c5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164904960 unmapped: 68526080 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 ms_handle_reset con 0x556dcc680000 session 0x556dcd6f0b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164904960 unmapped: 68526080 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 ms_handle_reset con 0x556dccbe6400 session 0x556dcc65a3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3177307 data_alloc: 234881024 data_used: 12476416
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164913152 unmapped: 68517888 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f5ab1000/0x0/0x4ffc00000, data 0x401499a/0x407c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 ms_handle_reset con 0x556dcd333000 session 0x556dcaccb2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 164913152 unmapped: 68517888 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 392 handle_osd_map epochs [392,393], i have 392, src has [1,393]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.208984375s of 11.443075180s, submitted: 55
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 165978112 unmapped: 67452928 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 393 ms_handle_reset con 0x556dcf071800 session 0x556dcd21ed20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 393 heartbeat osd_stat(store_statfs(0x4f5ab1000/0x0/0x4ffc00000, data 0x401499a/0x407c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 165978112 unmapped: 67452928 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 393 ms_handle_reset con 0x556dcd37a400 session 0x556dcc747860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 393 ms_handle_reset con 0x556dccb29000 session 0x556dcd68a3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 393 ms_handle_reset con 0x556dcacd4400 session 0x556dca095a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 393 ms_handle_reset con 0x556dcc680000 session 0x556dccb7b2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 166363136 unmapped: 67067904 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3167289 data_alloc: 234881024 data_used: 12349440
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 166363136 unmapped: 67067904 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 166363136 unmapped: 67067904 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 393 heartbeat osd_stat(store_statfs(0x4f5a87000/0x0/0x4ffc00000, data 0x3e5a548/0x407c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 393 ms_handle_reset con 0x556dccbe5000 session 0x556dccf4b860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 393 ms_handle_reset con 0x556dcacd4400 session 0x556dcabe2f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 181059584 unmapped: 52371456 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 182747136 unmapped: 50683904 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 175169536 unmapped: 58261504 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3146213 data_alloc: 234881024 data_used: 14213120
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176521216 unmapped: 56909824 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 394 ms_handle_reset con 0x556dcc680000 session 0x556dcd6f03c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 394 heartbeat osd_stat(store_statfs(0x4f5a03000/0x0/0x4ffc00000, data 0x3f09107/0x412a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 173514752 unmapped: 59916288 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 173514752 unmapped: 59916288 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 394 handle_osd_map epochs [394,395], i have 394, src has [1,395]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.695365906s of 10.440656662s, submitted: 164
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 395 heartbeat osd_stat(store_statfs(0x4f5a03000/0x0/0x4ffc00000, data 0x3f09107/0x412a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 173514752 unmapped: 59916288 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 395 heartbeat osd_stat(store_statfs(0x4f59ff000/0x0/0x4ffc00000, data 0x3f0ab86/0x412d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 173514752 unmapped: 59916288 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3191208 data_alloc: 234881024 data_used: 14221312
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 173514752 unmapped: 59916288 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 395 ms_handle_reset con 0x556dccb24c00 session 0x556dcd21fc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 395 ms_handle_reset con 0x556dccb29000 session 0x556dcd68a000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 173522944 unmapped: 59908096 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 395 heartbeat osd_stat(store_statfs(0x4f5a01000/0x0/0x4ffc00000, data 0x3f0ab86/0x412d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 395 ms_handle_reset con 0x556dcd37a400 session 0x556dcd6f10e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 173522944 unmapped: 59908096 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 395 ms_handle_reset con 0x556dcd37a400 session 0x556dcd1a32c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 173015040 unmapped: 60416000 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 396 ms_handle_reset con 0x556dcacd4400 session 0x556dcd3fc960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 173400064 unmapped: 60030976 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3270978 data_alloc: 234881024 data_used: 15241216
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 173432832 unmapped: 59998208 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 396 ms_handle_reset con 0x556dcc680000 session 0x556dcd366d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 172294144 unmapped: 61136896 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 396 ms_handle_reset con 0x556dccb24c00 session 0x556dcc7461e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 396 ms_handle_reset con 0x556dccb29000 session 0x556dccb7be00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 396 ms_handle_reset con 0x556dccb29000 session 0x556dccf4e5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 172793856 unmapped: 60637184 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 396 heartbeat osd_stat(store_statfs(0x4f4b6b000/0x0/0x4ffc00000, data 0x4d9f5e9/0x4fc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 172793856 unmapped: 60637184 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 172793856 unmapped: 60637184 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3326363 data_alloc: 234881024 data_used: 15249408
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 172793856 unmapped: 60637184 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 396 heartbeat osd_stat(store_statfs(0x4f4b6b000/0x0/0x4ffc00000, data 0x4d9f5e9/0x4fc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 172793856 unmapped: 60637184 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 396 heartbeat osd_stat(store_statfs(0x4f4b6b000/0x0/0x4ffc00000, data 0x4d9f5e9/0x4fc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 172793856 unmapped: 60637184 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 172793856 unmapped: 60637184 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 172793856 unmapped: 60637184 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3326363 data_alloc: 234881024 data_used: 15249408
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 396 heartbeat osd_stat(store_statfs(0x4f4b6b000/0x0/0x4ffc00000, data 0x4d9f5e9/0x4fc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 172793856 unmapped: 60637184 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 172793856 unmapped: 60637184 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 172810240 unmapped: 60620800 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 172810240 unmapped: 60620800 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 172810240 unmapped: 60620800 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3326363 data_alloc: 234881024 data_used: 15249408
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 172810240 unmapped: 60620800 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 396 heartbeat osd_stat(store_statfs(0x4f4b6b000/0x0/0x4ffc00000, data 0x4d9f5e9/0x4fc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 396 ms_handle_reset con 0x556dcacd4400 session 0x556dcbab0f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 172810240 unmapped: 60620800 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 396 ms_handle_reset con 0x556dcc680000 session 0x556dccb7b0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 396 ms_handle_reset con 0x556dccb24c00 session 0x556dcc10c780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 172810240 unmapped: 60620800 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.136558533s of 24.747951508s, submitted: 131
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 396 ms_handle_reset con 0x556dcd37a400 session 0x556dcd6f0960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 172826624 unmapped: 60604416 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 396 heartbeat osd_stat(store_statfs(0x4f4b69000/0x0/0x4ffc00000, data 0x4d9f61c/0x4fc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 172826624 unmapped: 60604416 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 396 heartbeat osd_stat(store_statfs(0x4f4b69000/0x0/0x4ffc00000, data 0x4d9f61c/0x4fc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3329134 data_alloc: 234881024 data_used: 15253504
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 172965888 unmapped: 60465152 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177168384 unmapped: 56262656 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 396 heartbeat osd_stat(store_statfs(0x4f4b69000/0x0/0x4ffc00000, data 0x4d9f61c/0x4fc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177201152 unmapped: 56229888 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 396 ms_handle_reset con 0x556dcc680000 session 0x556dcc8b52c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 396 ms_handle_reset con 0x556dccb24c00 session 0x556dcd7670e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177225728 unmapped: 56205312 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177225728 unmapped: 56205312 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3380487 data_alloc: 234881024 data_used: 21667840
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 396 heartbeat osd_stat(store_statfs(0x4f4b68000/0x0/0x4ffc00000, data 0x4d9f67e/0x4fc6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177225728 unmapped: 56205312 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177283072 unmapped: 56147968 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177324032 unmapped: 56107008 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.771056175s of 10.829683304s, submitted: 17
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178470912 unmapped: 54960128 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 396 heartbeat osd_stat(store_statfs(0x4f4b68000/0x0/0x4ffc00000, data 0x4d9f67e/0x4fc6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178470912 unmapped: 54960128 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388221 data_alloc: 234881024 data_used: 21782528
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178470912 unmapped: 54960128 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 397 ms_handle_reset con 0x556dcc70b000 session 0x556dcb57dc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178503680 unmapped: 54927360 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178880512 unmapped: 54550528 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 397 handle_osd_map epochs [398,398], i have 398, src has [1,398]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 398 ms_handle_reset con 0x556dccbee000 session 0x556dcacd7860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 182730752 unmapped: 50700288 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 398 heartbeat osd_stat(store_statfs(0x4f42e3000/0x0/0x4ffc00000, data 0x5619dda/0x5844000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 183787520 unmapped: 49643520 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dccb27c00 session 0x556dcc747a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3486185 data_alloc: 234881024 data_used: 22593536
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 183894016 unmapped: 49537024 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dcc681c00 session 0x556dcd1a23c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 183894016 unmapped: 49537024 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f4281000/0x0/0x4ffc00000, data 0x56719b9/0x589c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 183894016 unmapped: 49537024 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dcc680000 session 0x556dcd6f03c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dccb27c00 session 0x556dcad63a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 183894016 unmapped: 49537024 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.079855919s of 10.840197563s, submitted: 151
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 184041472 unmapped: 49389568 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3488157 data_alloc: 234881024 data_used: 23687168
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 184549376 unmapped: 48881664 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 184647680 unmapped: 48783360 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f428f000/0x0/0x4ffc00000, data 0x56749b9/0x589f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 184655872 unmapped: 48775168 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dcd37a400 session 0x556dcae0fe00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dcacd4400 session 0x556dcd8374a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dcacd4400 session 0x556dcd6f10e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 185442304 unmapped: 47988736 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dcc70b000 session 0x556dd0446f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 185442304 unmapped: 47988736 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3507410 data_alloc: 234881024 data_used: 24145920
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 186081280 unmapped: 47349760 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dcc681c00 session 0x556dcb50c780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dcc680000 session 0x556dca8a1c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f41fd000/0x0/0x4ffc00000, data 0x5708986/0x5931000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 186081280 unmapped: 47349760 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 186081280 unmapped: 47349760 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 186081280 unmapped: 47349760 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 186081280 unmapped: 47349760 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3501106 data_alloc: 234881024 data_used: 24150016
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 186081280 unmapped: 47349760 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 186081280 unmapped: 47349760 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f41fd000/0x0/0x4ffc00000, data 0x5708986/0x5931000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dccb27c00 session 0x556dccb7a780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 186081280 unmapped: 47349760 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f41fd000/0x0/0x4ffc00000, data 0x5708986/0x5931000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.984109879s of 14.149359703s, submitted: 66
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dcc680000 session 0x556dcacca1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dcacd4400 session 0x556dcc746f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dcc681c00 session 0x556dcc7472c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 187277312 unmapped: 46153728 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dcc70b000 session 0x556dd0447680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 187277312 unmapped: 46153728 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dcd37a400 session 0x556dcd8363c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505306 data_alloc: 234881024 data_used: 24154112
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dcacd4400 session 0x556dcc747e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.1 total, 600.0 interval
Cumulative writes: 24K writes, 97K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s
Cumulative WAL: 24K writes, 8490 syncs, 2.83 writes per sync, written: 0.06 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 11K writes, 49K keys, 11K commit groups, 1.0 writes per commit group, ingest: 31.12 MB, 0.05 MB/s
Interval WAL: 11K writes, 4809 syncs, 2.40 writes per sync, written: 0.03 GB, 0.05 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f41fd000/0x0/0x4ffc00000, data 0x5708986/0x5931000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 187334656 unmapped: 46096384 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 187383808 unmapped: 46047232 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 187465728 unmapped: 45965312 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 188186624 unmapped: 45244416 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dcc680000 session 0x556dccf4b680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 188219392 unmapped: 45211648 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f4166000/0x0/0x4ffc00000, data 0x579d9a9/0x59c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3518949 data_alloc: 234881024 data_used: 24399872
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 188284928 unmapped: 45146112 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dccb24c00 session 0x556dcd70de00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dccbee000 session 0x556dccf4b680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 187957248 unmapped: 45473792 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dcb506400 session 0x556dcae0fe00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dccb29000 session 0x556dd0447c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dcb3d0000 session 0x556dcd3fc780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 187858944 unmapped: 45572096 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dcacd4400 session 0x556dccb7b0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dcb506400 session 0x556dcad69680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 187883520 unmapped: 45547520 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dcc680000 session 0x556dce31eb40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.723754883s of 10.980282784s, submitted: 82
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dcc680000 session 0x556dcd3fcf00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 187891712 unmapped: 45539328 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3508971 data_alloc: 234881024 data_used: 26091520
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 ms_handle_reset con 0x556dcacd4400 session 0x556dcae10780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 187891712 unmapped: 45539328 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f4290000/0x0/0x4ffc00000, data 0x5675947/0x589e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f4290000/0x0/0x4ffc00000, data 0x5675947/0x589e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 400 ms_handle_reset con 0x556dcb3d0000 session 0x556dcc1232c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 187891712 unmapped: 45539328 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 400 heartbeat osd_stat(store_statfs(0x4f428c000/0x0/0x4ffc00000, data 0x5677518/0x58a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 400 heartbeat osd_stat(store_statfs(0x4f428c000/0x0/0x4ffc00000, data 0x5677518/0x58a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 187891712 unmapped: 45539328 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 400 ms_handle_reset con 0x556dcb506400 session 0x556dcd766b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 401 ms_handle_reset con 0x556dccb24c00 session 0x556dcd68ab40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 401 ms_handle_reset con 0x556dccb29000 session 0x556dcd21fa40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 188022784 unmapped: 45408256 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 401 heartbeat osd_stat(store_statfs(0x4f4289000/0x0/0x4ffc00000, data 0x5679087/0x58a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 401 ms_handle_reset con 0x556dcacd4400 session 0x556dccb7a960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 401 ms_handle_reset con 0x556dcb3d0000 session 0x556dcd249860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 402 ms_handle_reset con 0x556dcb506400 session 0x556dcb682f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 183787520 unmapped: 49643520 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3162887 data_alloc: 234881024 data_used: 20639744
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 183787520 unmapped: 49643520 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f6b11000/0x0/0x4ffc00000, data 0x2771c48/0x299e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,0,1])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 402 ms_handle_reset con 0x556dcc680000 session 0x556dcb5265a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 402 ms_handle_reset con 0x556dcacd4400 session 0x556dcd847860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 183795712 unmapped: 49635328 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 402 ms_handle_reset con 0x556dccbe6400 session 0x556dcd8470e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 402 ms_handle_reset con 0x556dcd333000 session 0x556dcd7663c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178618368 unmapped: 54812672 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 402 ms_handle_reset con 0x556dcb3d0000 session 0x556dcad670e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178618368 unmapped: 54812672 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178618368 unmapped: 54812672 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f824a000/0x0/0x4ffc00000, data 0x16b8be6/0x18e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2954561 data_alloc: 234881024 data_used: 11075584
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 402 ms_handle_reset con 0x556dcb506400 session 0x556dcae0fa40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178618368 unmapped: 54812672 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 402 ms_handle_reset con 0x556dcacd4400 session 0x556dce31f2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178618368 unmapped: 54812672 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f824a000/0x0/0x4ffc00000, data 0x16b8be6/0x18e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178618368 unmapped: 54812672 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.724150658s of 13.479183197s, submitted: 180
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178667520 unmapped: 54763520 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f8246000/0x0/0x4ffc00000, data 0x16ba649/0x18e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178667520 unmapped: 54763520 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2958735 data_alloc: 234881024 data_used: 11083776
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178667520 unmapped: 54763520 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178667520 unmapped: 54763520 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178667520 unmapped: 54763520 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178675712 unmapped: 54755328 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 403 ms_handle_reset con 0x556dcb3d0000 session 0x556dce31e5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f8245000/0x0/0x4ffc00000, data 0x16ba6bc/0x18e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178675712 unmapped: 54755328 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2962785 data_alloc: 234881024 data_used: 11087872
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178675712 unmapped: 54755328 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178675712 unmapped: 54755328 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178675712 unmapped: 54755328 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178675712 unmapped: 54755328 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.499935150s of 11.551033974s, submitted: 23
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f8245000/0x0/0x4ffc00000, data 0x16ba6bc/0x18e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177848320 unmapped: 55582720 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f8245000/0x0/0x4ffc00000, data 0x16ba6bc/0x18e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2964257 data_alloc: 234881024 data_used: 11071488
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177848320 unmapped: 55582720 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177848320 unmapped: 55582720 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177848320 unmapped: 55582720 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 ms_handle_reset con 0x556dccbe6400 session 0x556dcf2f6000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177840128 unmapped: 55590912 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 ms_handle_reset con 0x556dccb29000 session 0x556dcd68ad20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 ms_handle_reset con 0x556dcd333000 session 0x556dcd846d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177840128 unmapped: 55590912 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2977917 data_alloc: 234881024 data_used: 12550144
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f823f000/0x0/0x4ffc00000, data 0x16bc2fd/0x18ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177840128 unmapped: 55590912 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 ms_handle_reset con 0x556dcd333000 session 0x556dcd35fe00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177856512 unmapped: 55574528 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 ms_handle_reset con 0x556dcacd4400 session 0x556dcc6392c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 182337536 unmapped: 51093504 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 ms_handle_reset con 0x556dcb3d0000 session 0x556dcf2f6960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 ms_handle_reset con 0x556dccb29000 session 0x556dcd6f0d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177209344 unmapped: 56221696 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177209344 unmapped: 56221696 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f6bf9000/0x0/0x4ffc00000, data 0x2d032fc/0x2f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134026 data_alloc: 234881024 data_used: 12546048
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177209344 unmapped: 56221696 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177209344 unmapped: 56221696 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177209344 unmapped: 56221696 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f6bf9000/0x0/0x4ffc00000, data 0x2d032fc/0x2f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177209344 unmapped: 56221696 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 ms_handle_reset con 0x556dccbe6400 session 0x556dcd8370e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177209344 unmapped: 56221696 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134026 data_alloc: 234881024 data_used: 12546048
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 ms_handle_reset con 0x556dccbe6400 session 0x556dcd836000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177209344 unmapped: 56221696 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 ms_handle_reset con 0x556dcacd4400 session 0x556dcac45e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.140062332s of 16.684583664s, submitted: 86
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 ms_handle_reset con 0x556dcb3d0000 session 0x556dcac450e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177217536 unmapped: 56213504 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177233920 unmapped: 56197120 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f6bf8000/0x0/0x4ffc00000, data 0x2d0330b/0x2f36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177233920 unmapped: 56197120 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f6bf8000/0x0/0x4ffc00000, data 0x2d0330b/0x2f36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177233920 unmapped: 56197120 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 ms_handle_reset con 0x556dccb24c00 session 0x556dcd3fd860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3138301 data_alloc: 234881024 data_used: 12587008
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177233920 unmapped: 56197120 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177233920 unmapped: 56197120 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177233920 unmapped: 56197120 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f6bf7000/0x0/0x4ffc00000, data 0x2d0332e/0x2f37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177233920 unmapped: 56197120 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177250304 unmapped: 56180736 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f6bf7000/0x0/0x4ffc00000, data 0x2d0332e/0x2f37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3168381 data_alloc: 234881024 data_used: 16924672
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177250304 unmapped: 56180736 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f6bf7000/0x0/0x4ffc00000, data 0x2d0332e/0x2f37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177250304 unmapped: 56180736 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177250304 unmapped: 56180736 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177250304 unmapped: 56180736 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f6bf7000/0x0/0x4ffc00000, data 0x2d0332e/0x2f37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.092064857s of 13.109714508s, submitted: 5
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177717248 unmapped: 55713792 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3177700 data_alloc: 234881024 data_used: 18132992
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f6bf7000/0x0/0x4ffc00000, data 0x2d0332e/0x2f37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177717248 unmapped: 55713792 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177717248 unmapped: 55713792 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177717248 unmapped: 55713792 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177717248 unmapped: 55713792 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177717248 unmapped: 55713792 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3180652 data_alloc: 234881024 data_used: 18210816
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f6bf7000/0x0/0x4ffc00000, data 0x2d0332e/0x2f37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177717248 unmapped: 55713792 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179707904 unmapped: 53723136 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179798016 unmapped: 53633024 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f57cf000/0x0/0x4ffc00000, data 0x3d5f32e/0x3f4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179798016 unmapped: 53633024 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179798016 unmapped: 53633024 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3314256 data_alloc: 234881024 data_used: 18690048
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f57cf000/0x0/0x4ffc00000, data 0x3d5f32e/0x3f4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179798016 unmapped: 53633024 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179798016 unmapped: 53633024 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179798016 unmapped: 53633024 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179806208 unmapped: 53624832 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f57cf000/0x0/0x4ffc00000, data 0x3d5f32e/0x3f4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.001739502s of 15.373538971s, submitted: 68
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179806208 unmapped: 53624832 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f57cf000/0x0/0x4ffc00000, data 0x3d5f32e/0x3f4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3314288 data_alloc: 234881024 data_used: 18685952
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 ms_handle_reset con 0x556dccbee000 session 0x556dcc899c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 ms_handle_reset con 0x556dccbee000 session 0x556dccb7a780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179814400 unmapped: 53616640 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179814400 unmapped: 53616640 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179814400 unmapped: 53616640 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f57cf000/0x0/0x4ffc00000, data 0x3d5f30b/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179814400 unmapped: 53616640 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179814400 unmapped: 53616640 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3309248 data_alloc: 234881024 data_used: 18690048
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f57cf000/0x0/0x4ffc00000, data 0x3d5f30b/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179814400 unmapped: 53616640 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179814400 unmapped: 53616640 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179814400 unmapped: 53616640 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179814400 unmapped: 53616640 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f57cf000/0x0/0x4ffc00000, data 0x3d5f30b/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179814400 unmapped: 53616640 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3309248 data_alloc: 234881024 data_used: 18690048
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.471634865s of 11.532032967s, submitted: 16
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179838976 unmapped: 53592064 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f57d0000/0x0/0x4ffc00000, data 0x3d5f30b/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179863552 unmapped: 53567488 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f57d0000/0x0/0x4ffc00000, data 0x3d5f30b/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179888128 unmapped: 53542912 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 ms_handle_reset con 0x556dccb29000 session 0x556dcd26cf00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 ms_handle_reset con 0x556dcd333000 session 0x556dcbab1e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 ms_handle_reset con 0x556dcacd4400 session 0x556dcae0e000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179888128 unmapped: 53542912 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179888128 unmapped: 53542912 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3308275 data_alloc: 234881024 data_used: 18690048
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 ms_handle_reset con 0x556dcb3d0000 session 0x556dccf4f0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179896320 unmapped: 53534720 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f57d1000/0x0/0x4ffc00000, data 0x3d5f2fc/0x3f4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 ms_handle_reset con 0x556dcb3d0000 session 0x556dcba323c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179904512 unmapped: 53526528 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 ms_handle_reset con 0x556dcacd4400 session 0x556dca0945a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 ms_handle_reset con 0x556dccb29000 session 0x556dcd846f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 ms_handle_reset con 0x556dccbee000 session 0x556dcb5263c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 ms_handle_reset con 0x556dcd333000 session 0x556dcd6f14a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 ms_handle_reset con 0x556dcacd4400 session 0x556dcd21f0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 180224000 unmapped: 53207040 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 ms_handle_reset con 0x556dcb3d0000 session 0x556dcd8372c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 180232192 unmapped: 53198848 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 405 heartbeat osd_stat(store_statfs(0x4f57ad000/0x0/0x4ffc00000, data 0x3d3ee09/0x3f70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 405 ms_handle_reset con 0x556dcc681c00 session 0x556dca904780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 405 ms_handle_reset con 0x556dcc70b000 session 0x556dcabe25a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 180232192 unmapped: 53198848 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3312734 data_alloc: 234881024 data_used: 18690048
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 405 ms_handle_reset con 0x556dccb24c00 session 0x556dcd6f0d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 180256768 unmapped: 53174272 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 180256768 unmapped: 53174272 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 180256768 unmapped: 53174272 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 180256768 unmapped: 53174272 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 405 ms_handle_reset con 0x556dccb24c00 session 0x556dcd70da40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 405 heartbeat osd_stat(store_statfs(0x4f460d000/0x0/0x4ffc00000, data 0x3d3ede6/0x3f6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.172468185s of 13.682212830s, submitted: 157
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 405 ms_handle_reset con 0x556dcacd4400 session 0x556dcad621e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176758784 unmapped: 56672256 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099118 data_alloc: 218103808 data_used: 6565888
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176758784 unmapped: 56672256 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176758784 unmapped: 56672256 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 405 heartbeat osd_stat(store_statfs(0x4f5602000/0x0/0x4ffc00000, data 0x2d4bde6/0x2f7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176758784 unmapped: 56672256 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 406 heartbeat osd_stat(store_statfs(0x4f5602000/0x0/0x4ffc00000, data 0x2d4bde6/0x2f7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176758784 unmapped: 56672256 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176758784 unmapped: 56672256 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3103292 data_alloc: 218103808 data_used: 6574080
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176758784 unmapped: 56672256 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176758784 unmapped: 56672256 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 406 heartbeat osd_stat(store_statfs(0x4f55fe000/0x0/0x4ffc00000, data 0x2d4d849/0x2f7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176758784 unmapped: 56672256 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176701440 unmapped: 56729600 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176701440 unmapped: 56729600 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3108804 data_alloc: 218103808 data_used: 6713344
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 406 heartbeat osd_stat(store_statfs(0x4f55fe000/0x0/0x4ffc00000, data 0x2d4d849/0x2f7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176701440 unmapped: 56729600 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176701440 unmapped: 56729600 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176701440 unmapped: 56729600 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176701440 unmapped: 56729600 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 406 heartbeat osd_stat(store_statfs(0x4f55fe000/0x0/0x4ffc00000, data 0x2d4d849/0x2f7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176701440 unmapped: 56729600 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.544714928s of 15.652226448s, submitted: 25
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 406 ms_handle_reset con 0x556dcb3d0000 session 0x556dd04472c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3107924 data_alloc: 218103808 data_used: 6713344
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176701440 unmapped: 56729600 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176701440 unmapped: 56729600 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 406 ms_handle_reset con 0x556dcc70b000 session 0x556dccb7a960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176701440 unmapped: 56729600 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 406 handle_osd_map epochs [406,407], i have 406, src has [1,407]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 407 ms_handle_reset con 0x556dcdeea000 session 0x556dcd3fdc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 407 heartbeat osd_stat(store_statfs(0x4f55fe000/0x0/0x4ffc00000, data 0x2d4d8ab/0x2f80000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176701440 unmapped: 56729600 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 407 heartbeat osd_stat(store_statfs(0x4f55f9000/0x0/0x4ffc00000, data 0x2d4f48a/0x2f84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 408 ms_handle_reset con 0x556dccbe6400 session 0x556dcd21fa40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 408 ms_handle_reset con 0x556dcc681c00 session 0x556dcba321e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176709632 unmapped: 56721408 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3121149 data_alloc: 218103808 data_used: 6721536
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 408 heartbeat osd_stat(store_statfs(0x4f55f5000/0x0/0x4ffc00000, data 0x2d51007/0x2f87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 409 ms_handle_reset con 0x556dcdeea000 session 0x556dcd70cf00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176726016 unmapped: 56705024 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 409 ms_handle_reset con 0x556dcacd4400 session 0x556dcae11a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176726016 unmapped: 56705024 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176726016 unmapped: 56705024 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 409 ms_handle_reset con 0x556dcc70b000 session 0x556dce31eb40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 409 heartbeat osd_stat(store_statfs(0x4f55f3000/0x0/0x4ffc00000, data 0x2d52bd8/0x2f8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176734208 unmapped: 56696832 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 410 ms_handle_reset con 0x556dcacd4400 session 0x556dcad69680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176750592 unmapped: 56680448 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.787930489s of 10.021409035s, submitted: 67
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 411 ms_handle_reset con 0x556dcc681c00 session 0x556dcaccab40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3135985 data_alloc: 218103808 data_used: 6733824
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 411 ms_handle_reset con 0x556dccbe6400 session 0x556dcac44f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 411 ms_handle_reset con 0x556dcb3d0000 session 0x556dccf4fc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176750592 unmapped: 56680448 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176750592 unmapped: 56680448 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 412 ms_handle_reset con 0x556dcdeea000 session 0x556dcd1a3a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 412 ms_handle_reset con 0x556dcdeea000 session 0x556dcd6f05a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 412 heartbeat osd_stat(store_statfs(0x4f55e9000/0x0/0x4ffc00000, data 0x2d5686e/0x2f93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176766976 unmapped: 56664064 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 413 ms_handle_reset con 0x556dcacd4400 session 0x556dcd68a3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176766976 unmapped: 56664064 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176766976 unmapped: 56664064 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3141452 data_alloc: 218103808 data_used: 6733824
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 413 heartbeat osd_stat(store_statfs(0x4f55e5000/0x0/0x4ffc00000, data 0x2d599cc/0x2f97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 413 ms_handle_reset con 0x556dccb29000 session 0x556dcd70cb40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 413 ms_handle_reset con 0x556dccbee000 session 0x556dcd6f12c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 413 ms_handle_reset con 0x556dcc681c00 session 0x556dcd68a000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176775168 unmapped: 56655872 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 414 ms_handle_reset con 0x556dcacd4400 session 0x556dccf4b860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 414 ms_handle_reset con 0x556dccb29000 session 0x556dcd70da40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176791552 unmapped: 56639488 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 415 ms_handle_reset con 0x556dccbee000 session 0x556dccb7a780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 415 ms_handle_reset con 0x556dcdeea000 session 0x556dcd3fd860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 415 ms_handle_reset con 0x556dcb3d0000 session 0x556dcc6ea960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176791552 unmapped: 56639488 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 415 ms_handle_reset con 0x556dccb29000 session 0x556dcac45e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176791552 unmapped: 56639488 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f55fe000/0x0/0x4ffc00000, data 0x2d39788/0x2f7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 416 ms_handle_reset con 0x556dcdeea000 session 0x556dd0446b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176807936 unmapped: 56623104 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.896702766s of 10.117324829s, submitted: 103
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3159950 data_alloc: 218103808 data_used: 6639616
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 417 ms_handle_reset con 0x556dccbee000 session 0x556dcd8370e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 417 ms_handle_reset con 0x556dcacd4400 session 0x556dcac450e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 417 ms_handle_reset con 0x556dccbe6400 session 0x556dcd8461e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 417 ms_handle_reset con 0x556dccbe6400 session 0x556dd0446780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 174940160 unmapped: 58490880 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 174940160 unmapped: 58490880 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 418 ms_handle_reset con 0x556dccb29000 session 0x556dcd249860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 419 ms_handle_reset con 0x556dccbee000 session 0x556dcd1a30e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 419 ms_handle_reset con 0x556dcacd4400 session 0x556dcd35fe00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 174940160 unmapped: 58490880 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 174940160 unmapped: 58490880 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 419 heartbeat osd_stat(store_statfs(0x4f7b11000/0x0/0x4ffc00000, data 0x6e365a/0x92e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 419 ms_handle_reset con 0x556dccb24c00 session 0x556dccf4a960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 419 heartbeat osd_stat(store_statfs(0x4f7b11000/0x0/0x4ffc00000, data 0x6e365a/0x92e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 174940160 unmapped: 58490880 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2868497 data_alloc: 218103808 data_used: 1703936
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 420 ms_handle_reset con 0x556dcacd4400 session 0x556dccf4c5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 174981120 unmapped: 58449920 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 421 ms_handle_reset con 0x556dccb29000 session 0x556dcd70c960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 421 ms_handle_reset con 0x556dcdeea000 session 0x556dccf4f0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 174989312 unmapped: 58441728 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 174997504 unmapped: 58433536 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 422 handle_osd_map epochs [422,422], i have 422, src has [1,422]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 422 ms_handle_reset con 0x556dccbe6400 session 0x556dce31fc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 175005696 unmapped: 58425344 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f7c44000/0x0/0x4ffc00000, data 0x6e8987/0x938000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 175013888 unmapped: 58417152 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2875826 data_alloc: 218103808 data_used: 1708032
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.916275978s of 10.156276703s, submitted: 87
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 423 ms_handle_reset con 0x556dccbee000 session 0x556dcb50de00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 175013888 unmapped: 58417152 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 424 ms_handle_reset con 0x556dccbee000 session 0x556dcd7672c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 175038464 unmapped: 58392576 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 425 ms_handle_reset con 0x556dcacd4400 session 0x556dcc7474a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 425 ms_handle_reset con 0x556dccb29000 session 0x556dce31ed20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 175013888 unmapped: 58417152 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 175013888 unmapped: 58417152 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 425 heartbeat osd_stat(store_statfs(0x4f7c43000/0x0/0x4ffc00000, data 0x6ed60e/0x93a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 175013888 unmapped: 58417152 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2884044 data_alloc: 218103808 data_used: 1720320
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 175013888 unmapped: 58417152 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 428 heartbeat osd_stat(store_statfs(0x4f7c3e000/0x0/0x4ffc00000, data 0x6f0d86/0x93f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 175022080 unmapped: 58408960 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 428 handle_osd_map epochs [429,430], i have 428, src has [1,430]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 430 ms_handle_reset con 0x556dccbe6400 session 0x556dcd70d860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 175030272 unmapped: 58400768 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 430 ms_handle_reset con 0x556dcdeea000 session 0x556dcc638b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 175030272 unmapped: 58400768 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 175038464 unmapped: 58392576 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2893415 data_alloc: 218103808 data_used: 1724416
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 430 heartbeat osd_stat(store_statfs(0x4f7c38000/0x0/0x4ffc00000, data 0x6f5f37/0x946000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.371139526s of 10.427136421s, submitted: 166
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 431 ms_handle_reset con 0x556dcacd4400 session 0x556dcacd72c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 431 ms_handle_reset con 0x556dccb29000 session 0x556dce31fc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 175054848 unmapped: 58376192 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 431 heartbeat osd_stat(store_statfs(0x4f7c35000/0x0/0x4ffc00000, data 0x6f7ac2/0x948000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 175054848 unmapped: 58376192 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 432 ms_handle_reset con 0x556dccbe6400 session 0x556dcc6ea960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 432 ms_handle_reset con 0x556dccbee000 session 0x556dccb7a780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 175054848 unmapped: 58376192 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 175054848 unmapped: 58376192 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 175054848 unmapped: 58376192 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2898090 data_alloc: 218103808 data_used: 1728512
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 175054848 unmapped: 58376192 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 432 heartbeat osd_stat(store_statfs(0x4f7c33000/0x0/0x4ffc00000, data 0x6f96b1/0x94a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 175054848 unmapped: 58376192 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 175054848 unmapped: 58376192 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 433 ms_handle_reset con 0x556dccbf1400 session 0x556dccf4b860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 175063040 unmapped: 58368000 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 175063040 unmapped: 58368000 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2903556 data_alloc: 218103808 data_used: 1728512
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.989645958s of 10.298195839s, submitted: 93
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 434 ms_handle_reset con 0x556dccbf1400 session 0x556dcd70cb40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 175063040 unmapped: 58368000 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 434 ms_handle_reset con 0x556dcacd4400 session 0x556dd04474a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 434 heartbeat osd_stat(store_statfs(0x4f7c2b000/0x0/0x4ffc00000, data 0x6fcd5b/0x952000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 434 ms_handle_reset con 0x556dccb29000 session 0x556dcd3fdc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176111616 unmapped: 57319424 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 434 ms_handle_reset con 0x556dccbe6400 session 0x556dcd3fc960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 434 ms_handle_reset con 0x556dccbee000 session 0x556dcd3fc000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 434 ms_handle_reset con 0x556dccbee000 session 0x556dcd3fda40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 434 handle_osd_map epochs [434,435], i have 434, src has [1,435]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176119808 unmapped: 57311232 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 435 ms_handle_reset con 0x556dcacd4400 session 0x556dcd3fd4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 435 ms_handle_reset con 0x556dccb29000 session 0x556dcac45c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176119808 unmapped: 57311232 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 435 ms_handle_reset con 0x556dccbe6400 session 0x556dcd70de00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 435 heartbeat osd_stat(store_statfs(0x4f7c24000/0x0/0x4ffc00000, data 0x6fe8e4/0x958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 435 ms_handle_reset con 0x556dccbf1400 session 0x556dcd70c000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176119808 unmapped: 57311232 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2916836 data_alloc: 218103808 data_used: 1736704
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 435 ms_handle_reset con 0x556dccbf1400 session 0x556dca8a0000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176119808 unmapped: 57311232 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 435 ms_handle_reset con 0x556dcacd4400 session 0x556dca8a1c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 435 ms_handle_reset con 0x556dccb29000 session 0x556dcaca81e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176119808 unmapped: 57311232 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 435 ms_handle_reset con 0x556dccbe6400 session 0x556dca904960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176128000 unmapped: 57303040 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 435 ms_handle_reset con 0x556dccbee000 session 0x556dcd35eb40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 435 ms_handle_reset con 0x556dcacd4400 session 0x556dcd846f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176152576 unmapped: 57278464 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 435 ms_handle_reset con 0x556dccb29000 session 0x556dcd836f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 435 heartbeat osd_stat(store_statfs(0x4f7c29000/0x0/0x4ffc00000, data 0x6fe7be/0x955000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176160768 unmapped: 57270272 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2914542 data_alloc: 218103808 data_used: 1740800
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 435 handle_osd_map epochs [435,436], i have 435, src has [1,436]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 436 ms_handle_reset con 0x556dccbe6400 session 0x556dcae105a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.772053719s of 10.128597260s, submitted: 94
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 436 ms_handle_reset con 0x556dccbf1400 session 0x556dcac445a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176177152 unmapped: 57253888 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 436 ms_handle_reset con 0x556dce83dc00 session 0x556dcf2f63c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176185344 unmapped: 57245696 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176185344 unmapped: 57245696 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176193536 unmapped: 57237504 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f7c27000/0x0/0x4ffc00000, data 0x70031d/0x956000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 436 ms_handle_reset con 0x556dcacd4400 session 0x556dcc747a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176193536 unmapped: 57237504 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2917021 data_alloc: 218103808 data_used: 1744896
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f7c27000/0x0/0x4ffc00000, data 0x70031d/0x956000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176193536 unmapped: 57237504 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f7c27000/0x0/0x4ffc00000, data 0x70031d/0x956000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 436 ms_handle_reset con 0x556dccb29000 session 0x556dcacd61e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 176193536 unmapped: 57237504 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 436 handle_osd_map epochs [436,437], i have 436, src has [1,437]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 437 ms_handle_reset con 0x556dccbe6400 session 0x556dcacd6b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177258496 unmapped: 56172544 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 438 handle_osd_map epochs [439,439], i have 438, src has [1,439]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 439 ms_handle_reset con 0x556dccbf1400 session 0x556dcd766960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 56164352 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 439 ms_handle_reset con 0x556dccb24400 session 0x556dcf2f7a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 439 ms_handle_reset con 0x556dcacd4400 session 0x556dcf2f72c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 56164352 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2933345 data_alloc: 218103808 data_used: 1761280
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 439 ms_handle_reset con 0x556dccb29000 session 0x556dcad62d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 56164352 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 439 heartbeat osd_stat(store_statfs(0x4f7c1b000/0x0/0x4ffc00000, data 0x705569/0x962000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 56164352 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 439 handle_osd_map epochs [440,440], i have 439, src has [1,440]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.710509300s of 11.900293350s, submitted: 70
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 440 ms_handle_reset con 0x556dccbe6400 session 0x556dcc6385a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 56164352 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 440 heartbeat osd_stat(store_statfs(0x4f7c1b000/0x0/0x4ffc00000, data 0x705569/0x962000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 440 ms_handle_reset con 0x556dccbf1400 session 0x556dcab66f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 56164352 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 440 ms_handle_reset con 0x556dcd333400 session 0x556dcd6f1680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 56164352 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2935967 data_alloc: 218103808 data_used: 1761280
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 440 handle_osd_map epochs [440,441], i have 440, src has [1,441]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 441 ms_handle_reset con 0x556dcacd4400 session 0x556dcad68d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177274880 unmapped: 56156160 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 441 ms_handle_reset con 0x556dccb29000 session 0x556dcc8b4780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 441 ms_handle_reset con 0x556dccbe6400 session 0x556dcabe30e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177274880 unmapped: 56156160 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 441 heartbeat osd_stat(store_statfs(0x4f7c15000/0x0/0x4ffc00000, data 0x708cb7/0x968000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 441 ms_handle_reset con 0x556dccbf1400 session 0x556dd0446d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 442 ms_handle_reset con 0x556dccbe7c00 session 0x556dcae101e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177283072 unmapped: 56147968 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 442 ms_handle_reset con 0x556dccbe7c00 session 0x556dcd8470e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 442 ms_handle_reset con 0x556dcacd4400 session 0x556dcd68ba40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177471488 unmapped: 55959552 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 442 ms_handle_reset con 0x556dccb29000 session 0x556dcd1a2960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177479680 unmapped: 55951360 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 442 heartbeat osd_stat(store_statfs(0x4f73b3000/0x0/0x4ffc00000, data 0xf6a826/0x11ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3005795 data_alloc: 218103808 data_used: 1773568
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177479680 unmapped: 55951360 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 442 heartbeat osd_stat(store_statfs(0x4f73b4000/0x0/0x4ffc00000, data 0xf6a7c4/0x11c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177479680 unmapped: 55951360 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177479680 unmapped: 55951360 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 442 handle_osd_map epochs [443,443], i have 442, src has [1,443]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.085521698s of 10.387833595s, submitted: 76
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 443 ms_handle_reset con 0x556dccbe6400 session 0x556dcd6f1c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 443 ms_handle_reset con 0x556dccbf1400 session 0x556dcd35ef00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177479680 unmapped: 55951360 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 443 ms_handle_reset con 0x556dccbf1400 session 0x556dca904780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 443 ms_handle_reset con 0x556dcacd4400 session 0x556dcd3661e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 443 ms_handle_reset con 0x556dccb29000 session 0x556dcaca94a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177520640 unmapped: 55910400 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013335 data_alloc: 218103808 data_used: 1777664
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f73af000/0x0/0x4ffc00000, data 0xf6c276/0x11ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 177520640 unmapped: 55910400 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 443 ms_handle_reset con 0x556dcd379000 session 0x556dcf2f7680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178610176 unmapped: 54820864 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 443 handle_osd_map epochs [444,444], i have 443, src has [1,444]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178610176 unmapped: 54820864 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 444 ms_handle_reset con 0x556dcd364800 session 0x556dcd3670e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 444 handle_osd_map epochs [444,445], i have 444, src has [1,445]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 445 ms_handle_reset con 0x556dcacd4400 session 0x556dccf4e000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 445 ms_handle_reset con 0x556dcb941400 session 0x556dcad66d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178618368 unmapped: 54812672 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178618368 unmapped: 54812672 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3078083 data_alloc: 218103808 data_used: 9437184
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 445 heartbeat osd_stat(store_statfs(0x4f73a6000/0x0/0x4ffc00000, data 0xf6fa6c/0x11d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178618368 unmapped: 54812672 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 445 handle_osd_map epochs [445,446], i have 445, src has [1,446]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178634752 unmapped: 54796288 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f73a5000/0x0/0x4ffc00000, data 0xf715f7/0x11d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 446 ms_handle_reset con 0x556dccb29000 session 0x556dcd8374a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 446 handle_osd_map epochs [447,447], i have 446, src has [1,447]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178642944 unmapped: 54788096 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.263101578s of 10.627244949s, submitted: 69
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 448 ms_handle_reset con 0x556dccbf1400 session 0x556dcd21ed20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178642944 unmapped: 54788096 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f739f000/0x0/0x4ffc00000, data 0xf74c01/0x11dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 448 ms_handle_reset con 0x556dcd379000 session 0x556dccf4a000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178642944 unmapped: 54788096 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3083579 data_alloc: 218103808 data_used: 9437184
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178642944 unmapped: 54788096 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179339264 unmapped: 54091776 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f7085000/0x0/0x4ffc00000, data 0x1291bf1/0x14f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 182738944 unmapped: 50692096 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 182788096 unmapped: 50642944 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f6b02000/0x0/0x4ffc00000, data 0x1814bf1/0x1a7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 448 ms_handle_reset con 0x556dcacd4400 session 0x556dcd837680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 182788096 unmapped: 50642944 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156175 data_alloc: 218103808 data_used: 9891840
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 182788096 unmapped: 50642944 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f6af9000/0x0/0x4ffc00000, data 0x181ac63/0x1a84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 448 ms_handle_reset con 0x556dccb29000 session 0x556dcacd7e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 448 handle_osd_map epochs [449,449], i have 448, src has [1,449]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 449 ms_handle_reset con 0x556dccbf1400 session 0x556dcc6eb860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 182796288 unmapped: 50634752 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 449 handle_osd_map epochs [449,450], i have 449, src has [1,450]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 450 ms_handle_reset con 0x556dcd379000 session 0x556dcb9c6b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 450 ms_handle_reset con 0x556dcb941400 session 0x556dd04470e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 182796288 unmapped: 50634752 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 450 ms_handle_reset con 0x556dcacd4400 session 0x556dd04463c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 182796288 unmapped: 50634752 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 182796288 unmapped: 50634752 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3168863 data_alloc: 218103808 data_used: 9908224
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 450 handle_osd_map epochs [451,451], i have 450, src has [1,451]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.207205772s of 12.084877968s, submitted: 105
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 451 ms_handle_reset con 0x556dccbf1400 session 0x556dcae10780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 451 heartbeat osd_stat(store_statfs(0x4f6aef000/0x0/0x4ffc00000, data 0x181e52b/0x1a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 182820864 unmapped: 50610176 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 451 handle_osd_map epochs [452,452], i have 451, src has [1,452]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 452 ms_handle_reset con 0x556dccb29000 session 0x556dcd248b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 182829056 unmapped: 50601984 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 452 handle_osd_map epochs [453,453], i have 452, src has [1,453]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 453 ms_handle_reset con 0x556dcd379000 session 0x556dcd249c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 182829056 unmapped: 50601984 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 453 ms_handle_reset con 0x556dcf43a800 session 0x556dcd68be00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 182829056 unmapped: 50601984 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 182829056 unmapped: 50601984 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3176905 data_alloc: 218103808 data_used: 9916416
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 453 ms_handle_reset con 0x556dccb29000 session 0x556dcaccba40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 182837248 unmapped: 50593792 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 453 heartbeat osd_stat(store_statfs(0x4f6ae8000/0x0/0x4ffc00000, data 0x18237f6/0x1a96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 453 heartbeat osd_stat(store_statfs(0x4f6ae8000/0x0/0x4ffc00000, data 0x18237f6/0x1a96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 182837248 unmapped: 50593792 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 453 ms_handle_reset con 0x556dcd379000 session 0x556dccf4f4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 453 handle_osd_map epochs [454,454], i have 453, src has [1,454]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 454 ms_handle_reset con 0x556dcacd4400 session 0x556dcad62000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 454 ms_handle_reset con 0x556dcf43a800 session 0x556dccf4d4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 182837248 unmapped: 50593792 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 454 handle_osd_map epochs [455,455], i have 454, src has [1,455]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 455 ms_handle_reset con 0x556dccc00400 session 0x556dcd68af00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 182845440 unmapped: 50585600 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 455 ms_handle_reset con 0x556dccbe6400 session 0x556dcd8474a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 455 ms_handle_reset con 0x556dccbe7c00 session 0x556dcd8370e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 455 ms_handle_reset con 0x556dccbf1400 session 0x556dcd847a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178241536 unmapped: 55189504 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3007327 data_alloc: 218103808 data_used: 1822720
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 455 ms_handle_reset con 0x556dcacd4400 session 0x556dcabe3680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 455 handle_osd_map epochs [456,456], i have 455, src has [1,456]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.251274109s of 10.175974846s, submitted: 111
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 456 ms_handle_reset con 0x556dccb29000 session 0x556dccb7a3c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178249728 unmapped: 55181312 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 456 ms_handle_reset con 0x556dcacd4400 session 0x556dccf4b680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178282496 unmapped: 55148544 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 456 handle_osd_map epochs [456,457], i have 456, src has [1,457]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 457 heartbeat osd_stat(store_statfs(0x4f7be4000/0x0/0x4ffc00000, data 0x722b58/0x99a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178282496 unmapped: 55148544 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 457 handle_osd_map epochs [458,458], i have 457, src has [1,458]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 458 ms_handle_reset con 0x556dccbe6400 session 0x556dcd366d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179347456 unmapped: 54083584 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 458 ms_handle_reset con 0x556dccbe7c00 session 0x556dcf2f6f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179347456 unmapped: 54083584 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013253 data_alloc: 218103808 data_used: 1822720
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 458 ms_handle_reset con 0x556dccbf1400 session 0x556dcac443c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 458 handle_osd_map epochs [459,459], i have 458, src has [1,459]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 459 ms_handle_reset con 0x556dcd379000 session 0x556dcd70d0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179355648 unmapped: 54075392 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 459 ms_handle_reset con 0x556dcd379000 session 0x556dcc6ea1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 459 ms_handle_reset con 0x556dcacd4400 session 0x556dcd8472c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179363840 unmapped: 54067200 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 459 heartbeat osd_stat(store_statfs(0x4f7bde000/0x0/0x4ffc00000, data 0x727cce/0x99f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 459 handle_osd_map epochs [459,460], i have 459, src has [1,460]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 460 ms_handle_reset con 0x556dccbe6400 session 0x556dccf4be00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179363840 unmapped: 54067200 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179363840 unmapped: 54067200 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 460 handle_osd_map epochs [461,461], i have 460, src has [1,461]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 461 ms_handle_reset con 0x556dccbe7c00 session 0x556dcc65ad20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179363840 unmapped: 54067200 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3018755 data_alloc: 218103808 data_used: 1826816
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 461 ms_handle_reset con 0x556dccbf1400 session 0x556dd0446b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 461 heartbeat osd_stat(store_statfs(0x4f77ca000/0x0/0x4ffc00000, data 0x72b382/0x9a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179363840 unmapped: 54067200 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179363840 unmapped: 54067200 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 461 handle_osd_map epochs [462,462], i have 461, src has [1,462]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.034558296s of 12.110437393s, submitted: 129
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179363840 unmapped: 54067200 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179363840 unmapped: 54067200 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f77c9000/0x0/0x4ffc00000, data 0x72ce55/0x9a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179363840 unmapped: 54067200 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3020185 data_alloc: 218103808 data_used: 1822720
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 462 ms_handle_reset con 0x556dccbf1400 session 0x556dcad69680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 462 ms_handle_reset con 0x556dcacd4400 session 0x556dccb7b4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179372032 unmapped: 54059008 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179372032 unmapped: 54059008 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 462 handle_osd_map epochs [463,463], i have 462, src has [1,463]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 463 ms_handle_reset con 0x556dccbe6400 session 0x556dccf4dc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 463 heartbeat osd_stat(store_statfs(0x4f77c9000/0x0/0x4ffc00000, data 0x72ce55/0x9a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179372032 unmapped: 54059008 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179372032 unmapped: 54059008 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 463 handle_osd_map epochs [463,464], i have 463, src has [1,464]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 464 ms_handle_reset con 0x556dccbe7c00 session 0x556dcd21ef00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179372032 unmapped: 54059008 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3027949 data_alloc: 218103808 data_used: 1822720
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 464 ms_handle_reset con 0x556dcd379000 session 0x556dcba332c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 464 heartbeat osd_stat(store_statfs(0x4f77c2000/0x0/0x4ffc00000, data 0x7304cf/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 464 handle_osd_map epochs [464,465], i have 464, src has [1,465]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 465 ms_handle_reset con 0x556dcacd4400 session 0x556dcc10c1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179380224 unmapped: 54050816 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 465 ms_handle_reset con 0x556dccbe6400 session 0x556dcb6825a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 465 ms_handle_reset con 0x556dccbe7c00 session 0x556dcd6f1e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 179380224 unmapped: 54050816 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.023361206s of 10.122493744s, submitted: 75
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178520064 unmapped: 54910976 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 465 ms_handle_reset con 0x556dcf43a800 session 0x556dcb682b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178520064 unmapped: 54910976 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 465 heartbeat osd_stat(store_statfs(0x4f7780000/0x0/0x4ffc00000, data 0x7720a1/0x9ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 465 ms_handle_reset con 0x556dccbf1400 session 0x556dccb7af00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178520064 unmapped: 54910976 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035422 data_alloc: 218103808 data_used: 1826816
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 465 handle_osd_map epochs [466,466], i have 465, src has [1,466]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 466 ms_handle_reset con 0x556dceaea800 session 0x556dccf4e1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178528256 unmapped: 54902784 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 466 ms_handle_reset con 0x556dcacd4400 session 0x556dcd35fe00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178536448 unmapped: 54894592 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 466 handle_osd_map epochs [467,467], i have 466, src has [1,467]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 467 ms_handle_reset con 0x556dccbe6400 session 0x556dcf2f7a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178536448 unmapped: 54894592 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 467 heartbeat osd_stat(store_statfs(0x4f7778000/0x0/0x4ffc00000, data 0x77581b/0x9f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 467 ms_handle_reset con 0x556dccbe7c00 session 0x556dcd21e1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 467 ms_handle_reset con 0x556dcf43a800 session 0x556dcb682f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178552832 unmapped: 54878208 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 467 ms_handle_reset con 0x556dcf43a800 session 0x556dcc65ad20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178552832 unmapped: 54878208 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3041147 data_alloc: 218103808 data_used: 1847296
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178552832 unmapped: 54878208 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 467 handle_osd_map epochs [468,468], i have 467, src has [1,468]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 468 ms_handle_reset con 0x556dcacd4400 session 0x556dccf4be00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178552832 unmapped: 54878208 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 468 handle_osd_map epochs [469,469], i have 468, src has [1,469]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 469 ms_handle_reset con 0x556dccbe6400 session 0x556dcc6ea1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178552832 unmapped: 54878208 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 469 ms_handle_reset con 0x556dccbe7c00 session 0x556dcac443c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 469 heartbeat osd_stat(store_statfs(0x4f7772000/0x0/0x4ffc00000, data 0x778e43/0x9fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 469 handle_osd_map epochs [470,470], i have 469, src has [1,470]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 469 handle_osd_map epochs [470,470], i have 470, src has [1,470]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.347959518s of 10.564125061s, submitted: 70
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178561024 unmapped: 54870016 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 470 ms_handle_reset con 0x556dceaea800 session 0x556dcf2f6f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 470 ms_handle_reset con 0x556dceaea800 session 0x556dcd366d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178561024 unmapped: 54870016 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3052613 data_alloc: 218103808 data_used: 1855488
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 470 heartbeat osd_stat(store_statfs(0x4f776e000/0x0/0x4ffc00000, data 0x77aa24/0x9fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178561024 unmapped: 54870016 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 470 ms_handle_reset con 0x556dcacd4400 session 0x556dcabe3680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 470 ms_handle_reset con 0x556dccbe6400 session 0x556dcd8370e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 470 ms_handle_reset con 0x556dccbe7c00 session 0x556dcd8474a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178511872 unmapped: 54919168 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 470 handle_osd_map epochs [471,471], i have 470, src has [1,471]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 471 ms_handle_reset con 0x556dcf43a800 session 0x556dccf4f4a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 471 ms_handle_reset con 0x556dcf43a800 session 0x556dcd248b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178528256 unmapped: 54902784 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 471 handle_osd_map epochs [472,472], i have 471, src has [1,472]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 472 ms_handle_reset con 0x556dcacd4400 session 0x556dcd846b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178544640 unmapped: 54886400 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 472 ms_handle_reset con 0x556dccbe6400 session 0x556dcabe2960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178544640 unmapped: 54886400 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3308923 data_alloc: 218103808 data_used: 1859584
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 472 handle_osd_map epochs [473,473], i have 472, src has [1,473]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 473 ms_handle_reset con 0x556dccbe7c00 session 0x556dcd21f0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 473 ms_handle_reset con 0x556dceaea800 session 0x556dcacd7e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 473 ms_handle_reset con 0x556dceaea800 session 0x556dcd6f0000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178561024 unmapped: 54870016 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f53a8000/0x0/0x4ffc00000, data 0x2b3fbba/0x2dc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178561024 unmapped: 54870016 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 473 handle_osd_map epochs [473,474], i have 473, src has [1,474]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 474 ms_handle_reset con 0x556dcacd4400 session 0x556dccf4e960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178561024 unmapped: 54870016 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.310986519s of 10.185679436s, submitted: 112
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 474 heartbeat osd_stat(store_statfs(0x4f53a4000/0x0/0x4ffc00000, data 0x2b41639/0x2dc8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178561024 unmapped: 54870016 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 474 handle_osd_map epochs [475,475], i have 474, src has [1,475]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 475 ms_handle_reset con 0x556dccbe6400 session 0x556dcd26a000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 475 heartbeat osd_stat(store_statfs(0x4f53a2000/0x0/0x4ffc00000, data 0x2b431d2/0x2dcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178569216 unmapped: 54861824 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3318175 data_alloc: 218103808 data_used: 1867776
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178569216 unmapped: 54861824 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 475 ms_handle_reset con 0x556dccbe7c00 session 0x556dcaccab40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 475 handle_osd_map epochs [476,476], i have 475, src has [1,476]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 476 ms_handle_reset con 0x556dcf43a800 session 0x556dcae0fe00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178569216 unmapped: 54861824 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 476 ms_handle_reset con 0x556dcf43a800 session 0x556dcb57dc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 476 ms_handle_reset con 0x556dcacd4400 session 0x556dcd3670e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178585600 unmapped: 54845440 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 476 handle_osd_map epochs [477,477], i have 476, src has [1,477]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 477 ms_handle_reset con 0x556dccbe6400 session 0x556dcc65a000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 477 ms_handle_reset con 0x556dccbe7c00 session 0x556dcbab1c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178601984 unmapped: 54829056 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 477 ms_handle_reset con 0x556dceaea800 session 0x556dccf4b680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 477 heartbeat osd_stat(store_statfs(0x4f539b000/0x0/0x4ffc00000, data 0x2b46832/0x2dd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178601984 unmapped: 54829056 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324683 data_alloc: 218103808 data_used: 1880064
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 477 ms_handle_reset con 0x556dceaea800 session 0x556dcd70d0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 477 ms_handle_reset con 0x556dcacd4400 session 0x556dcc10d860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178610176 unmapped: 54820864 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 477 handle_osd_map epochs [477,478], i have 477, src has [1,478]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 478 ms_handle_reset con 0x556dccbe6400 session 0x556dccb7a1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178610176 unmapped: 54820864 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 478 handle_osd_map epochs [479,479], i have 478, src has [1,479]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 479 ms_handle_reset con 0x556dcf0e0800 session 0x556dca095a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178610176 unmapped: 54820864 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 479 ms_handle_reset con 0x556dccbeec00 session 0x556dccb7ab40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 479 heartbeat osd_stat(store_statfs(0x4f5394000/0x0/0x4ffc00000, data 0x2b49e61/0x2dd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 479 handle_osd_map epochs [480,480], i have 479, src has [1,480]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.003199577s of 10.218853951s, submitted: 97
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 178610176 unmapped: 54820864 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 480 ms_handle_reset con 0x556dcacd4400 session 0x556dca88fa40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 480 ms_handle_reset con 0x556dccbe6400 session 0x556dcd68a1e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 180068352 unmapped: 53362688 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370517 data_alloc: 218103808 data_used: 6246400
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 183328768 unmapped: 50102272 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 480 ms_handle_reset con 0x556dceaea800 session 0x556dcd7672c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f5390000/0x0/0x4ffc00000, data 0x2b4ba32/0x2ddc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 183328768 unmapped: 50102272 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 480 handle_osd_map epochs [481,481], i have 480, src has [1,481]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 481 ms_handle_reset con 0x556dcf0e0800 session 0x556dcd1a25a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 183328768 unmapped: 50102272 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 183328768 unmapped: 50102272 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 481 ms_handle_reset con 0x556dcd376c00 session 0x556dcd68bc20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 481 ms_handle_reset con 0x556dcacd4400 session 0x556dcb50d680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 183328768 unmapped: 50102272 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3422739 data_alloc: 234881024 data_used: 13361152
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 481 handle_osd_map epochs [482,482], i have 481, src has [1,482]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 482 ms_handle_reset con 0x556dccbe6400 session 0x556dcd1a2f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 482 ms_handle_reset con 0x556dceaea800 session 0x556dcd1a2d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 183328768 unmapped: 50102272 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 482 heartbeat osd_stat(store_statfs(0x4f538b000/0x0/0x4ffc00000, data 0x2b4f1ac/0x2de2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 482 ms_handle_reset con 0x556dcf0e0800 session 0x556dcbab1a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 183336960 unmapped: 50094080 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 183336960 unmapped: 50094080 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 482 handle_osd_map epochs [483,483], i have 482, src has [1,483]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 483 handle_osd_map epochs [483,484], i have 483, src has [1,484]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 484 ms_handle_reset con 0x556dcd334800 session 0x556dcc8990e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 183345152 unmapped: 50085888 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 484 ms_handle_reset con 0x556dcd334800 session 0x556dcd68ad20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 183345152 unmapped: 50085888 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432333 data_alloc: 234881024 data_used: 13361152
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f5384000/0x0/0x4ffc00000, data 0x2b527c4/0x2de8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 484 ms_handle_reset con 0x556dcacd4400 session 0x556dcc899e00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 484 handle_osd_map epochs [485,485], i have 484, src has [1,485]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.798059464s of 11.922548294s, submitted: 44
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 191340544 unmapped: 42090496 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 485 ms_handle_reset con 0x556dccbe6400 session 0x556dcaccba40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 191643648 unmapped: 41787392 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 485 ms_handle_reset con 0x556dceaea800 session 0x556dcb682b40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 485 handle_osd_map epochs [486,486], i have 485, src has [1,486]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 194011136 unmapped: 39419904 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 486 ms_handle_reset con 0x556dcf0e0800 session 0x556dcb50cd20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 194289664 unmapped: 39141376 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 486 ms_handle_reset con 0x556dca26d400 session 0x556dccf4c780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 486 handle_osd_map epochs [486,487], i have 486, src has [1,487]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 487 ms_handle_reset con 0x556dcf0e0800 session 0x556dcad63a40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 191070208 unmapped: 42360832 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3482532 data_alloc: 234881024 data_used: 13508608
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 487 heartbeat osd_stat(store_statfs(0x4f3b36000/0x0/0x4ffc00000, data 0x2fade34/0x3247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 487 ms_handle_reset con 0x556dcb58c000 session 0x556dcae0c780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 487 ms_handle_reset con 0x556dcb941800 session 0x556dca0952c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 191070208 unmapped: 42360832 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 487 ms_handle_reset con 0x556dcb941800 session 0x556dccf4cd20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 487 handle_osd_map epochs [488,488], i have 487, src has [1,488]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 488 ms_handle_reset con 0x556dccbf1000 session 0x556dd04465a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192135168 unmapped: 41295872 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 488 ms_handle_reset con 0x556dcb65e000 session 0x556dcacd7860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 488 ms_handle_reset con 0x556dccbf1c00 session 0x556dcd3670e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192143360 unmapped: 41287680 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192274432 unmapped: 41156608 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 488 ms_handle_reset con 0x556dccbe7c00 session 0x556dce31e5a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 488 ms_handle_reset con 0x556dcf43a800 session 0x556dcd35ef00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192274432 unmapped: 41156608 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3482582 data_alloc: 234881024 data_used: 13512704
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 488 heartbeat osd_stat(store_statfs(0x4f3d82000/0x0/0x4ffc00000, data 0x2fb159e/0x324c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [0,0,1])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 488 ms_handle_reset con 0x556dcb65e000 session 0x556dcc10d2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192299008 unmapped: 41132032 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192299008 unmapped: 41132032 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 488 handle_osd_map epochs [488,489], i have 488, src has [1,489]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.182831764s of 12.046422958s, submitted: 200
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192299008 unmapped: 41132032 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192299008 unmapped: 41132032 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 489 handle_osd_map epochs [490,490], i have 489, src has [1,490]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 490 ms_handle_reset con 0x556dcb941800 session 0x556dcae105a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 193355776 unmapped: 40075264 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3486485 data_alloc: 234881024 data_used: 13516800
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 490 ms_handle_reset con 0x556dccbf1000 session 0x556dcd846f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 490 ms_handle_reset con 0x556dcb65e000 session 0x556dca904960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192307200 unmapped: 41123840 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f3d7c000/0x0/0x4ffc00000, data 0x2fb4b93/0x3251000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 490 handle_osd_map epochs [490,491], i have 490, src has [1,491]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 491 ms_handle_reset con 0x556dcb941800 session 0x556dca8a1c20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192315392 unmapped: 41115648 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192315392 unmapped: 41115648 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 491 ms_handle_reset con 0x556dccbe7c00 session 0x556dcd3fc960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 491 handle_osd_map epochs [491,492], i have 491, src has [1,492]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 492 ms_handle_reset con 0x556dccbf1000 session 0x556dccb7a780
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192339968 unmapped: 41091072 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 492 ms_handle_reset con 0x556dcf43a800 session 0x556dcabe3680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 492 ms_handle_reset con 0x556dcb65e000 session 0x556dcacca960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192348160 unmapped: 41082880 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494030 data_alloc: 234881024 data_used: 13533184
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192348160 unmapped: 41082880 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 492 ms_handle_reset con 0x556dcb941800 session 0x556dcc65ad20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 492 heartbeat osd_stat(store_statfs(0x4f3d78000/0x0/0x4ffc00000, data 0x2fb82fd/0x3256000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192356352 unmapped: 41074688 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 492 handle_osd_map epochs [493,493], i have 492, src has [1,493]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.855953217s of 10.094451904s, submitted: 69
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192356352 unmapped: 41074688 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 493 handle_osd_map epochs [494,494], i have 493, src has [1,494]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 494 ms_handle_reset con 0x556dccbe7c00 session 0x556dcd6f0000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192167936 unmapped: 41263104 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 494 ms_handle_reset con 0x556dccbf1000 session 0x556dccf4d680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 494 handle_osd_map epochs [495,495], i have 494, src has [1,495]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 495 ms_handle_reset con 0x556dccbf1c00 session 0x556dcae0f0e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192167936 unmapped: 41263104 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3508805 data_alloc: 234881024 data_used: 13545472
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 495 heartbeat osd_stat(store_statfs(0x4f3d6e000/0x0/0x4ffc00000, data 0x2fbd506/0x3260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192167936 unmapped: 41263104 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 495 heartbeat osd_stat(store_statfs(0x4f3d6e000/0x0/0x4ffc00000, data 0x2fbd506/0x3260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 495 ms_handle_reset con 0x556dcb65e000 session 0x556dcd26c960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 495 ms_handle_reset con 0x556dcb941800 session 0x556dcc746f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192176128 unmapped: 41254912 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 495 handle_osd_map epochs [496,496], i have 495, src has [1,496]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 496 ms_handle_reset con 0x556dccbe7c00 session 0x556dcd847860
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 496 ms_handle_reset con 0x556dccbf1000 session 0x556dcc8b4960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192184320 unmapped: 41246720 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192184320 unmapped: 41246720 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 496 handle_osd_map epochs [496,497], i have 496, src has [1,497]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 497 ms_handle_reset con 0x556dce9d3400 session 0x556dcd21fe00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192192512 unmapped: 41238528 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517991 data_alloc: 234881024 data_used: 13570048
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 497 ms_handle_reset con 0x556dce9d3400 session 0x556dd04472c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 497 handle_osd_map epochs [498,498], i have 497, src has [1,498]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 498 ms_handle_reset con 0x556dcb65e000 session 0x556dcd6f0d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192217088 unmapped: 41213952 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 498 heartbeat osd_stat(store_statfs(0x4f3d65000/0x0/0x4ffc00000, data 0x2fc2712/0x3269000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 498 ms_handle_reset con 0x556dcb941800 session 0x556dcd68b2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192241664 unmapped: 41189376 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 498 ms_handle_reset con 0x556dccbe7c00 session 0x556dcd7674a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 498 handle_osd_map epochs [499,499], i have 498, src has [1,499]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192258048 unmapped: 41172992 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.015634537s of 10.546721458s, submitted: 175
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 499 ms_handle_reset con 0x556dccbf1000 session 0x556dcbab0000
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192258048 unmapped: 41172992 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 499 handle_osd_map epochs [500,500], i have 499, src has [1,500]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 500 ms_handle_reset con 0x556dccbf1000 session 0x556dcae0de00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192266240 unmapped: 41164800 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528938 data_alloc: 234881024 data_used: 13602816
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 500 ms_handle_reset con 0x556dcb65e000 session 0x556dcd3fcb40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192266240 unmapped: 41164800 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 500 heartbeat osd_stat(store_statfs(0x4f3d5d000/0x0/0x4ffc00000, data 0x2fc5d2a/0x326f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 500 handle_osd_map epochs [501,501], i have 500, src has [1,501]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192266240 unmapped: 41164800 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 501 ms_handle_reset con 0x556dcb941800 session 0x556dce31fa40
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 501 heartbeat osd_stat(store_statfs(0x4f3d59000/0x0/0x4ffc00000, data 0x2fc790b/0x3272000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192266240 unmapped: 41164800 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 501 ms_handle_reset con 0x556dccbe7c00 session 0x556dcd6f10e0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 501 ms_handle_reset con 0x556dce9d3400 session 0x556dcabe2f00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192274432 unmapped: 41156608 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 501 ms_handle_reset con 0x556dce9d3400 session 0x556dcc6eb2c0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192274432 unmapped: 41156608 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532040 data_alloc: 234881024 data_used: 13606912
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 501 heartbeat osd_stat(store_statfs(0x4f3d5c000/0x0/0x4ffc00000, data 0x2fc790b/0x3272000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 501 handle_osd_map epochs [502,502], i have 501, src has [1,502]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192282624 unmapped: 41148416 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 502 ms_handle_reset con 0x556dcb65e000 session 0x556dcad68d20
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192282624 unmapped: 41148416 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 502 ms_handle_reset con 0x556dcb941800 session 0x556dcd68a960
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 502 handle_osd_map epochs [502,503], i have 502, src has [1,503]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 503 ms_handle_reset con 0x556dccbe7c00 session 0x556dcd3fcf00
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 192315392 unmapped: 41115648 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.842770576s of 10.483991623s, submitted: 103
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 503 ms_handle_reset con 0x556dccbf1000 session 0x556dcc747680
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 503 ms_handle_reset con 0x556dccbf1000 session 0x556dcac445a0
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 194740224 unmapped: 38690816 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 194928640 unmapped: 38502400 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3604818 data_alloc: 234881024 data_used: 19439616
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 503 heartbeat osd_stat(store_statfs(0x4f3956000/0x0/0x4ffc00000, data 0x33cb0cd/0x3678000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 194928640 unmapped: 38502400 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 194928640 unmapped: 38502400 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 194928640 unmapped: 38502400 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 503 handle_osd_map epochs [504,504], i have 503, src has [1,504]
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 194961408 unmapped: 38469632 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 504 heartbeat osd_stat(store_statfs(0x4f3952000/0x0/0x4ffc00000, data 0x33ccb4c/0x367b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 194961408 unmapped: 38469632 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: bluestore.MempoolThread(0x556dc938fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3608992 data_alloc: 234881024 data_used: 19447808
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: osd.1 504 heartbeat osd_stat(store_statfs(0x4f3952000/0x0/0x4ffc00000, data 0x33ccb4c/0x367b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 194961408 unmapped: 38469632 heap: 233431040 old mem: 2845415832 new mem: 2845415832
Dec  2 06:38:47 np0005542249 ceph-osd[89966]: prioritycache tune_memory target: 4294967296 mapped: 194961408 unmapped: 38469632 heap: 233431040 old mem: 2845415832 new mem: 2845415832
